[Binary content removed: tar archive containing var/home/core/zuul-output/logs/kubelet.log.gz (gzip-compressed kubelet log); compressed data is not recoverable as text.]
%]cj /L_o3g p{K\ :MSeQ]{+#<ܧ(pkB@e 7X7&ꯃb`}d ĊIg)gDȍa͕%$rM72$-m:D4 +%'"RD/SneHaȐ5[Xز<;VgD^Ԥ"lJR TѪG]j%ߞ붫W`T:}_^~7`m[I퍊|-}Zv?6: |]Frf{D?|OGy $z1:H҂VX9j?c\2w^0.^ʛx~o6fnyl3%[yHJCʴM̺r-JqzsiP= 97/c4G#g_~O7Z_o7?0)۟ҍK$3V2.ַvR88JWabԾG gU68f{aބwI=:\vch4N~>]Aփ{T&(ƶ\ɰWj;×]Ͷˈ1fe67m.i>a(6/>_ 5on˭WꫮnrUޫaFUg%,Jg^1=Iԧ2Q[6O 'L\qRx8x8Ofir{wO',MU} ~J;Br 37T8/ߜ&K/ڃz}AxϿmOoV{vWGgKr4${M“$Tm'ucx& PKTu{pкxi!>{mʆqGIC6ּGzi{t2 k]TZdE^+wX 4AH+.'X, PQm0LvG;;Ns7.sI\EDilCP/FO~-iz ,]ܸ[>O?y(O_Q䒖wث{l;k|o&-7+8nR*%3j]eԨ.~&8o ], |}>.Wl8I<5;Ψ,JnOM2uĻ\8X ]H J͵O?ur#d|8d|}Ꝟ]akL ne298Nϊ5`VkqF_H{~$. 6[ V,KcIGߗ쇦4#=cuͮb__UmP}ʨ$&?D`Hϧ5%V}?&b=x ~8u$@^x;T^ur *e?rjV !ݬ ѡ]>f"Z^ȹ,wVgG=veHh9οޖEW *d6,[N1H\tF_ D,?1QGy=2Ȉ죋N ~ʃC튉=ͺ$~b|+ X&)c qee,D "i6)"ETd9L؝ԃFJ{_NZIܼ%Ds\M?/ׅc nv8eL~VW{4O?ß>XLtс@qrSNESL(ZU|Zx\x %*!a>5Px0Yh0 <ΘfO/ݣG]mgfşrS%:#`-I<YM[O? ͝FFnuyeY{Csc6%f3Qm=mxR-(ehakfH: $I8h:oB4S(XD;~lTm>S$WY*qiKLki>_Է\z2G'B Щ0K_a.OE:sG4P٢ 0&pENP| 5evsvyp:p灅XI ,АkH&?&Pd?)@v $|离]2Qp#A iwo3{0SlT]^];[fef̆y9񬳅ͅ +řboNo&ʱ0 cWN\JuQKjj<Շ;U,m9.t}&ZY,wExlݽc8%=T >_ӣW)}po SmbkfSt.Յ"x=jIޛ|E+#^2޺2^&bU6ǙADŽ*؉$Zr*7t@yN(jO3D' F|2'lF=9$^~_4.wtg:0Ǚp;4EMJS%FUmQ!Tրee<4OV{X3LR}&JCD9*2Je$J V.VX"1r@9L8 e6!`8:YBݭ-&(`Ɓ ^XD,FxDZ76T7~9.S0$7CycfI({iF|dA<ѧJ7mivŋo9!cl=_0ZlR!ե d>NЛr7f!lU!(ӗ0֋r9'2Vbu>x^jR_zb=9jnؑ)>xCԆYlǬ4y@ަH]CTaѦQ SͧԆ Ċ}>X*׊<ZrLG>,_Opkp^8]yqtu3xZo Mz٢Zzeg)L02<&*s4xUx]!|PfrqqNYxµVTBވ&Av `2ڹWG巿/SL_W~ݽΛ4jNn;|UŬ jN$ܻ[>I妫Ww>JԏH^ }Q@oݺb%2 ra(X1ƺ JB*X:Kyk!<,z9:[MarF`F.'8(b>fˑ%8j/9хa,(հQtX3נ棢 A5!*={uQ:H|A jYxD6iDF] u=X\i^i XK%Q)XE9cHZZb]i! D(RpmZi `VSRo]l}g/տ/6XO7bZlmhw ޥD<5?gO|"5@j;_ݔ bFk21E]j4EF8Ua!#Yn@%صHcWW"[V4Bi#L)DSeTOK dxF5c+a<Қ3G)ʥ=38s{ x{zSs\ڏFFB46n=E(AWŗ6Ǿvʡ1QgXyٺ2131yͼlc21[ZliyyWz3/Ӛ#$!QVk/KX_rrc/寿8VN_x9djʕ2s8-$q Bid -TiJ $OaDHZI!T_imt6zXtObk V!9wa` 1#*0+Z``@ X  X(#k&Nj'@"đ#B)D1apxQɺs"̡WĀ,&/SzkĎNML/iX"&yHsGD1sW?r9{⦡fho+`Ua5g$ X*f1ki$I_V'mAH\vFYtG$x062L3YU6n 5T3H(u'In$c@ x"eZKJ<* EQDF;Ǖ9!mUIkƙ"# wN&Մ%,3%\4dQ'5r6J'1lאMDdMTFS\MP'E&yF,@ R'mͷ@O%gC P.yE@C@biB 8 vPP+LI3@py; 2VJ1FQ8bʐ4ZR bѲF6' j EK\$gE #PMkK2N<[q7kBHƼΐE. 
ZS@#$i8AxgA(AEQq4>"2LhEΆuf|OffGgK^˽5ߧx*yphÄk (wBS'湣]D֤YB|Be.r@NKOv}%b`ZAN7)MWE) N$c htFQ" `Y4RVcPȸg0::GA'uN!QR I{FU@'_Ѭ|xa0r"s9w0tZi#A,Xl࿨#WA`yHx%$x[6Hϼ<[Ng>#*(m]Z0*3c R TqsL*Lx,}2)7~r?!*Ejd^-\:&_xSk'9S>U/ \ *N֤Q٘П{iݫ#Y~|n|w=9o.|f1D=#}Ӡv^-ws;$ G}@M!KFRdHB.r0l0J-fY>F*3g1[pp;ne,_:*#G>Q>agzLe$,}2u QeqLhoT:T!O?oBONOߝo~ah8thnC3S7ɸG^2cTl?Ln%@xa_\HvZ;q?AQ/M7- _QT.cB bʙYDCy%˸+hiuj*sN35F T9jC"ViF2WT6ZBuP9Im=v3K{_(ұ= K$nM>QKCbb xpݢN+:{O|#Y9\.i7Kj&kjk땅;Ct{rup>@Prok"<"x8m72z\H>is?lF~cBbkF\ Oep9C-O>saq#|Dμd*s d{b#AAͭCJ)ur|{\r.Z;¡H,ZV&>Hыb!lUy| #=*hZ*2(h^8Ӓ0>h]iqhb;%#!*1'9)P 6PV%Y2@rde(9` >,U] Ջ#q.]ȫ!ˊOk6pN3xWuL\ZMD[ VC,%^&rQبUD^ܓ :h]5Q|^> ]/ZlwރĹy5䝋Y{O`tvq%cBq5}d+<>^L =8mL0c. k6$S#ģ\0*g8PB\">k>_rn]ӠOۭv .9a*[ji5xXPi}ڿQ`tY'7PN 1 DVf)!3ęx=b,9A]`mM6<̒x7/ oJa|*°"[4x͒SN.G.+I Ϻ%ruc╅be7,/7LKXzSՆɦtQO2m_O 6"[m,Mŕ˼=>i!N-Y[l'URy[N26$mkr4Arc>WPԘ{frMlI p54"f6 LRsV%ԴZ޳tmfSbʚMLYuݮhj.;B9cht諙 MՆnP)Uv+ޖ'-F߀\ŷerh=6iܬN=sL.s|7vMKtltQlpFZWpOq"ݱ5dgGiKZ2qs@ ]6'ZNw|{4%-d= ClD8D\ /E!R"1DҀJhh;[#g=rл vOocb%'S阂ѽSBɥ 9IДĉ4)%Z*wcIH e߹Z[mGe@p9}x< h zF j h>̂칣ܢ!aF'e]jlXkRlZ޺93R#Az -$ua EOD1BہUx]f^pL Kf&C4Oic1Aa{~A3 M&-]Awul/7y'ѕNE/\mFˋQcxB_Fۥv9eT:0A(S#ey* #g}*Z$sX &yG@U-Ц`"zgi7s=4Õnhs~/U~_O;?5ۻng&aY+᧿\^Bö,ػEk<#j<óW/(Bƅ6xf$)8P_L!s|bU.V_X5i: A'OLggl t,Y `h5D8>ة iZL9ߥ..k|ڙ %CRxe㽏j! 
1X0G EC+h'ol~ y*^$j+ie=[W\op̎1q~]KF4qwσhXDOKWrnw?{ %i]u%BlJٹe7%7>X/_og[T&o ,ZGp Xq6y>LGfL A ĵ ú3x4eݳ{ˀ 2z8^9e+ y)Li}\*oYJ%eI,8I2DuPs:НF\W{9P(!j!fPcp[*PH̋H]B$ʝ^iTkT=͇< '۫Qk߸*v}Azu9}Ko:udϪ SojR Z8k = r%iumL]q2Z_”GH~qPӪCuQnfejGkvUm]馉LlZt}^CoCrߑΉv^rY`&KɕDJzV}RT.K-fK%e[R}w6D,G=C(]pR*A qjkԒĀXX-EaNe.lu#K5/0VgEKu/ &PI&A ڨ<QJOuOUMc[5&GfPϦ_Qg҇osY{zQƧ9 {lcIzI~e~7^ G_\.m?糧 l⌡|EI3aW܍R\-kb!JhH9pVT.6pBF}paUաeEYٮ]@W2 tiiq.:X{&XWCgjX{R.c[ӖgZHBKv~Twu'"; p͘$!)Q_ ERn-p(-BbGl<@l.~hkx8W6Y(T'E9~\C4kR*DPk4$t\Aр-S8e55|q/+\4xS7sdkח߯ ,@f9<8~.|ٖ==,|LO*>b(M68 z۷F*??W?QA|зx]m*+Y28m tW݁h#Ͼ]wzvHq۪;"y`)Qfp\\t, +~OkfՕtPnWf/}V~AZ8{޴d@mk5Y](6mϬ6|n)l⎧$1^khM]K.8LQ*L^ J[`$B]L0+# {0PE F~HduʜF>ZI\,5=F-PϮY9s'ECI(EY}0H4ieޞr2h*iOJR KeM&:/w?뒲ɸWP31pDtb%:m}b-K0}4|Ah8LM$t٠-#Wۻ_OA ;ٷξ-v_ 7Mbi瞅lu^BVM4F8%2V(%ӑE;>{yr߯?b^^d`;tҲ\Sblof0cg]emٵ~[v.Z/sQH a^f`"Y֌'<V)Xu\:qGכ,.(wҿ"9y^,E7B 'w ^+I?_~~`|3hYDPj fw}sg\.+]R`vav AmA{U0?>zx[1e>jf'_C5:~%,y;s e}E?d=ԾbĦO#剭5'Φ{Goɪ>ʥ ų VVzܘ]˫"כsw j$^{Nqri[Y :::VKΒ"'L#`d6lA9 izpﶨԿE鲨d~0PwKz/v] Y="YBc#pT2 R4DfDmOLsP^<sAދ?.K9돯n>_/ҿ&@.kse꼒XH?nj}H {\hh]LH6SeIВ2%i%Ok]NK(QHa"X]q|,VEdqE G;C8=!p!,RQ9 Rq)HiXp脰')Bg@ʍ!)3,1i ;ıv%ar1@R/QL2@9:vڤ0f1mLb 1NklmG>ƂFeP% T`1jIE A)}r(CE%~uˢAiQojM7ZSo@LBF#֒k/eP+Ĺ*{hfbNEWQtu]RGQtu]Tu]Qu]EWQVEWGQtii#ll:۩vl:۩v:ԣ!?Ի+K[QC(YiQݴ JN%:ckE/B.O_Cn)|yg9 CT,/^AJrf-ubrvg*sJ\'6g6tb[xd8Кj6/a9vډn98Vtt"t=(+*ڧ#:bbG pWLjxn>ޖ}X*ʓT@7~xƥ-=+<-ooH9I#%B*D_nFWY|` ?=q[Ʌbڪ4 zgq7;mK(.%v\7ؚ㨞ߛi9=+[ژ%$[!0fEt)ak8W2Ia/RC=뎢lO%vv[NN0i KQ3썜=^fzi~] yZ>6ބiV<H[m˗?0 U Nӟޒ{1 '*joШݽgh{">|l{>=gucEIш¨ݼ{WE+[cBCV}X4X']4k`RZM.8.YT ^FAAeHYrcrYF\X-LD3ef{[y_`o&05Z=paxEբn[KX*,*7UnRn8!FY䱌%#,'W˜wPRhI蹂wA!7WYΆϩanN&V.d-:ce0l6lys]T<lw5Y sųAN,XU(Y(CU&Cô^}bJw$7l҉e@dOֶWdz~KЁfRdU˝[ WMNn[W8ތYAfh36hE'СsŽ9iv~u:z9ޓ.v<<6s~I/7,콣btNOxW,Y˅bG-%;B%c]Tu(†dO&uxSइ7 o HZ3>́E*ڔi(&"A%FrΓh엘2yic)FnƃlݼƝnfۮwaEA+B) zz]+NTdh{=X,RYaH״`2CXHdH5dJܜV̤s.;霟_a7Qa7T;Ԇ`w2h1 !Y Á,Dt3GBFHt0@ x Q;A.HId x˘E QrzX ҡgThusȾ 3Q!\ln+W^*-ͧq4qq-R 'S}zE\k/_Ż -F[ZѱHRI&-%zP>㢓&eiUr{-.UήK: RNC;*+^9I! 
dfH,YlrXv2NHˈ2d"$DˤAdelRBO9+{SZ9فYQ-ZrHArB R%}"%|ngςStMYbuU/z=}b7 P.{^D͞҉,CƓe"P+tT' yoy#&S)9)EX6V($98I2d$K/AWG/6z7#}lͯ FiMf'-dx1ZL،KNfs鯮dm m""DۧqjJɹ_o,rA· x8HpcX.ZdeQ G$Z8|RKI,C_џx܁MO#}qȡҡz-A?Ma nyfҴܥz]iۥPZ QN 54&lb"$c_{J@QI ochM <b;jrzœr>E6(&qc9!,O 0FftIEX+4i&ƒt- ,k2B!rp 3TITd}$d'i#Ӛ.|)z$Ik&yNi]mУZ1.ۘ2P(@YAjm鼭lnjf/֠?:5˭%0)fpSg5A*ŒtVx$H,50bͭV1'58" > wI*@P*`7uNHRn(fEwZIF0@b^LXT*<b:"& C$ڊEEi .qlFeL b=6ᴐ%*q̤6:.Im( RFOgaf,jpyTn%5,i5N. e _LT((y`{W7ԤרR,ѫ* 4Q?>ef$BVR2Cҳ g?"9[H&$'aXE )?)"D J1uf2 vp}8# w SEql꿪 *tttSzc(*ez>t!,C!)dqr |G߈Qim٭5UyR1Ÿiͼ`vw2 1%g97_8蟜{K)h2Ca$Gb|HMÐahfX>0 v0b͘t4mգ֋lqфŨjIs.u}9 (fl!Fe뿆)FdE`ײ|)w@߽y]?w1QoqLq SA3|кazhC檫 d\ƽ>Uab2w3[ __ĥ׵ $ ,K1d #0m@WbV]?w dH|P"I'.Y7㮼ݖ&v+y$ eIKO0AaA)LMabVçyٺo;Y|N7ۨQ$y uYDq$\@0 eVѸ" L%ro"JFc&ɂ=jނ'crkHcHa&%q+=AE(ƂG!b+B.xE ñ=±q(P; pzf5Y ԁO䔳UY8mqcH:A>g?t*M`>pP|uPRW{3ta+C86a)w>ᏣO@XRŇMWGߟ~|jsNda xQL 3Kd68  tfW.eG&J 7xtSFx CymKxrgCf\l YQ|T|UUo~\Ao5|fz9NJ{do>2#DevX23lU?E6VgWTkѺb)-}_b.рmP9 P= λpufc% f+36=rm_2v@"\qμQ;g#t1xnHFI \3N)&t[ 4q<گY[GݵX;# 1K4>l 9WA.yr!eJˀ YcS{,.X$q`H3.WЗ4oF2KBB1Oa ܸSFڽ畇t '[r߿^Ȫ=ؖ"XSZyH>HԔ1тFQ#2 tbn] w1_U]rmf \;/l3'U>x6_{slScF1ZleF͝QFGR :d4zKMٻg /=$f8^4ݥ.e\ Fg]?*v[KW~ETxs}q?Lt&I9vbW$fBdF˛՟8$3%fU8_:Swe-h=%vc]]lA` 5>;L|[pwtz RtQ(ƅ;[A Zl"Y{v֞YX sYp{ҏY 7zFo_>gSqF3mLT=/t'Mq|FY҄5{6Y8W=Jai{4Kʻ,h/0 Dr2KaQJ4e4FGRk/E.X;4 ;zzk&9K7/shvayҫ$,>ZoIQ# 3 ]J)zt@fed5 vuCnZpq/JT@;E/m\H\۸ahY7%Z}t;YDLCCxWf5-VwS>lͫcaÇC̅u x{b⑑be,fL )j$VnRbm}~0=&m%^&3>QxGm("nI$M'p kevt9Ar}̛W̨b's͆5}npPpSr~n:VUփbMgoUWcR]$80+q5`Ža]QiYI!t׶Ws5jƳIN̤Igq8iA7rRňK눌 S)4řE:%G+cq~yl")AnkeyW6VmEiiAA 7t z A|v-B]G@z@9v3|%gS4a c,[Te8L0˧A ɘ+39Pݤ:;#27?;j:Rjsfȵ1O Dc2oc$O45qs@[ۧoS&iE *}LKh;S8[csO*q y[{/VNc>\y0RP*5-ňWxSAKCA.yyڠbCtFơa~7'UB1_-eZՉхChtRhAB*Z(^QLlp y+ ~H.y٤xRjhI0;=z7=rt&}ClhvӪ 0u>|^xhSQ2T$8g$XrXNme8|} g2Q8RʸvόlS 2`YC@b ]tŒr^n{Jwdz/ڭ(tBI y]YoI+B?byDdF`b~Y`#XlQ۳<$H%ӀlXʊfk9$ F |O V8dDZj:撺QrZK лl1g_)>jq X}TE 'Q-ĉc[&Նc.i.IEG.]^.{a9A-9lq:aUDQQAU$rm ͎*gQhiQ Y'2L0e]Φ"J̞KlU vd9-x݆?:@F3VڣXYd]MIsITE"5dT(q"0|~)P[=?@Fmz{PLJ&?:V\^ie|{jZ;@|yn'] 
a'2dZYby9VI|X,("!ZzIYIǮ8FQ<O<1-xՓ=FZ~;Axm6F 68:AڦuXMdp9ڸ6Ѩ>67}Gӟ ɵa/~#v}߮ռo|ڮsǘ|ʕC3oyvsobil$ptÍ˵G?< W|&Hmg9"p3;6s03W,n6OdFo7|H-G0UUv/bcM (Jrdkluo9r]`qeF A! VIFkШvV5$_] d\HβI840r1zdc{4r6S*5|8?;_:x)i&jۚ|-_jzk 7NUX3ZUV*ZvY%7gߥl z/BU,xyk` )Rd-+je ⃮%cEΖKH6(1 I{<)+DUMIʕ~SMRC c ˬ JYSYkϲ͑m4ȭl"CXi%,pOc ӯed,X q>cŨ,ܢjBLL6$\*g|63_tY} ֡{݁{r:#ʽܫ돦ܫQ졗{ hL/˽e03uCgCG?WG/XXOiY0w0#C<l19RfB"ӵ &Z]FZ^%~kYnKgZ>Ik0~y˓Km_dO(|?Ԡ2QzW,›XUlߡޞZy7!8Xw4F.D=֨tG޾ț@;ooX&WNW?LrBi`8z pȊL}=ze:':\ا]oM d.@[WK׭$dXu)^erя;"jc-lC*bՇPA `6*jRcֱ9z!z)hTO~qs/K,=*l=7n"ʑ{rLdO*KpQ~7,DZ͍ Ň: ţ) հ20d"ǜxRfmA!:h Zn t&b/FSTlɐ+0i_D9vs;A|9;(6z#CA',ׅz_Vwsн|' _>?'ˏ'Kp`^|-k'ZnObZAȼӟ zPRY?K0c48ɁU 6ibCKVd;~|Q37SG@3sutY4kخ[`3<;82v')(eӳTtitKq@ <_dē 99;=Xp~\^V!-8UZ&>M^ON?|-P2`61f>ꏿVi;eg[-?!p|wPQ#J!=\8Zg=Q鱧~)Wǂ2gx6M1_0!81vd͙@ԸK9pҐWkGKv:?4GJ_O \˂v]DS$9>]winοvie;)+Chod>g1ikVoxEU^I8&}8I=_&TAGB@AUK! &bEGc5A`D(PV83CƢ%佢ՄrW} J9Pl4Z]hXjT,J )BrKu) aE=<Vc9ajAC _% GY3J0 F@~%T5F(UbbYS;#Vplkm ma:YTjI.rzE*TI,<jVÎKDr k6J5 WyH[1{u6ml5ew\B^&BW=)Wh v܄VȣV#:REJEJ/`j (l,%"emWu=v/[cbR9 K/@).U;R\^xL'Y-YMTNKڬxBiu] {u~jq2kyz:NVy?N$\RVY# )6X 8BYu StuU g%U.ҩV,u٣#WiNK /?z?z?z?z?9뱷8z+ފ8z+ފ8z+ފUފ-VGo@e{+ފXo[qtI[qVGoѓ-KVQi#hMTlpŦJŦe)) NA2AoN_ͶKnl0Nࢩ m&Cʨ-jkm#GEȗmg"73ܝ`G1Ė<-ZeEmy܃[d5N=Sd~`=8Lo!2l:~K2iCa6!RCF,dZrnjp["/eN3U.{eUt[t: @%B^*J՞,SaҐ+3.A.RՀَpz|L"/=lDx4_cޗ$S^~pN鲅EPHdjo ӤszI}Mj0]Z^TnM_}bk|l(zƇK]%YaMB yE'ϳ`(PRHZKQ #UېgHƁy ˞#- iU+PzR}ơgIaNY>4*ܢڭv{\Ȼvd IpCq)I%(q )0ZD*ZyA( $"{}5pS>-SdJsBXLCDt:XP/=3, 6y 2`|:`^ˬj5Aq114`._uL=*v+Korq[@7YְqM7 ͼ;Vnܵ)݆ێغfw4vz0򇊢ÊÁI3eB&G4Ch[ Ad`y*r: 29G<8blONvej.6zZ]ϲ)M@"=gKC b05BF2'7R 4LXlvJ~ZC96^ܡJIYR;TOVaf5K^` ,EKf1DцRS9 t0`"1FJ1iX@GMp9!@8uHˎ7af5q6.lUnr]㼧}/LL&ADZ!rCZU,/|.ў-&&I_PqFkOmsN &0f6E#̭ kNz= rP՟9 :R1Ȝ(w0s$̓ 2 $0k5(ِ'EF&&e GñSwtbŻ35o5iZ@L`&K)h@3f1d.;$ur+ %mfDH6-B/z+૦"ۂOV xDYN"zQI()SH*R9$% M{TB ޲F:DPV(cGB0J `][o)TYqr!Qd)jLvt^K 7[.M5?Z~ۄsrDX!($IdsE)`Ngs9U8n$%]xt6Z4S3:]:Ea4(It6K/uREg % tw"p'V< Uvr .NY_r+/@d[ l:RJV"lt\Z-IA3qP'Q$Xi# )lПdCQPvM,0 bU*(fR] m5Ȱ%EocI(>d$@ 
$SH`oJm^U2E|\f5O|z=+o5N앵ׅOÿﯾ5R!\oWYvZ_y=>$ћn>5<$@,#(b8Hy9LOqK@&N'8=?q8:ߛ<#tזZKҪ]>2#s:(x\i-|:(">j6 YֵZׯ/N<>jRܖFwQo<=mqzvF~.¨lB+=Z^vŕ텷g.f7LZObl]Id:ɧw~=XOO$=u#Ѻ@p2m`ŧ@/O=^[!z}VMݮz*V\btH}2G<*fctQ?o4ʙXӨ&Ǣdqy'"st_~~x{oF#9^ܽE^puntw?~C׮V߲kr|n7W|~o‡Åiq\@G?}};Iߍlq\mW~ʂӅ\me)|(-Bb̏ӎ5JRo!քX,Wv?>h2moN2j4m= Q$IXZ(A=j03JyLlp1nbҌ!Izنi*rʋRiˋDoIEJ 8sܖk3B ",u,NA,<{ҵ.[⪁4%"mKne0!?^q:K "l7q׾W7r˛W?'YG?tv~?{pR.g2}?qjuݣE㯻+?}O,_褽L9=M :b_۳(٣<{t=Z7"mS6d62.W:rClI7oqrQh+.Q^Fa-4ҳD2 : p-ҍīߊMGOkG5Y KH!b܁3s-".QƠ53sA<& &ʢdYK`Ikc!+6 r Ysw.w ЬlɃ}OUYuϡΪ9ufi;!S{L V>LŊx5y, I4'RPCI5n~whS>l~z{H5ej#>2RrPx&s6*(ate$ 87+ܘ8 Zx ̒* Unv2=et _‰,@>!Ls0 (w'Mg+S9gfRV!ȀA ޹R$o1XdL @C(֘1XHaĽDIV16Dn=P”TVXN`Y%n|p1WZq1&IJ$c6Q-,G @aBF40CႭ8D~O'P=w-֊qd0DӁpE`i qs!niuFҪ0kqŇieof%yh,Ӷ |SJ : }(Xxl <"Ȕ4:uTJh1daJ.CƤa@P]h ԭr.tvWں:1RWyR~qc0 c͚TRT1BH䨍e):DF pI%[]G.}Ӵ%=N\<'JNǬo阧(7>qOyx<4Iy '{1ҫF*mO^ t?zpF:30?j鵕,,"K@  ރ, -!TX6n2en+)6 ܆>C2Й -ltdS*Ǐpp|l( zKq>$Ͻ2鄸6tf+zЕcjKc}o=^hPHsƀZ)Ia̎t; .AznW9Fǹvs ENQGJ& d2y']\;ȵM.yjci+WSYqrD`ߐםO>Fs<0r6kwF64cӮ3<A |@J 8tc:a u)')A@fz}5UXŃMI4 YlQDT / ﮨ61׋/h07k/!'%ĸYd@tbm BF$Gi{j/;4Տ_m$!hçA>G `J㊸8xLmiݴ-{]~n~/;j1r_pzǖ0'ta1YEkƯzD]q6w}6RB"i6ߢ5 2J)oYXsS+\A^zrԣpŪKYQZvuO F&tFJlgrRVwzU335<@lsi29*Ǜ!7mB M^zWRC:x#iIb Rѥ)9U@O1Md,ZIk{YށHS7w]l˘6}T|Z`͝"ͷmUE1*`p0aWY)@ {_۰@ڞa*ekq<9nϏR$yT$SQ#'D&Ar Z5%#t^Qy޻,-ܣ,-܋,RL;-)"caDhILL^('2Ѱml.K8tsCSy~V&aҔ# سɗ>ɠF^d@h=S F6֚+}_4 =>CFȮ4N%X0JFrK}췭|ygH9DUQ܀Wt@2I)r&rWb<"փ|6MtM[;߯aǯ O뇾p쥣7*vH /(ஔsA}_ D; 9ܨ-+2۟I3FaBo+q|,VE&H}1eq*Eaeni$EOr'wdQvahhu}D ە BwVKyU:via|^%%O*4'ϴ)a;TR,Kp#]+3A*^Z=1 8!,kof`$pMD Q/ɉCOtcT*zcZl&tzݚYʑ{}( oXtɝjX'LA{^S^MɧW|Eq-J1G!d0x#@*Kgs)=}ЊgU{k5c%Xi.E̙H.E͵t+pTmX ͜q&f<ʼpNMqyut]ܲ:$}Aol@F˯d8v9¢(^HQ8&h6LCV2 kd-|=D5(,+m\8:DbRL)&Hp>fY *jPeOc=#HN$wr2> $472a$ 0T9AczJ)mvC -a]TD6{FqW%'j$IQ!3`6j,$ހHq^cJ9]ƃp |RJaRJRJiw&SPr ʢ,#`-jq3 &lfQWu|0Mh O!jW 0g1IO-cV#X΍Wi#9I`<] -\C(it ss-CCgvVf^$Or]_Ě^cvbfi&מOTD۸,U8*IZJ3 1SI$,Qr"'%[ryΎK:BIw i[ee+'3"D̐IAf9fҙcI#ЖqBD]" LDV 3dPk$sVVjlIg-]{ ZD<]t5hiH*'Í2\EaR%}"&K7giƥ {I_ |E҇H޻{ N,e!a<9`Z&"E/jUtI$(Rr&S ͋Ա`rQQC$2dC'ibbPh\80I\K@i5xf@%F@|%ୂI 
D64lFKNfdеIgz:?_z] EK|P`N|ېSGdh P("L#7,IcyRZt>P4ؓG=X c{U CK2uz3)kppҴCEͧ/HV(q(>䍂M,SLV6YDZt}|c yCri@{ִ۠1Js>|; @Y x^znnrPChrC 9S`b @l"X+4q&ƒtȲ+kB.%.#%ƁhxITITtHa&1Bm8[OY>u_w;3:|f0H\tɠg7*݁IF+R3[%MXn6$ MLJُ{ uQtAJ6VBE2ZP!e("9"D:XCm"!iY hb&qHz!/䫠"+^lFjQ1I͈z|_hڊk6O4레|^x # ^䭠3,'LOHu YH-*z[D pXj4NR`&Ƞ8u^RWzY& ŪQpurB_5_L/wT.&U |(z2̍#4۳ʨ1H#>7QY$O{d}MJa8H@1L|K&ѩRrd.dJ (l35KrX. of0=doy^s49ן{KS8ڡ<}\#ŧ>O/hRDRp<&GaQb&6ͪavs⬔d|+Fg`> <|`6ck/텟ng/NV 4K`.ܷ8gvoWDכio430ֱf+z|eD2Zw.,hc_vb9/ǣܮ9N= ~CvͲ U'%5-h,}zy74̃6\]5I9k&Մ&gaxuF'bG>/??3O>':qD;0M4Y^vKK-S7|uuGX!py{ƴ0,ܖ4O$6.SN&i"pv=&2N:-T]} ",ŧ∙,H|:2 ᱝ&s=ߍV?+Mڧfό|L3k~>>;+۟CFSC-E)63|LyyYY+R90C2,Bv}}+QWTe=Rf !U6 iӠr1$ՌɪNȱbsy>rl >}`q KQ\YfQpΗI|f9edx˹P]Ekmoiw Rw/`Z˫o$Z< hg`< \ų@ TY (g,Xf4rXᱫ:Tt0GfKapž[ּ I\0>(IKNgp}9 t=at~PIA2r{+Y˸YyڣZ:˙WѬn41%&vd[{n[l[xrIBNi+хQ3<=-dK[ޅX-=4/O;z(>_+}$^MO7{I7ܐܯC`RfG*`PDJRhQ{׶֎d ЌYsԧh1`wّ'9)Ȳn)Kp-67\X6(AAH;Ȳw՚h0KvSEV6q-cs3[DW6<~ Y-A]gw<+<;fw9Ư.X)^)6 c Ӛ!b٩K6qtcghm1UݴHqM)rK!>ӭٮz;HSkRoԣtcq{Edm6s'VlR^׳\ k:hvb1'漳kx:LxD]/dz4*-,Ո褣}׬khܼLo7}p(p 65ͤw*6&aKUhِ":"%(R.јM:Mh3)ed*x|ȅ8 *0GsO華A$BdD) E)^^!-Zj[ն+mqKR,*, I lCIVl, ˠ9qtsZa-0p0Ǐͯ6aÍΫL.sby1Ĕ|E!:/3&*N tm3| i&k-E|דvG(v*Ev~$9'Z#fKJx͐e?0|"fULicYUE&/ajiv =z@{ҞS>K"xҖb| A+BHH*:AR=ce.&Ʀ`UG6ٵ#8nLښm%e}U|}__Zo$px6Ѣtb={LD/BI$Xe!-Lt.o*8:|$\ ${p3s{JxtssW֝{pl}k[۹k8ńYnYYz+<#K2X[u6dgsYbI˿nnQ~>oyƪַ7Tw6w󄶷G9k|KóPRzr͖9WpZ{>u?j|PtvD[%n %3CF> I{\9ƛ^uxLrg&#W.ds!-f N 1[hlGu"` R"*$=, /ǀ:eKRA)VG-\=i^|fnM^ZOfBKFtkHԝ;Dv:`zO9v|׃oofѫcQW~BԺZrI3ͷQZ(wb%/@+K%G!l.)RTGJ*PBIZ‘2]$׮& 71I턶A֔' .Jd[1[%{Uc_ifΎ{x M\QX&S+ "h+ T1:Ƈ[<&| ]P947 9bWBH˜ΑB1s'-!/ޒ`Cz{d`Z}|,PXNؤ͂'-$%`HB ڄIn>vB>6T :;puh6z88@F3 eș3Wf>څc_5YVO>BxFf!½_={'&!^ҋ7.vZI/Zp.WosPo;-"x F5$( !@Phφ,sR8`3]|J: G *}χmOn dIx15_|Ë;"4gH~xm,H]f*^syx}FK8'^-ۉW?<۟ݾ]`t(҇6mBHbۉᑈ~5" p?bS<ʜJgWj.RP9!TE=+B]G]o u=Xg<9`ĒwN9hkp"(*#t)U)坕%wk=qofw=1c 1^g u"aPXPρTBr4Q5__}QD}>N`^̛j\K TdɃlypMQsMs23V VR-"+\0|o56k…7.ߵ 5?770 zVV^f:lco:O1 Z?̆_cwZgiĹN+S9,[C9a4K(;'8P;osrIIƌ6kRd_D3M6o 5T] D [sX !I0uz_QJ'DιTD\:ٞ낶8CR?&l('BiW/^4әhXOYSqZ:uF)KdlRʜKR֘SOYV:)K0eIGRf)Z"J%FҀ3l5Y(<3ob94zmMD i|g 
'={fK'$=99)YBҚ3#1@z \t}پelח]_v}ٮ/elח]_X\r)]_v}ٮ/eXחSl׾]_v}k]_˾p㊟Q >\aF5pkTk8bjXkHe+x\ݏ[$Bdɢҿ_R6>??Y:9Bb*'[|J[" fD +`,4 ViRbrcVNxOk Gœ(AAHҜ89}of=8v|S8ǔEtE4a?Dx]]!0g(9XR& VEI'Zj1%'lж`#=suyBz#z?q7?pf9>W(}>NeCKy=}G[ǎ Ϗݮ/[}i`<3~ R T݄8T%96f,b@4]tొ9iV\5A %M(̳7fimmRv4>x$r7:ԯw3 PApZY(Ԟ|]:juIjܫ`)nBCNQ]kD*X!tN:GkKkx5U;jvRAĵjUɣ bj{x“b\-M79)Zs-q4&&'./79JnF gج+gb*mR\PwԞfPh.]oGWH~ȇ,e4PQIQRK"l$=GẐ6{]&TĖ,mmʒ =dELDPl9Lk9W3 ڒ9{tߩ,,t,|VYmvYt؅ﳽjg/hh4}4lg߸Ķ d] CI{+Zm{!d 4Xw6IխrdeZb$ AHQĦ&j.L. pN爙EckW9E;Li͸<];)אڋRMk5DUV@+)m ,jFMP AoJ,>;vI1vcrƴ26\&" Ԩhc$Ip%ƶH#9v{PB<6ߋ~DԽ6Qv`p>b+5'ËY_b NyHdt# xhXAGJOI7Q/n^:D`Kxp9h}K3 \֒PzB鷥!n[sMKy1_ڑzu1J&7^y0euZP$w6lgӫy/L"0D$OMKY{}9О9.A`C(&l!®r(< ^.s4B4c*.q[k yH\EE3B]Hm YI9%&-seU9{d7)gb kݕO$~ymv#mަ}ڷy;7 &põ/fm:qFg&n4FI7xy-p*M'ᾼqԌ&gge5 >6$^u0fҢɢ< !A[~-z=q&X|DIk602`܆>d3N mb:2}fZuADK Ö8u] d;.mVXۿ7^z,Gb3B>C̚,q q :3K<|`d >J nW)J=pE ܔ mwgZ)iIs2i`v[hoS"Y cR/[3,nN=0{`0oY=BՃAq1qonL[XzݫƶSzЖZ[m ?Jm W0-胦->^%&.vd$ܾSI]馩m_+j,#-fw=yt6 E9e^Z\u?{ߓH]%V(_h/n`ީ*mօI}=VZrmEn^;P)-O'|o]]: 3eB&G4CѶzC 0I 0k5/INDL *#=5{ |W_ڦ9j!c6xn2ͤk\*+Yڹ:i2HףGhbSEANZ,XAe-n-'|tRJ"1冡4 >`Zj)eK=̿ +=J &pL2 =W6̝ fA =#_uz˶tP%PJI0KtR ?CTtpn *0Dݽ<ǎJ\]|uW@$[ C6ApJUm%҃6`I0R3c8R[^@R":I٠?$b"`A~B)X `pVɲ@ G$lIb&X`>/CJ I%C+%Jo Jwl7O >~*^«텏__{;(9Jb)-~V{եBh7U+ޢ2̻/yplj',3HBi(ͽ@o[dqշʼn'ܒ!Pt?%'h)'ms]3Opgf聸*GaȜDf,n&W2+r0~g x޹VkO~z'\NtWt:)u'[ W 0I*ry;SDɸXMm}✶~VDwcNˉ\ucX֬y4<;tk$'.'[-c[FlH6y0b0ZofY~&&O:D+XVjٮ9uTvQUӰQےtK_&Xn}`El>̳5睛r&vTO:pS2]pyt0@R6ð>2b;.׻-kړi"`y~.4lqqܦʭ?9u"ĊAL#+o (gW:i&VV?olƍ$aɵVу-\{RԢ̜*ٴ%3%ǸI3n{#饽 KwY(NWPˠ( Fգ͵ GSZ?NauF?&F'Nh[UG=㝘FJso'#\v<5V2Δk'}pgbv-|(X\5 ;'ʠ5j\ƺo*>r_w2U7!7k B( Ls-^}ПIt4l#R%="4(?{Ƕb^1V Hg4Z:^=y]f'$Ytbw7 |# )>ZT#5r7PMRhO;1c.x,!RAzN s6*(aE#.y 4%pqҸg9!A30 ZD&+Es[tyq1 ՁF7~l)aRK#)1sTe,9?BrWr8HB3W-7$ S#sR}@C(A1#b(#O>`ޠ>̵鈌Ȱ!r},%XqpKaĬ&{JT旤Ѹ:H1|6aNd#*$X$na9| 5ND!RA#J1v@3ӄӍthaݹWlͧ(cz+3۴O,LZ3Y)b Y=2cR^d0R]5+!2˔dtrXe"f^=KJ8_-/XPuJ S!dA̧CVɆʀhL10{VB{Kb6%/׫M gzH_!ef-)3#2Ÿ2G1E"ev}"Eeؖ*fFeEddۍΝ(o޿ҿQ{7HHܿ9 mB"V@qM>vSV4nj]ت4I8azi:PvBCKxЊ=Qăh+)$|vb| R@R&rFTT+0n+*Tr*FgZ H3z@eTިY$\`;ƏgT<԰!Hpɖ|iAn ߵB;M 
=kK#7 )d?Gd 3,<7I{%Yw1n=. y)A N_"X\RP*BUqHg;qW ɬlZ]{Mݾsh=-~EYn,y,g?sy/3" |3v=sy?*̥ǙK՟|7bM; ]GsV~f`ոԸS\To] vŞ?@H;]Ж`5yt(0rMBQXr3e|)d S7ھ Tp%\{HRkJe(B:M=]s|Hα{h-!Ms<=C{vSek'[۰yL6؝٪՟_C/COfceW}Jt>PKۜmM JuI >߽}x^tǠ{vt;w6|gݶFy ge5Vy1gއm}_1"r#Qq}zKF5u I"\K?ՇDqև'(4B>YjtЍswރL2[1Jn0AG2#:e>{kNw0ɚr✔$d"YT&xprT|JIFM}`*|ZQ)*HF'|>`UF`pۼȭ޴^ca<vq`P*BNl2I*`b(x9 DI-+PH>ʔ) ɁCg RqϬ.ߋ,[ի4\ӯjH;Q'>X;io?~j~]T?Qy=%G,swOUyVбyA6Qw6>[rdX7sw EP JN)eI6KQ zIPd].1KКL]MA]5b4ϺHI4!DH.4ѠFɂT3ׇ~!()gY,9gD/bԘLr"X}jV4h zg&gsS!}%fwj-O=qJ51tw θ?;cfG(`Pf(|bލZgl-k"1ؠPXciX( S_J2F@s(M9Z+\:9EޟajroVzDm`ޕEZJ+!xg- 1G.yC]˷}pi=ہd} v> h϶VtQ¶ȘED{:ՌْsQs8ƹ\@*V=%m.LP]),r4 +5cg֌Qʳ8㡺w } T^9[\:/Yoz9M~CFp:5CmdJis&8*yOR0)y2t͎uP!+1*PUm`4vE'$*"YuvlWq2khθZ{ZZ{1^iۅ*ؐye&dY d(R.VA,dC"Fu+CR2v]ƼBɼX ٢*Ub!2mRh!XXB6lKZ/iO)i/Zž@(:l "6^Xm9+}nV*Rt_k1Wd#AJMtkV2[Gag;t;:;S jAh-)"' J d7FlBF+-D: _D6=6g-MK4Gr ᕲKiƈXdwg챁:Hšc/BTBa_׾ZOb|/.?f-`f ˻Fq5>b,%`V JrH"> R,wIv'('@&YfB)b;X-Q(,p,wA` f; J@_ʑ^IIP∬3&c9댜=嬕=$1{`Q'1*_.T OXI$K$VEbJWY GC.E E~fw;UKـhCʚ mHPQ*X!)0@,julI>"æJy;V3?ZCZI8g$`LeKJULT+NǔfgHO<0l=E"bGAx{6Ӓ mAT/AG'f7cAua.Ή|'+ya‡ G<ws" DMe* %VX/S.Dɀ`a/ѝxAC*cbbZ k 6|K,n~ag@,dgJ6 *+/W%WK)UU yU*;B^ qPv_yZH;ʗYN6)K۠qư29 |{|lUHZ㮶+s{eVzM JFYRNTBj9FԶʵKx kH+%vL01vJ[+TJd@ ]pj2&1#[u0+r͵Co6hxk6b`{)F!@) 6E$PT HwU^2ǐ'_uIi)uC.gNM09)kfGAIȴٶ8:yg,XȀ`xW[bmh昝OF^X > E!YJ$tҙJ[\D/U U8#Xs2M{ R[0ffhfT"DH,Eئ8] hnA[5[!U-MH\- [6JPZR0eXW EĔ!}Qʫӳ/?lUx݇9c^OxƐ rW5Zt\y570&?`~w׃AJ0JA< n85T?A,<G{k$0i]MgOhŵOgd*G&aM.+r%0[4yW9m6): }n7?,J}m/Lիٵ2ԚEtdz(uO֣<2TG;l'NjwW0jހξ\{݃uGۆ>K{ᇋ61NEKic8.ڂZ۵ `<vQkfh= aτN7M6Oci֙ mOuG˅^9<ζ=*}Cnuv5봞ԉ5_'}1 8Gr0Bm0Oc>p<.yüG•Y4]V~qq8M2cݨWV=a\U] /dDK_oRB:ͭ qm݈f|wSLw𬓭u4q:/{NrFrP '4Z*z֡P8XÈWWaɞK6 XX%")5!%3c܄!饭 K7(/́WsPyQmyXj%83fng!:exSRgP ѻV=tqFwWq$)d8|UUM6օ޹< ^L(/Ӎ6Q4,Wԁtc甒W/UJ^t)yb5o)^"Kt%@$x )h9, b j n e:Oevʟe֒H`k! 
DCYT&Gզ09ZBC*%GMK;ᜩ;jӷާG92@7+N}:,|k |GzRY9'9v ŁNQ"d?U]<PnC%_×m En{];$f䫯Wc`(2&̏(2LWc.502K7-AhnϿ͛ǡo;ЩDF[OUf+E-/d:cݐOu.po}8k?}*%Аh~/II]^#Ne A2|\[Ej,dȷA #U))t%#"y t`\ {Q[nn7X-6$\|Hii;ʭ/Ʀrtᕤ"`Vdو7 7uQYQ0tƀP%:(/% r%S Q7EYΖipkǥ۠6AZaӦ`CͿmyfYq9\v{Ddz\;Y݂+k bg7n/UL[cW STCzJMgaY'L! FK˴MjrCrk&Qmm(ïa IE$U-4_iEiX1^~wj6on=r\\ξ۬3RJMe[=o\-ǁjmbۮ{.UZQe5Ⱦe_-uh}Vl$3q2F'֠N)JH mezE/1(咘,D2Gbg*0GKUURFQ]%=/Yo2ߦ}FVޣ)5NUnaE)>)w*"N&iR1E'[ 6epV!ri_ȭGgSRNe$)QsT2AQF$IVTNeA%1E)ɘ* s"GԄerΊm(y׽$EDK $QDeJ=KYoO^2 ӡTT+\񜴖~xhEҤ*3HN^t˖64N¬\z N!M? *cPwRb;Xe*`Nj'- DR ZL܌K^_]VY8$,hgZI[^t;U"OVMG+Fr[˂1IEQD&ܰzL4h2t#<ㆌ5z0訵CE_Ei~#jtMn*R(n'KuMN#JC[&ѤU$94e= m%1 LCݭ`8_аFڨ)˛iFXISZўr`OQS=x~ڳ%!OJAIL6N"bPl( M@ ޲jc-rR.x@5ߞўUasNbB OBzШD'ʢt CgR #JBuAξ@]--:at*QTX.I5BDA݃䞊nC:9ilww l%$$Vq KVlQ:.> fhF2RlzRQ@R"8lD"`a,# &SBVA0#iH-*z[DKpAiIBy`HTM3FvjRgł>|.VW %]Msc!kλX4~ӛj[M*v]e9b-aM&~M~N^ZWAB D L~$-F,y-c8`'N|ۦ/xv}^r\j]Cc }bN26O7Ӎ״nWXvP_s?ܬ_}Lqi]*L?ɅHע{m\~IwnI?jRܖ'Qhe 6yﯤ'NI}ˏ4<3'FLu[|KO_N_%f'cf2_<>.ڮ&_zOfq-#i}$6e0b0ZofuFi~'ZEϮ ގ92p*QmԭϪi؛Qt)+>> mIS~'狨qSĎN5_t6/>)m?8?sOO? 
/?U NN={ {z >ahWkho94X|~3[ƽ:;a?fm(Eu՚ʱ\Nд _Bv.Yy}Vn-/!V|(1L$&VԴx$a=bɵVѡ'd9r퉨H9Uid"gk΀qf܎6,eo4T^hr[^d򖶴ZNY9A +6f&?^TTB=+ssk],7u-!XuKY*⪤kFB\i/hFX (+P?GZjO_w&_mM%:H <4w }LI 1@^7| kΧȲRH ۹b2kI$[ڐ \;!kyv>xz*gV gYu+YW٬GJǂa:bR^^Ow|ҟ+I]dhJ64:Ew㗾z~0}a}hH4|`v$$.Nd'~ (V !B,maT))tiHB62>3W5V5]7$\|Hii;ʭ/Ʀr4вC̊,q3f,F=>p׺Pu{,cl 9vWWQo^ljj҇&$Z6 RòOd%m[CL6i($LࣤۄQ`_zHj۫%nq][iҊzzŽcF:Yipl>6{}!Yg <0&- Էzx{߸Z<ochĶ]E=-q?`œN]$j}˾[<4ݭxFŢpd8`}vpV"g`&,F[&$2x EJpΓ~0ycBtRҜj{cϤ9n5 "mzݠ ;(hP^A Ϛ=;nyd;Yp 0Q4V92i}tPXZiúJ?\9.Q:auO\@);ЧR3@$J{Wª蠱η[cny][oǒ+>b}S,pq讪tP(I߷dJ/"iOEpXU}u1r}ó6*#TRB1>\ф¡*0d A% zT.ʻDo߂ Y>͜ `Xkc\=(}ߎP&dNlވan\KVukh~U\kخv~u/(w{+7χFhyÜ[Gx]>SoH{鮑 eZ?w0I_/Ü0G RiG2A*"nRTT=HI j RTr"ʊp!|ٗDXфGeçtg7r]ł (9;i九>^6}?<]S?ϞV{0?ɉVG}쫾݄Zኋ%=aeq)TfQ!U5Xk9!eC˕G}8щtӅUkDY#ڊ6Ot=]a2>1е؎H0H+)g]Neۻ=4,\|Ct"4]2ؕ!P@Jq 8B]_O\}VfaI.EF됲 GW0&Ͷ$bZj)BR;R ҈`g7nk<ᩇ{_lTߴ=5býö vXGw(E"*޳}>rdVwXy )g俳~ogח7KI`^~ Z>ϔ3oU)`˧ʲqB$Mt:5OD\ҩ4iM8p&Mc8wN*ݿ#Mgof+=l0|УI ƁF!TL^2=e#NG(;-Ev;&% XCd*dli6A&5.[]DAayk9sX2ڔTuR}F)%ȏl]Ƴ8۳B,1娅W6i6fC[r_0B:}+z$W2 8'B_|̩3YRmb6Od70gY0)4fD{w,0-O|r.Lm.5;--_%FK?i=(ɻ .ܻ6dU'@p̃ 8XmF4v8l7\h&5ppErNt=F^;ƉuI+sh{ :8ۙCTߖn=͹CFo}uq{d>oo޽e[vGtd(ok=Mzv[IҮnӤ츭Wo\.i19d^g1>aa{aDKnt @0i05T0n W-Ks/WfZնw}0  ؕ~kt}1<19\ Y>Z C+N'Q)ԛjv14gR*t{AvIU-QZW#Z[[H lLR}iB2JU$p5Zq6c/J*booZ/q(;Qd Q_V[<^ )& +Ҹ<2C`K(5s@P.%Y[l]vW];]UqǮPj)% ZmU2SUTkor#2vge܌Rb ͌ޫ3~mh8ciٟ3\<> <\~\nq!P] e9Z3t6R2bJ\h+04([Mz((D S1U!\;F8-v <mv"G;;[~oPDZmCCFr.ɑN%+$k؂5L>T5Q*`RB!IJ1*Q.37M&g-rr#.M +m\<)͋Hvk\ NzTj%T* U\YURU.ZGUEBem" AB1bcOl\ ] +%EYWk2jh6N޹sQBTVP%YaңX%E[Qؔ/_r`5mJĈuLH ,IEC͹&ӶKr[踤|^'*:vL:U&WrhC0>+B-"Xrg*xL*o¨cGc׌-mD~еzwBͻVۮ w/{Zm{)o=:|CwXXe8ۡX '7i;mӆWMBBy0ԄX =p2ljkasprWPG"\ij{ Њ́Ƌ0CК*,c{&^_Yvؘ~Z24ܖH1tH3뾺׎:W,_ȵxOGJRZ3Ч b4P8T,1rQA3AEy鎳|9YSl-f4-2|;>Od<5b_: >8?%O sB2 |*Ca(pбZin̬ۈ)tY'հ ED VZwA%je5J4*5(^%U_JWhKރFЙju,ĹZ ɂUM9RURgM% $? 
GHY^]_|Z cv{[c#t?io׬W>> [؀p`zH%ƚdnK6U r6X vĒ礵O)r0ǜja]jD+g1͢TUٮo@[rU]uwdkٔ8X$#<:;g4=A;_?`3hPq,br#gģ`8|` jU少ֺlу1MFFI>sX2ڔTuੂphS)%ȏ >([{>tg;VgkB pmX -wg:}g7O֋M11+ڽ"_.4Kpєy դvXP !hd $Hk}^ғ/Oi^Y݈ExtZ VuۊQkTb[Tbp6#R""YB=%Wt 6WƯrޓ6[+z 3rJ'=@F^R(p95%K%W̿bY۔%oU֥C޳,' E~mvK%@I FZ}؛и"OE9~sUYO/ѷF+{*!=fW3`r<-"}.$FDlJuJr8@"I=9Q ˬRRX7Χܔ;Sӣ~ =IDZ|tժ괬NڪemFNZQSmiH(}Bu `+E,<Zsz uua]DYT{3 iF!!Yƪ#(\/FyČ` r D暠kPͣmg$PFc:yΖN4H,H9Q2Z[  .AB霹,ǸV9kdNiX4|q?U8}Ʈ_ۄ!}k,ϛϏ۲%,Ymֽzde.&5) mjZ t?4N)pLg9cRw:qAAѲJ9J857ʐ~#ݑά[{Qj_oq=/k&}oW0AFViGюIkxKWܡsu-8.{ԷK$N*wEgzhe^[QW){kNy=nl ϽVZ~WSI/}iJǻwdqVzR-ɬqi`0TBQӍV6Yha~Yׅ"&AKUz-]A:_O^>D; ^ړ[AF.\LQFAkSZJkU.tL Z2-mx3&̼l77d|/OrÁrvKM1tS-S$wOno'K}<,rÜ ?66!*.gTeHL! Y YhvC!Y@t).㖧2u .q[m yH\EE3B]Hj Y9%&-skZ쐫??@o\ZCG͵(~i5:\1|KMkT*0=JLH_-E fH׾#[n zǙMPѬn4FIuSx1+N/T݌dʯ>0Ėn]Kf>,_!s3LaH7.?zi(j A&A€ʀYr,qiiAML}fZMuҡ8sÚhـv>nf~N*W[SN ֎`xYG7%pCz)OX8x%V^nW)J=pE )#U .E[:&=,^ndtk LҢ)İ"zذ5~mS..['(#̎0G+uGkŽcGP0A]W SD6%Rk@X@V5x\٠SЃf\';&.8St;(:b i҉ߠVف#gc{D3q}K샏ˊc`,iBZ ̕Arzݟ풓7 v[#p8#0f%֯镮-#/32+mOt w߯ٿO6mɵ y&׭v]:.I8"&o^aFӬhRĆI5^__:?闟_?g.Dt$ւ^ _Wg,j-M͗vXZȚ.[^c^ƴl?.DnK@R~O8@w+\8Ki+F,WA${ijl?BBJ- S/zʤvu>M˯q7G*޹$P<*2zУkOZS%=9[pF71img$a0ft*r%=ӖJ@!s[(cBbSdݥNK?HKvN>Vie%j _:LWO,LmR۫@Ա"(;eb"9'î Ҟ *j8vvUбwȮ,|pJtUWSaWZ8zvU\X;v~ؕ .Y-RO%g>>.Ỹ0=>(IKџ}h24wTECC93 tPV0^빝Ӡ{c?La}܈~H09#zE?{XC ۼ/`v l>dt{ݒ{zز[wSl7:*~Ev~8ҳXjx;\x!4˙azdt7{?ܗÇD?Y(. 
N.j]6O뻻b͗-FcWV e$#Cs/4xKoiKl S#*ޘ^ K{p{+G/Ec_Qq_MtpU=rURcrZ|셋Ͽ[`ӴfD" GͼH@-.~P_ C&Cp.)) rMNDRej?t^"젒ow0uLB9|9Lgcc̆nĘ߭Wr˿//?KKR2u.e X2`޾ˀu2`Ϻ X2`]ˀu.eze X2`]{[2`]ˀu.eIG*u.e X2`]ktLe X]ˀu.e X2`]ˀu.e X2`]ˀu.e X_:AFѼ~iPćA3?(R9 S*hBäs$|x\_/%\:HK}reYuF;lٱ$duZ;,Tpu?zγlU?⅃ePM%27"$!()"`m ^.ٯҞy:XW3dSbIN;GY`W喫,flGO|u[xl\Fds\s՗qOzGXf_oo\Eq{ML˒d|X}d|E->:_4-m^H'5.vZ_{eײ4r޴,Ų7oo-P`SQDI%JJJe lR$%I:wy 3k˻u}u\p}ڍn\nզUZlLuwl53ٖQ//{ t$Ư.OX)^)62GcӚ!bکS6R[ҰF6m[ĸFƵLfң9E'K=ҭy-TCx>5k6ң#tRW/͆-|xwws[=g$^W54o}xߗZlXiwskx6Lxz:= h1{PiiF@'}=Cg֨/)5;sF ' F߁_7YBApmI+*OV+k8&II1cċdڞ?Z99&P.94, ͙_ 6ci͇d PЏdiN).Q!{XOc ^y,'dgdNg*pp>bA˒5v:ni8ߢ7'%L$TDIXAQ >U<T2)9[tM(9N*[G-*U9ׂJwh)J63g;]pw%,U96fB"".\몰ewZL2<^[xPN)a9W ԧ|NsC%q.{UHںPL!E2Eu Ȱ_ʃ\Ÿ޻yt1娫HXFɩ G $e ",7v;#EG~'w uOz1/h9P {4NGr/b`T9n JzVl-9ؽK{dIfSRd3,mE )j/3z+V9rnnGdV>}p~Z_Ko͟myFQ@G]!:I$Sҧ"Rɡ2PD)p+p 6q9z{ [ґL5 M^Z]tPt=͉ܘgO1'G>Y4ӸA=Rdhc/ I2ƕg8)~\ :#NdK/T"ǨJl r.KT LA1ǣ}؜9 G8m<7#T+v["_C"O>͙v9 ̗//Z4J$9ڤlRΙ\1Q26FBR%O U" & xPwsH"dG䅵Rj2}Vl8 vmrݿj}ki;gkrif!2Kc3m$[E_&LADAIJeMQ,lgg2ǃzXG,2GN]IlV'^Ƭ|0Ad[KU!VQSiS).8n"P$!3uS/-*:i;c86=59^Âs?3gvsxjkv p^d%[˿׬y>n|ҷw5/+t+#76ֱ;_vZ7X(_+L`٬uCΠhll5[v+?4nR&믷xw;|0K[9J[.<]%8l7x/ZJIs/=w}RsW$Y v_s ~-Z%]Z /ߡeIӕvG+jXfkߗӧD7l~jFkdt7-d1ՂB`Y&@j`:sSTlۥ,4ӥ!r*3kEL8'q5^d)j b4E (;Qac>7y:fGcBUcO?>dE8?BV|/}JËIiWc2}x8W>4XgÛjbx<#TH4o/ɡ4("LPKAm]FUnJ{x 8 T'T/҄Q`8S̙R*0hY=\s$ Z9?zaU((k /蕁ɶ#!-iƣ!AZ/s:d*ޘ9^$ 3 ,,K- 6q`z^dI]")zmq\%cIS霅hʱc6X`w}x-}X֘Yg?x7eh ߡމ6)p|9} oP:+80SВK<+I`N͈'+2ePtsv{pgq! d,)%hKЃr+* E3~+1~(Si-RړB"Z-ba2dS#IN"tQ!UdUnM̜Jf"9~u Ǭ[4ګv[TUeXϐCALQo LT'1&-j:ąwpBrNV g'=-! 
b)Fmrʔ)j(Da!h C`LDNil-6MOlv4 rmbMY]mZW9ڣQzօGl#['ǎNVs٬Ӄ5ؒ9T$$9 MM3/!0BPܽS*:&gƫ!{'ۤ>ka%kz [nö{O>8'Wtzx=ˁ%I ʀFC*c3$~A먽%E%z)M[`aSbNߵ% JXt@c;j2gFOwuUu,/2NU2[~!>Ma֔9{R;y^0>#@'Ku 27Wj6e""󠧓i+zn1VZjY YBOɪis1(]R4(/KjFrXld q,  M~kk4fo5>"Ο5`~06Oxv]I-BxbrjI(Tm`yV_cn[7T 5QrTMe,m.w%`4Rd]뺑VlvsFǁQ{YEԞ_G`׌x8L&&5) HV䔃 %_K@FkmH /H]`|N618ЏjgԒ>D=HTK1b94kzWɛ6>$,C.ɋdMB@a&|HY%ɩFkFu]`X?DD$ZO:ɹO)J SDFYc i#Xz{ g PHQLem8$#E.r&D@#yBIXV٠ҫ$λ%EQY.N.vrqKIagLŨ1CjveQ \N.>\yX;PYvDX.bs_ץC>U:=2DlʼVR]ڢv;Q 2/̻Yĕ=BZ⪐+sWZw]\* Wq3W|6F\rճWZ#w]\* WƀY>cꅒAy8<˫^6A+9S~tq<*R"(8ڿkSg{ˌ[d_lF?.UoY>5wY9')%k| Qq@E̾;kh}-d҆ (&l)C4]P< DLB"rӌU^m6p!q]<Pɻ*Ȭ$Ȉb4ڻ=٠T.U&' 繺|y}}T[O+l+\6P{;D+%0պFL4[/mQp2 =Q2 tT&%A8kk)*Ef3ϛ2l\2hŮ }cum2h4<>&ܗ_}}bk|lH4|havY6EQ B q1$ϳO;QGH i-F)TYbnCB!R̅KrzVz5|s+rgq?ǭ$~?/踺caV:$O n %YNuLI,T?K+'WJR BI&p%1EK;\}gERԸ1anN& nE-6 )l558_A$eJ=aK 1Ԫfjq'u~@M8y;-14N#Vslʘ iRd$bPNpT;yW'ܼ㕻5q |֤9j1 ,dkR C\zlM=Ucm'+O ;͊K-7w`T1MAdj\%A{'e) lCh+5\z*:k #D' ;Aep8޽&쯼dIBl.0. Hέ*Zb KOYHƎqcDt\A%0 RU6`ptUAhHuU/`K3"Qz}9*-IH: -Le5Q&'Jm~.^B3ś/:>r+NzYYڀV=(|Pb|KA~_9ޑ$z7Q P{{?i&?ȃaFKS-9% Cc;e{x'bojRܖ {qQ47 6X<uFz/ͨ)4{Wi67^Bkkmz]{he&1IkĜ/A>:-˵] /gE>%[,1^KV̤Ꙥ}Uӈhref@M8Vb|ǗsOu`嬂uf]]:.7q2"q8gc s}{q㦜57ެ?^Hw?!= i? L0R[I7 ݋A՚O Vn0O=[M5yż7RǴl?.Tn _ξX M\̷r5-lNWlɴU܋[Tx*IjEx!GVK&#<_Cف~rt՚*?lR}$a=ȵVσ-\{"`fNSYV25Jg%ǸI3n;鱃 sM>"(my-HZ 'xr }FbSdݞN=J{H6;;>ECd[̪.miv4(z~tRJLB\QdFX ,4ѰL?,H5e%=i2*K[m3ѽRwd\pgdYu.чQsIL\"DaL/(yL,+vZ-X%om"C.7 r~ȹ8.v̇-W>9+9Tз蹮[v%ؙ>OdX.${fvWaVy΢kIȐDc &O.j[DP=0nqM=@9?KsL)W%;.x,/<9UMȲK^`L{Oƕ8 Zx JDL.+tȹۤGOxCp" 4ɲOAr ݉Sb稀eTlΙ|鐲;HB;W%-7ˀsr%Y,fD !q[,tV16Dne;dx SJnIr孰&{JT-Ǥrk\jKǘg> 0'U2DL,%0q" DLal{=|FsZl Otds Ro9P!YI@˴m$zhߔcCNnO>{X{{{ E&+̙)b Yb=ː1)/FdEWJn vΪ4;#8+/ZƦG^>y}ͱ۽t-xURT1BH>&)w}6CnClUj"/yz Z0vu{?Fozٕ_> /j~w^LFTMUٴP/a8^7.gő3;_s/drLػ/g̎.V4/N_=Խb~ 'ӡVξ.8{&`xny0ԉG32/ɓ4Iػݶv$+y /Swc0@c0t? L|Ȏ$'=??-ɖ/X,9mm.rŵŪQ5E$NxHVu^$"й_ܸurC tw/ۣ0GLZ])9)Ja FkA3W@!. ԩAgC-^@L+jT쟫! 
hWN'%#PL;v:'~ܟ3zu}uaf\ļz^}Drq^fĿ&7ϟ_oGg,x-bBpkYl4Wb0 !-}70*[s)^Ib -bEPZ^k$Te>Y?<o8^1niP[ ?~?"_J4% dEHP-xdb-a{P7I0G6D&oK@d4U˪zIY2wY 07wRR)-[!K;V&Sw@QJjRe*6mv-sg͜7 ݼs]<3TwgX_R<|&ŃVoPeEwUܠ_5I#w >U_an#7?7#<:O8z0-d[[@7W% sU-}$ 0hHvd/>4Ǡhٟ?Lqfgo>}b1~=g}eEKn0l, UG{nJ!r Q"URAv#9ɧTt!MRs/2Ȯ ?;Ì =ٷezv?!뻳͝<9݇W?7^L/MDm?}{=`t~sk̟^m{ ۤ/cj|_2zOq)!ձ o!՗<Ӡ=)?n7iCFwgQ0LEVi`qʍg4每c>d+VUCo%.g+iBkZJP U_~d>V)1Y@NDU[=ja$@<]zi?r E5 l% p<\1n؎H{mO,Q֐tc_l6t>\ ( QduPzOJYHzג2fL UYD51o*Xz<~V{s8MÔߵ͞5fSwn[v;#;ԋFh:{?es2~qQl5Q$[(8Tyq[X5ʤzlbwt^h6tlUD^bpik1nrw➇>%ѻt'nD;3{t՚<1!fOiwTR@cS{Ԟe./ \z/&e)16È7Ct`LZ"%H)^ЗZk|:]' а.CAƐIlD$-wBWт6gT3s.5%E9K^9^ԅϯv} hO _f6e=:%4op_w=})^o4s+ZO4+OWAO>חI ߯\#(<6&a(fҼR2})K铻N(lĽ(!J2vH d`&Q1b$ b@mjem=06ݗ YCTHJV!H3j 6ҪIB7svPMOM9~_q'a@7KΑ[JCl$vxDO Iۣ۪w9u:ꪋ75JW^FU\* /e VPf:WZGc-JkI!"RhJ>JQRSnx=1g-skQ)}l0)S%Id.Iak)%b 6P@,]:JE! 5R41By a.$T.%Sv:-+W-8Ni%T*Y4Σ͞D-"Sը5e]) #;9v])GwSG Vzbj A8j ޻R?zϹG jE.(ɈltNIjyȜNʢ+cF\ojj!tko46ƕ-?)ؘTeV>+ ,a\ '^_Xe_-@,ry0~Wk zm L_ӾbK$eȹE)"dU ,YzU@Fj]& tȚEVt{(^'ool!yEmY [ 6NFw|N| rq#IbHH\Cq#ؽ9;h\焂W?cxFMﯮϿ.x족SN,'I,Drq^fjƉط!EӺ/&Wqkv"X+J19ze9]JJ@ %ZL!Ś5$I<*񎺙}?~y|}yyS0>{qjӜb~W|=YqXQѩ\5۹s ܹbk~Yx'se3_2 JL% dEHP-s-Fϙ6C'%Ɩm(XCTM. ,=XkWy]icٜge:%r|2yqu u6>XfTXw_;#76ċ2Ujmsխ֋0{f?˥JyswNg[*-yer7P!\rvj7u o|AVvy׷|yͻܛʷTjy ;9Lpvh.,뭃5E2y/x(sJSs^`RbqcZm@7WځGVa΢[N4$;2\XdC]zٙcLi{YwB t|E5IJ!r Q"URA&: dkpSy*%a}vuV?vrv0cBClrw%qF1"$6E&bX {lZ>Eͥ$HRHBP2i+ ]"i;=Gi^ci~eޓ?weN.g-U<;NmV/J -|Q{bH띦tT/D=HpykrFv94Y# %nO)[|NQG5ZU./ZU"h 9rIOjVy @d *& Um`Ўb]:G򒔳+exQ !a:g d1tȵpЬuƒ{ L_]mdq˰#R{C# Ly9}ٻt+ pGx~z5ijHlN,9bh"yV֗u/ʑ!.ɋx<c 5x]1!eL>"ZatQ;ƲΨc.Q/>^>&Շ>ӇO*?T>]Apc E? *ozŵ2\^Kŵ2Z^q 1V\ +ƨ8Zʗt}WŎGa>sn˻oTFy,*-ZJ3a6$&Q-H[LN$#r!YYp5daBT2cI/LdÕf4~\%X$2|'LO3[:&S"]qv7og*4\)/fQ<uGHɸ<%~Evd[Fg`>L=Ӫdټ,яߖW~.Z~ne`f0gtgӓJ}nG&oWv;pG`'{uwOg}݈nf[Xh;D3XfXz}NϦ]dou}>n7L:1R:V$>>W-N󤨍c~'CM9;nI&t<ӋS?:˻?K}K4 8{ܣG G Vy=v]K t-dͧ^M!ꚏmpt4nLn )?L~v4mcfmxVetdX15s8W]\=[s@:l稽%81j3sA SbZaėG<& +"قE^Y`rdQ'G7Аj?p_v>xӒd^,'Q4>}uɏ/㿷ǞFp>/D}tYN}:,)/w&sП*d. 
[binary data: gzip-compressed kubelet.log from the zuul-output archive — contents not recoverable as text]
<^)&NF9׌1T!舢@-blÑ7Mr\ `P(ceԋm5vO~ޏĕ||r qMgk|Cm//>LYx*?~P) rfk[n$G]z#x~'&pH ɖwXTxJEu=[YUs0Feb=%L 6T 2{U(k$Йdepl799КspdT90W;8-Id+M\N־@mOsEf^107b(ՙeRδG~o5cN^`5ÕlKnzULL/c)&VfRԡ PbkA!1Q`%<'.tAX"RzAe̊c,k {J3k>Tg7$ݬy>N}8,kK )^~?kO~Z/1bxRt+q&iW,RP^U+֪ŒŜy^*g~ILLgƝ%}eQU?[ZRǶU9[.?xbK=B@cyUFsk018'MPN(bVz1q@7qjܭ]|E*@+(۱}o?.Jo+bFخ>&Ƴ4>ӸȃDz&l[?G}H=6ŻnWf .xO`߹t3 ˑϦLjGw+WxQ;VѵK߈aR]^y9Z^P ͬBUu17%F8цG"taUPրvj7+6MO t.z%V i҄=w+:t X%bIҚ Iapc$t 1h2PRXkqZuc{0k;n>s:cϋʙ1fz_]V+EcS{ J_kC[8+CS^ 8)A;A1&Qc]e:I;~:VFƲ1R1s%O3.`r:b&K*ҏR)Dlr{>U=o( އ9(k#1r'^qKn*kHo i!LAycbXgaL)o2@2dgS[5qk }-K!``RsnrLu͙H%o~lEߎ9)p>1uuq?t(Vqg$fKl&[LƠ2@ *.<W (xӶ<ؠznܚ{jcUdm=>Ei7X7nNR en̰1FsȄG;4X{64R^kd$XgLC(F6>xLZ^jo[5N`U^bP))#:[hfo,nP!ʶZn#زrFX*vL6wN9w_ڹ8БdN(ldq-J68I@=Hq|Bm󮕼 pn 5c#SP *JڒlG ʨRX%lH{Wp[E4JkYJl$U[ £!`DƜJ?lӈ<:E+$EJJfN3{6^2 | {r)gKk(u :^ikUgS.G1%hCHCGThb@iP8hac(dE΄\6*y x6L܎9ڄV2 Ԯ]KOqb&LvW[|4b&k2`BLI0Hh-!8F|-a@6JY7$K]FC&dH,:Ă≛G/YFT'j̪&nؒe ǡ*{D<j|jVBjf\2+3%2DZhMRq ]5%]#q$MeAD<?J H&bI$ J8{\&z՞錫O{մP+E>/nxD"}*p"c\]H!LYEe*'o}dG(:Er|WynkngsAGEaɟ$2 xHߩ]zĎq,=גqlWą9zPZ0C"DȬbavF0%  |l๐Yѡ>0y +9KQ\{ﰖ8{2X>!@}_,^^-NǴ>۴ͶۤϟO'Ǥwj:YZX1^}4~Yɻ׾#Fxk23U`cVBAKL*~j̀0qX3ғ7RemJ"[i*%!,gqIԄݕa#onGET 5ʌ) 0>Q[1;D; ҫg>ԻL'R@K] 8݀.rv~aeo'F\Z_/'1A Ȟ$B&Nd" D! hH`ǡdNY}/[c]lv_Q80Kg !+Np7tؗ;wIw6,BLhWZx"&!XIvJ d x?${Z"=H{7γ K ;,J{?X-i)!4Q*)<ܔ Kgp Җvg`?j~/AdeY֍,nAY,nhTq} Hy,hD ۓ;Q]\q-24n[i"ݨ3CM֍6.ف}fWzcvBzwh|w}~}J+/MIKwmXwyD:W&v7 S˙&h iƧU`^3n|kn K ?ft|-u+-k]Lr>lv}Sѯer=m_7EBa?nkN&7"sCw]$=qBa5b4^g=vtslzӰlǜcwK^vi h82}dV,r'g1 ùC^y/у:X&]:~E ZQB*Ͳ5I9X"⤕)M!fAQCRqpzU^/YKjq >'FRI~4dGQv4rmkeGwۊȍSl *o/u2? 
K 4B|Բ1Fڰp/qȘ1-Ӓ-c'vdAȅ$'D\;; (v/o}:ܾwwk$CaKͽhmM&+,p"״Xsl-xm@`k.PӬP;0ohý~r LrVDq6~LѲ"&c^:7k YyLSn]#:thi1c7ve{@ 1NqaLZEBNzBQW(!8؊vO$렌BrR F&ΓxKeL$,KSS_kp>oW!vhy7"t0*@K@#&Wxƣ5iH[*Y}*>uc>;r6`˻_E;LZiIq=!>T|u"2D(uinOnI{=;e.A"ZRjQQ/@EdQz8F@UiO׌3e AN3<2X% 2)iO8[ֳg^%5Վ{z].Y%MTFS\&Z$=MWTLiĂqUm/EDOXU`Cżw,JKHsQ<L0?OZ+o7{ÒSS ܱʀ m\cM* 3i"@1I]OG6&{nތdxwL4)C2 : / \[ƕ}%`3ag]QDd_-+X*.$ޡƐ`]NpQd0 $qmi[|Aq9 x|WcmarqhKBٲvc ^pb ty`A.>/*(?6$0e'&DS5o=dd Yiz-%(`R(oKtX$O@i ߣ&%>eA(hbH$K!SE Gcb%ɘR{4Q>piԚ(c '74΀# 0QRm[m1A}ͦ] ]Ȟ=2,]¤,PbSɃK%mp:x-4+湣\Z魩$Y`*7d>XepcچBS7LW"o+b`Z%\fU ϟ‰X!50Ј@}V;>6bWd7D;" 0$p+PQiϨ #L+2|}as"<^"qI`c:8x$(O?wE<[sy#90"H{"[0ZDGrA8UJIZI2 Fd%dS0 1?&'޳C3xz A_yjggZ; u @[?5;JOXU0SNa^WHeG?-+XE*Zq0=`Pd}rdV+)7sv5̖́VE UU@H.ԁhˉk mԛoߠL =::uk*.\M"uzؾY|_BtJ *5x T \V8( eMK'oͨV*_iYXZ~Z|}ŷwof"`Ѹ:}[nv#8R'/ ~=Ko![jRd{MB]jjgYͦi< zpqךժguGgZU#պ ]:i#m >LrIU m-&-RJ=/U ,^6/.W@Go^<o߿77ᅤ+y 0) BӃPTm۪[U͍ءjlwW-Vʯڙ[+篋/?~;ߍfZ-+r=kGEF1YO9Lڎ"r oT^Y."ZE~*/4?O5+f2+l6tpͅU]+@2KanfFs^?r7ڒ\9WDAPr0ɋ?Sgt2_Л`>3 + 3j9aj}JdlZ{1s'BpcPE4u0_MVj#v f(\ B'~Vc6 @X0B8o*H1PY1utq-3W4 9^ m+Rr})Uu*&W3UJX5XWgs% TcM^HCNW kBFp<,]5B+ ? 
]5B]t%{w #+%e4R4tutŴ1Opc{t-2Bv=]!]qpyNt͇]+DiIOWGHWUFt%$fCWW\ jJa`tu Qnz:-tByAPʡ)DN[?,61ԧ->*R酨|UoO0ػi/@כ2[\4.߽Z$!D|zqV5h׋[b=`k&$$tLTTĖ^z[VL2?b4ʐ~w6\Z=jb$`2ÜL '4ț!~]pj|",,P klR\ ]$N㌣`L0{F0ٚUNj(ߠJA:泩>}}A;Wk չe}pBc+F4g2#B& ]!>y8]!ʞ8}c [+ =Nf( JΈ~m>8 Z0;$VӌJRCʆ&Ԏh:]!޺:JR2S s]!\s+Dkd P {:B3fDWŊ|bWWejGw4k't$teFpԎ;MT;MOJ(zSw~Dz\m;6B9ff(Yb5Yuߡ\3j2+lU6tp9t( "TNdCWW\ ꂖf(5 :pT#l *Es+@:]!J[WHWBH®~ 28j&L'cA&)\']B7.aѫI06{uRe;tim.Ô7^u}ur 4u?|Rz$6׎Q tY5/ަņ VCNNO|d oG3$ḏ/$8<j):BjiiITp@houOʁRz:J4=QJΊk^ZuH<v[GZ)C\4;o̾/̱giͦ}k7T-M[*'=-l.R]MY9O ,Zq4)֥릭pm Jn.K7y{ZeDy2NDUEB0~y??o4~a^|~}ꇴ:5oN׵}鞨(=pW9&kt8.[J&b6z5sLUc)TQ*㔶sե ^XWեpljAZRzWWjeߺ(:R~x<ϘogN]e,Z0Ę}usP:*VZi FJM6;;SLZWkHcԻfmV)UwPRyT{eP Tg]kGfhnf(:&.\T rz7T49~o!S(hVr%;Іp#Ы kjաtҔsRav ޅKBcWYjOlQX~B{*R.գ` &`֙%E]htGjO` ݥf ӘJ2P4@ZEs l")>(%@ `->AA:: pqì AཹV)(J892zQp5(ϳ,+RX6[G_PaH6v\F+^eAUcܳ.RS+եq# v^n!G5~׍!!198k_JD2zN `<8Q4鸐 xOqrf3RU@ ǘ@Qc iD#N&YP;_f2񍩜]aMO2QݸZ5|@6=Dvxs:pqPmJ`"8*=$]I *#d`5yL_ 5P2&@ݚ2&@vtm%pu~V% 9ՠj%QwV5κjUNa2 kaN3_LzRčvt%iV1kz0+t!oP NC` Yk6ՔWλå!`PrAwʃf g|H28Ĥen Zv Kk|mu<ήT<!GR  R Gå^B}s{-nlKFkkxbHcSʙ']D0?{wT~'67144.>\N}uXu\XKvM)W 'O/l\K6^_~}|1ֶޞn?Y, \6[|҇t!ߌqYjݏ{8>mdɪxr[K7wO0ý?c܍u]aW@?OFIO40SEd@j97py19[=a(_b(34Z@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $IBt$W.'  J@@yz90O];(cVFS>4ZfOVߡ5G^0+eO9Qf?w?~bݿҧ#DRÎy c8vǏYl7P+eZ׼{+j]_@j7ej[rJizFc<-oW>q$~Sb6w ?qd.?ˠs|uo]סvßt܇p{:V,A= IPOzԓ$'A= IPOzԓ$'A= IPOzԓ$'A= IPOzԓ$'A= IPOzԓ$'A= IPOzԓ Y.)'Z8L@>驌^#aGvIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$^p.]>hZPpY-&  xdP(IF@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $Ip^Z^[KMǨw׷.0Z_w* !] 
K .al%u~1% ?ۣ4K_:[߽sCk췡yU|]E.=4~0 5/\BWj\ qҢj~1tF@ks2 +&Y]pj1t5:Z ] ٛA^ ]Y-`CW_hztjCjn1tq9fp@iPK+ǟspZ̙VNW%{HW6*fIt5?l?Z~jBW/*ì˹j]Ρ6<{uK+Cv;ɁqQƺn፨ɡq/;]`?˛P敲'֜g[]lV;M:T8asbk4$tǏ{Tnρ]mOvUr_]u͛Ow 56Z(}c7eYco_݆p~:C鿹Ήܶr >_G|ܽ3kY [Ӿڧ`3s/l~t0,ے<0E7Z.0VO!r[c*o'eҥcSj%O6U}J8P~b7/Z8]bK)Ը=Oۋ2eLR_[*9O]7=QuTBɶ8E6&"ϱȏ!7l{q̩S9bGu-fI}Ezg{z5!m.>JsR_'cs +2m1tEv)ts+_ ]@&Z]0+p^ ]?{Wƍejjqv2٪aOj]N+Y%rh*t1W!Ph$ Yw|5t'Z(nB ~߳%3VE؜{ 8!h逖P ۲E'=][ܒ6Ul3tU ]ZŰtUP rP0v3tU{& Ji{zt%}5ֶ;}EtEhPmRɞ ]W!*w \ЕOWR=]AҠU Ҡ猪UtUPZ+$y[֝WUA~*(Mo EȬ+ ;tUО逖Q~f-Bu 2BW=m/+B)yo EB)W\tB W(@E'[)+Ǜl+O ı/@Б4_-vԓZZio9kjX5SFom>Ci _1uUTKԍ63OP)ayu0fp&"ë ׫/ #/tfEwޑ9z+_WC;t$SR> U3.^t_(?9?k/dSQpí } N/^?o.-byq[o_KKsLeh8# <"ɏ@T`y:ΨMJ#a.*m:~20T+n|KLrR&#H! ]^x/S(El s_H(4U1_mu.YE\\1:E ›.gQZ$χނd*aR>(I NVzA(N ٗ[ˬ_ddD6ิRPZJ<d%tLCΙj0 iE,՚0(R'H2h,Jcpv ))\hsKFZ$dYX, 9!@Ê=&|҅B7)TJMFxbA3#60"*~KO(RtYGnG nU1>֦>bQؠj-,%:O2N!ޤdC c-ci+su3~e.I\G)?OZgyސ~z4r`~wwNt{_^wA鷌@yDps8$닅(᰸?%3 yT+3 l0rDe*k׫.:lfn8%.jI}[1k ܁a%Ġ)wJZ{7 LY+TrRp&,PO@mjw`w==L{|ֻ;4[_\u_e4XTlQ1،!,lQ )j'2:+JtQ*KbT6ey,*1ZzJKasV))2Yf.-'EAH5Iqjݯ-t*eNˢ{7"WQ=H/y>=>o-&i6zeSGׄBxT*MTS?ÿT#}_S$ et)z#cཏ6d|·@6TV)ʘBXU6y{d,JS WjMO͐-*o޾9}P:> 3W ʶЇ D]Yq&j̋|оzv}M6CTV9])mlHW>iğ)3iv;QQXUeȰ aNdis6蝓*{Y8 ʃ !C8f}PXNhʞddg䲌&sF8% Yذ=''ޅ_df/WķŮ [Qf}'}rwyc5mC9,Y*FO6Ia m&s<:AF@ҩYU@$ :Ta A@NEt0Qʁ5YA?dziN~yMO/F7vm+zG6G8veY[M*NDSёd*b8*:Drlr\x/vK ˑ@& "#3щlNl'-c3*hehTwڔ\JFg f7wڄX4RZO_=j 峜 WqFwpe@ydw7Uno~44Z`nli^97cg'G=f8TCʃSOf[65+h߭Ci/$H~>~GU#&v8(URJ_)BsUWP:Qkʪի^eTYۛ]/q8tLt:B1|_=64UwmX\ Z%4B R5!]bIz*&9s[nD+sV3ZȬEz&Gezrzr-nV#|/#8c̜xʃsμ0w1ah5g1a(mˎ,b 6U=~]GM,kJ*J +FW%+44<ѲhӺeТeЊeIw֠RD"PچAB"`יhYvcQ,BJD"Z{-bVdR:! 
-X&s5gb Ci=~|S<ʐ+nfzG6kGRߊ">E\v79MoP@M7:Eςa $P)lc[΁)z;(-Z?EoN\E2Bba"Ό5@*X&Mf`ea;lɦm:@EÊ̛OdcΣ(7C-t&Eٕ@^$c@ނcy Չ.wq8ܰ@s}<@ we#]9|+PGJ0,ݢðt+b{YJl$5J bH;UN UJDǽq[7nūek y6޺/]Ӥ}-Qe zR{v6H Nd:( PȄ@7br"𐔧Nb@qevD5gwGMLjYi|_6L5Nd[]$[{UhXhxE_XywC,!db\q+rf g3hn0l1{N;HDI.zL m(!gr)n\+ffJc\X8 a.Ğ OʅONe|tLLbdǟ'p84Lg9cBo=RBs]4dL`Ra,g)Qqȼ]o9W;Ge 8d2f&{/|:˒Fx_ՒG˲LJAj,~"둍=12lrŊvDD02Wt@i)rGl75soJqǎ^j7ui-UI2D1QQ#{5658Z DY'$iA-d )yT5!/YdK7(D lzȹk>{O!f`D,""-C7r|0nIF$S1yOŸcW<,`|7<|+Xnr뜵_Y cvq_kѫ$D&TҞ`z&(ٓ+]o4Hg'Mi}p\e'{pϠ,ZwW!(.* VMcN|Q4;YΙm'72é3$d-0+8#!q͘p4y2eA U:hS25 >{ *+p ~WFD(JI"NJP1j'!9)ADbsn:+z}:|/Ɨ^%^Xρzs` C.Q<~hX>mtts؛ (A {U"O?C{|DO'z+P$ A]tiA;bZ}iŞ(ry'ZG6*0΁Sr,4rbWQ':W6x&^plAvWw}cN?zS;psE :* jPdxFd` 7kfeH\^Z|6ZW Ё{Qxvf]_vDW.xˍ?x6N2,UEhӳѻwԼ tIi}6UnQn_d¯/~]v3ں.jn$l{oN6ͬ>צnξ8+VOgY֊-h8HH, dcSFXp\e xC >Ipvszzcjn]ScBۙ0/d/nFnzHHc뤀(3i7c޳mko7VR|/Ֆ=-.ʾ3O;݃9  1Tdg0p:EJ^KN'&P6=QNyiԑh\bS,T9T+ #~*cSך;/X ZDpM=}f|>e q݌^#.f٦[ ˻;ʛ⪣m.ɉ[H@T+Gb$z.z:-\J|In1$>QzwJPjAR',O((w&,<̒dbIC(2rDVn a:*e޳$4tJK"'rV-嬖퓘XjBQsƛgUBQLrirA9S%G *`ΐ "p[ FdG{HY@ -M߱ T)d$11f<:UN줕w,J*9%zƫÅ7˫5Ad-}Ds֨Цu=bW8>[) ]jrG~]բeV``OI3?f =X-Χ6ϨJ{%23*wqyZ5ZsWO5Qo'4iLM{>PLOK8FHA8rGr˵H$Q h9d|g1j֙&y.4JQ#e &'V& 1_^p RlOB}of|oy-z<\p9*ۨ4S"܁xg5 n8\7mljxdy )WO>)MB7ZQP"EA, WcPȸqZ~D{ya݆""$b+5PHʡII‹ۊCAKf#:rIsy.Gښ$ T@vp ,(?. W{9-xa:A*+H;]Q m(`%/$ HIԜ#"H8* }f *hY!q^:߳Kar շf֣ )I?sXui"|jo>8ҫZ0,N?8ZPGa2 `TM FC cr&XI)v6gnxy0#ġ5ϖkI䍣H''Jǻp6Gu(sPbZT~\Ocb,Ч0/AnQߚj:<&zTΨVpeTP4q nqztFU@}ͩi]->m}bvM`Q%&~pp|X[nvpG?~k e$IGqHgmðalfY>І8+fx4\.j`irQlu]:q)#u6dO}|q2lƋG)%lCy FǕLNpt?zeo>;_ppJy4 =*A mJ -n|qe){[|(cjcvW[n-@~>f~|NwT/hծ;8rѲXU$KA<ŋ6Vt$yoA{Fx|Zkb}f4ůl^,2]W[Vm>9^d~[6y&g[albT !~}$2 򐠃5O+8T'lH9_kD^i#=`ý&8n;9qڳv-NVCNҾ{C놼<93I?XL|%d~"Zfgi;rvc^s.Zt_`N9S)ᚲ]U u$μ]v?jc>ISw\!J(&T[QrW@P[^J_PB@8͗ JzW;ͰElR Iwl?R}E_|>9BJ ~wf?WlSѽIχ7M'! 
,0ZY`a覼B)5AXXp&I!]`3R\fte,SוQưj |t].VteוQa\t5]EԐ X *n؊6J)EW3U)4+ѕf+E+Djdžte<@ҕgOҕ2M]WF]SWHno8swfraP^-8]}$tt:<2:\_w߬.nn_<ʹ{AW.?E:\toS㛋뛳% ~ԋk/hT گR> :Rmgl)F_Kܴ{p}&ĺO/ f|ݚ侮ɵ/(~X0%Ռ~67qQ88JX &p&[^85+n:H32\lFWFue;_+%])vteǞ9X"GQ=` u(K?=|lFWKdWFd2.|А8oFWЊue]PWLҕѕJ3A?vej "rC2ѕFlEWFc>]WF)Kv5G]E1ށkFW]mHSוQ[t5C]%f- 3v͌]-NɠQ2>G] Ys>YܰrvN'IEw7{Dv72ί/·YZnu⎨bY~|I{\v\7W' @8"*<#yB̵\G{%ΰꚊuǝg}ӓjywERˏe[|78_־&ICց1wtJuKn/@fk,}d(]tkBׯ%2KJ> 2?bK v.6hy#F!Fz:qA'F>8g- ]#MEI04BWU9nHW]nVt4u]F]=4jC2`JqѕѢL]WFjcCR`v+enEWFy2ʴjU J1I3R\]-O>2f+Nҕ'׌6ҊlؕQ/pKcWA .5];J -hDp[ѕ'2N0OCˣM#3G4<2Ml7&x&[HH8v>5+Њ6M^WJ ]PWޥ ؐ9v< W])m$ؕ74Ԯ\+2Z?ΠQNmgѕx ҕGߎ mSוQ2Mt°;zbI5 ԹH{޿dD]3Il A(-Z4*|wL԰1%`xSƴ?&@4Tgl<1.;Co&/}}]xWm2!vfqs1Ӏ /T)ӓCfM_/mI_勋n.7껫|?/..OT_ME]km>WOMٺkKۻXoE?JNW/FZkv勏Bz+1<0‰ӠH3 ⱗic=apAX [BKR`q].Vte_zue]PWq]iE7(!, QjHW ,ѕb32c:2ꊈhFW\3A=(w־.-eW pkEWF{FQEW3U Ʈ 84ĭ*s+e}a{vNgp F;+ Pu8])pt ѕǮ2,cWsԕ` 'L ><;E9!h2H{i(I 8a{SH@͜ 4"L9&ғ0n&P\f-N~ZQ&>u5}'(Z#mX8&6VG*.zlCА 8fte1+jC]PW>@-ʀ])nf+(yɮ+.: ,ѕftQ{i2ʩYtEY+~ 83veZѕM9Ϯrj;,z]q @-ʀFnw͌](v⢫gUcjHW]H+RZ(qbrcC ftѕ"L]WFIjJ5+G=swGueLd휵7_O=r8!8x a&#&~4ra3g.ҲC:FU23,Viz[݇}KNCK b2\nf70K鳥V}:q)Fj0{H#Gre|4BWicqt+؊XkeFRʢ ⳞwT])0Ќ ]-ũ(oճ %o:}])05+]-(墫gfCt?\x.P+2Z|g(iu)5+NQѕk3h8(9- ]CR`]n;CFi2ʴdWsUr%]pCٕft_|ן(]QW.tѕzhEWF2h]QWΓ NytoiߘrߞyR+M6JVΓ8$l0;MMGچi-:ل,ģ>hIW;Jq5+b]}AGQ}~ʉ^5J5|J&ͤk֢m{}]PN7χҢ} UYo޻}Um{ɦ7#_h7/wûB˷7$^6._iJePB ~)%s[g\n~Ћ&S~S`?U15Tc~akpEꕶ뭸WY5kjּ}0>qWݾc'^ʶ|XXNzm*w|a޿{?i Fki|wT=#*;k-#htooo6nJ (ĭ۾/ׯpɐ=JF!azAn5 @Foo~j|]}@{5xv7o owZQW7=.k@ Аj$3>& ԅ!;F@6Tף>?] 
z"%\]s]ң/jt}屟uidp؏ T18Tk@8:ǨIwsuh3{iI8(6CP\0ZKE+Ch%VC]]{i&Ao4K2be"i&C$(UmZ9>6f}f1+ zm+־ӌ 7$م f#Sd4J睃e m{tHzuA4v2)A䔵m]o^oY1k>ZѮ%(|HXV[JTHw=R4BPC!gCGMzq=ł`"ǜ5$ Y<5WprsF5f4\^X^^YpPBF{@5]޷E 3x/;f4hj%61!5!a)vzoq9{͢dЌ-1Ӥ9_gi8 (eꆞ5XЅ.~XRhYP/yP*ig mm[աՆIYj˸Ғ Ajк#$ ^!^AD *^XQoJ٠wþ]l+LmCfѾhTǾݻC8Z* Af_>X*EIٻ6,Wݴ\`wa|u0g,5"g>8KٴP&() ElD7Lcr^5 5$DjeDPZl,@  ^ B9TE5[X[mHZ\CŌUDZ1&"9ih>(ؼ($0D!N&crOLvl7-kN6Zp v`c{{Y]`!00 ƛAvFætdUt4CҕjIJ2Ba1%OpHv9k=/sP(39՜ HDNW2^i C.k<<@= KH`Ge YLZ[]x $nGfp6c`Q*YȩՏDC}^jygE3ʃ d-Š.k Hb"wUcӍ^fno]@,M$C.2The1h6A@F4= R=A /ѡB-10 zv5Ԙ jA%PKcEX A%DEWHPl5CkHhmB[v#hX<8j*He#kvqAkl:% EBiv%&HTyP*Z(o*"㬪ZUZl@VHڳF& h%A J"-Ei*5.-]zrDu 2促 %0lӨ{ŷ%e SLЅ![[&w'oAwRHs1>M!'7і|˥//pg֒bl^ 4&f3?t~?ڮ16ziqq|ę,׋/тs"B4s'm+.bAgZۧ{n.m?VċC+->7߷ٻ4DWmZ0[ 3֎[&eB #WE#98q@qso?6@RX@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; uIRv '~ DpG Dh? (:E'㏛g'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N vBY1H-8N 5b'zqN @BN v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b':QQdpz'?#;R;v` ; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@pkS`,>z+jJ^moۛ6V}ge~{| +azWRqK0%5K@5NC) +6J CWWQڣRLWCW;@tAa0 ]Z뎝g[) EW؋ajw"_&O+0] ]T +FqkWRq1xt DWXaatE(=)ҕ^[5]Qw$";]) +m@tFa9 ]ZeSh ko iںfȻP-8fk[jKܺͶ7e΄Bf\7gI3A|\:>| 6{wnUC*ϫw==W#@{W008_a_x&vym[\T(jz)-PJ~Cǫb 74-Mtc3-ogI[deۅٸ},bl~??G&Ճ(w1-[nrxU(=_-~_w;1YngApHg՟~e7]`g+(tEh9vJNTʩ pa"Z;]J#N袘ѕBH; ]\5":;]ʭLWCWV(o@tE,݈ߍBWVԆ HKnP+kQW LWHW^# 8sepGOWRY #+ǹ2RS ZstE(=/"]E~75 ]ZyKR3]$]S[Z > b;}{ۘ%2?Q=#PiWc1|bZɤ覘j1Z1Ñ\H?Ͻh/FUUy~(lD7Z]\)r-xqZ1ul{M髩˛lnq|Z \,o~n~X?M1IQ⤅IU_ct&s͖Ws}3mf)~T7@>Ыtܶk$7|;n ob݆SΜfl D:,P}֘!TogwWwՉؼOӔi~tD\KZ7eI%PTF Q˘SY嬓0QlU؀| ؜=- = 껴hSm^Μ MR]cM Iv%꒏GJkk!;tg8*U% f e?ZUڊ2]~~`7縹%bbM+W}(Q2{_Ám_X{KM7| eAK+a-VS_F)p5 ZqzzisN\**T^{JKke EJ鲧^]42۵.]tAY1iTJTmLėoE(LKݨ7p S7|&`"VQSQҧG'qy *Dc4ǒH]zEr"PjSSEevj)sM+w톕s̵ufXvչt2\>H}yg}}wmmRy U[y!OJm^ݰ)%TOdJM(Su4 3Fbswn"!d ~;Vi毻 [$뜿M/V!utu}bqCnu!M׹F,S|/~?<|~3[Acp/yo?I1dzGzNjF뿺׹LkB?h~q#ŝNX,U: dXlR.755R1I ;UO n1/,_Hˍg ha:09ypTl-yT/zQ(w鉿,f UW,SJ9*zPt!4QrZ1nA[AhQ(" \=D H%i0kUŬXOV#ԤՑir]l7,KvXrlrط(Oi>|Hs[puaÿLƿF^]g[bzpp(k&0(O)$YO)C A#eJI$x 
e:ܬCqp6فo~s&~KWF,!Bt4x5kpJ Uh:0uW.Z؁&D:[BQODGҞEs5S`"A41*,:&dq̓`d4MN禸E"k'/MR$UX8.(#۠c0ɺlcL7 mcw7ףmX0OF]M"nYڂ;^ږGLf?fsUa-yz >|1'?tu`!w6Οi vO]ggX Kx2t<ыL0f7Gv<[izsW%@ymn}o|Wwu~!:LtʟGf3=4tˮpFEzۅUk@Yڋvu<1еHLHKkɐ1)_^񀚖zWWz-Y 6qzaW3&I:$RCw`_5SϸgNY?Y'ti]8CwH+iSVmzBT[uԝ&w`M 0] 1zC[fCE5ahK (DY,2%xh:˘CxeU<](ahӮ΃7SGͻAKY5dNXGE HVT`c4M $eB61h/)6ѳ}6a*f<@*w7!d,S$Ф#%/xS`,0͋žh#.+I dj02 {_BR>"[B1ol<@&lJzy$6e#嵿|Yma4̰p})ٙ.+_t'fŅi,v0x)]wbL.Y0 9-Ր2t:o01FYNNQh$,J@3F(gMض_YCz']gU.Z@E'μ}HY[ٿOGyyl)}i7Qཋ『΄ &pNjtmY70-jgIvXOƛ.*H]N 3.cz]"[/!X@^NHy@(2و^Fp[\v $n\OPʗ I h  h2+*d&A]F"@tQ 2;.(l%0g#OHQe S0ɾs̮8 >Ma)MCԺ^?"jݟ:t?+y,k|A!\" X^M6AI u)[#Hf۶zWKJA]Ӥa$Ziu@,Z{fJ3_L3/ /b"-ݵx?1˫懿n<߹KL #5ΨdTBFnj4 l5N,Ԟ-?J"*SRZUmM N̋ s;CQ-η>PJ;]Opb&vګ= -t G̢d|q If$^{`4`FAˀƴݵ-ƺ7!32,:krkDϖ}"&,EfRM.nLpAktTc_SsK=Oj]}?BR%*w*A𡋔SgfD1 'X{asƎ^-vtǦt$&r{|}5O lPRd]WTZo%ow6B14(]k:`Pj,A%eT"h б$紲rB7{7=נ~ȃv!r7F6j}q x1]L]/_3jhE55˖ߨTrJ 2.$dr]&P6ֻZ2h,ؒ')$)r H钁:]2>:LʢSa}ۤ>/B[A=QƣX*lxEY 3b0El#KB8%(keKg[E-MFi%`,{Jߍvv" ;8#kQV&uaST" (³E]MYlS;DV6DlܵT ʓsAh*ƃ9om"-bЍ8®蓙K?RTܗLd #NT.+ ɒhғ+$#2T$NmX0Y* 4+G}>{*ک'c(Zޕ5q$ҡ;@# ˚#]yQ'!lPg}3$@6n:BDꫮ̯22.{`~ϢCCI1 MN(@Ȭ73lIԑiް`w,~tcbq)Cְ&4qTivq֖KƍDʢ=~*),4)C1)Z"mIN.6qRia:y5#.S0hڔeAr44IcRv`WsI2wdߗou6mۃpô\k]FlwO%.qaLwB#sG%>ZɘCY9X5YrXUUWI3W10`uUCVгBk48aɁ"$=P4{P[nW} (MVfWkʺK'Kӂ=*s+l<(u$Lz.'ϵ4mHuE:8x$(Os" -5H3"90"5H@lQl]@ iA*$i'p._ O$PI}{֌wy̅A4+~ZZx{CxtV5ܸzQݱ ןna&OR&W ?-+|*iT@ G7d=B9a328ҊsUU:A5<--5$'@r@H.auO& +Vp%Q8/MQd)UW)zwjx.3WHD=m.hflJ *5mz \kh愲Nc+3 Ƅ~_\b8n-_Pbvu6C+0hǣݮmY} [AȆܓwty;Yw1M5?ķt2\hv,+#^7UA^+:KM6}Qb !b:_Өmh2cǯ߾ׇ_}wH9|ݛw?+x`Ql` ' nu7?еmko57borԋM!oP~(Z8_|$~?&\'xrjG}V=q\F1|䒡*vZ)+Vldjn%T57&hv@DXkTIe4L ~\N<8x+&bO?iLUa·4aVa;/C͇,'eGg3LoTn \'K/$*/lӨg>*1=)t#kP"[bI9188RPr֊d@Ӣ|5l 3gZf2 +mb\3N Zjh ,Bb! 
WΎH2O`5ɪL=2:pBFQJFRT<7])8[$5r,GUYEJ+6$XÑ8 '.iȦKsmӧVu yvu#W_^D ۡZBVtUd/ZZ"ȅ`,%^fKE5$K%0Ad}gm}lf{*Q7jy'+eu4*kCTVRbVZJ{SR`\ 6NN*Cqm6Jo0 ~ՋR661vݿpv2{a[/w>ڕs>7[_^F JG4*p=cM-Ȗ9&Db<Tc4{FxF@Yn_݀H1TgV*H>"I'`t6f+P Ε 1MIka9)TgفDi >fΆ<)FTL>T`ɇҩ@swVs^CLQ Ǫ-T;ޒI0g5$'Z3O#̃ c" 3G^1SE'Q.!jL*bҢ̔ Zߜַ2m]kVhp@c!P;EM{v#0@x  qTCwI0ToӸ%R9ΜIq$B) IH)wJQ1ܻ\nl][^m>مrf'j.98Oy;u6JfӳyG}Gx?wפh4;0=ڲ8<;98LukL+b6OEָ-yPn|vu X&Mǣp6jtT NLM#˸γ!$#Tǎ;Р1lKF\`>E:j #M=E⩂/N?%ydj֕ު,S/nӇOCnӇzKkZu ibV -L:0xM-pyp1_pq"Ejcf7Nvʒb?l~t\e8uӡ|t5ƒYLi#nk%\O {x)mt㟦ӳ?snv{"|yх.Y:7\?S^LF75usíc6?|#]  >30a=e  /h 1x&nѬbb t򞎖úe٭5A4*[ɴ1xd& NMd2] څ6q0q(aI}2I%'Jv(u&e+-mO(9a2!Ct ]\KYW rvBRtt/TRe`ŻCWWufO(jJ)b])E)7+˘ ]ZD P ;HWZq%MJ+AIw\yW r-NWR>>*jG3[PvB=]%]YZ׎3ד'sr|!墦Af$>]F_NDH~_*~_lhߑ%I.<~(p!շ0nnR_Fzr觟V'M@ c&c1u)Ml՝y=uh^hYA5Pb PU}륨.')kQH$KZ\/ U'~JhI9pV\bĒ103U !k#\ޙD[(moM|9rh˟XKt\nhy !]tw+cjn]`Kdg 2BR6=]!]10U]`cLg Z RvBLttŵwר%=]!] CT+Dw JBt(Mwt3x28 0LOO:o"5 ShS ʹz-%ZX`Y&?.]<(7'BBƖ._*ɵObyoȗˏnغ~u{z1QJLH~ޅ>x7L/qլ8Jg-`7;()̊[y5|u7^-hT!)~+ųyNԕ8<RrTKKCZWk~|-1CG*_)7@*֤8/SDY!( :./eY͠x;5MKΖIPtmjjRѶ3hs M3\!c HuXCuXCVXXCcMO!RFRBt(yOWHWhӂ» qgM3;V~+ QEt?te,ѢKQqKL\ϝ~t(msteŵ6]В(u㍻]-Ly5?ʹFz!rҖGDz0^*aR)JD9;?'Keyq= `Q7{ШI02QK-L2:'Ͳ16jέ2f?ryjd{PJI, 0'e'- ե`+,%ˆpsQ߼_&z1 V1;朾0]Wnh %$ +];Ch KBt%m+@ij]qI 2BWT̝P j{CD1% 8v+Dٶ{=]}0.;DWtǺBVv-'t(jJQb#Iw~;${{XOjsu$_P(GHjdDò<3l>3]TW*`CW6C֪UEHW/ TC  u+$FzteA]T`1 w8 SyʭБ^]9pV)wU1p`R3?*J^$]нT\ xj0$ۅ=^H zH;W=[R*{>x0t0D}C{V&M;]_aǕ0hNW%HW/ЃA%`Uz(tUZwb3"X]1`'CW< =]Uƍt銐ܠ+B%P *\%BW=]1J9 LV'/< rC>(HW/4 C8 bv@ fw(-t銹 SlOUapj'UEdQ,rBW Ґ0L0h>d۵l#]y{,n@~wD8L 3ݗb0;}c+g.F"hU`QbkWwA [=p6h9”p0Slbr l}AEƌ'f{N~#~aSLu'N1d)PJtuԃR+W7ꊮ*\+BW;]URtJ*K=(:j0tUxWV*J^ ]@= bRU8bwbԷ󑮾 ]&#vr0tpwUѢ;]U[%Gz9t 0qapOݖ0dNW#]D$m*h:{_]*\Nr:EK>~GYZY??9H Rr L %`]Y(ȯr_xy'+êlZ8׷mv[JR+Iv)ި&`L}d0P(u}2X_GےvzIUelvy+޹)(U=k~OV};շ~SAՋ7miSޠzʏF[|]IiMΞU J`runqYy~D2Gdl!~XW{Ix{}=r=IcAbCZ՚<ga\|;=X,j7ve ǿuޜfos,7BX}w#9uH#'XnP2lkjŜ͎_&Ox|7acKD<2| HHa.RۀNY*A8( u'SbK%wtZ.,7+.aϵ=#S.n%{pn>dZulz1wmXeop3ΗlE2 ~.<&M[*wWԵ;6Cx3C \rBY+UvCݫT@{2+ Z`Et11T aC;SI: uA&k ;V 
7p~T>)K&#2Rڒr"C̚X"b^JsR2ϑC6f9MG f}-߽\Nm@-8{rgnƞ\NnC?+Lّ"%Rsلd5([KN\2 h@If]u'ˆCJ |Dmi2?N.' TuAv؊kW׭4}oFy>܆:;Kkm{ѡJB+Y35g]_*}6z>&itZs?a6߳ެL3Ir9Y~~.֬:mާ-4aZ@4o 2&fg?Xԁhl&pۇxWkC|g7%?'?wzdJŰu22 }pP1ZT:uj2]˂`Ȇ$sI [M+ES >бSǝ/ G1.bօ [|r9mYZL>>]|;Y0Cb੯^9#UK6<ݺE Æۃ}IWŢ>nU;ݢ߈]u5wt"Hi:ie v9ZJ׍hU%&: IN%#={Yt~/KԷ^(,;VswY` H˖c]T, C$}v0FVE9 )R(,"%g `aY'hdi왶~{Bґi*Ssٙ_tqВ U (h]dA%q$Jʛ P*#ytȎqRPbnR2BXIDasK 캂֡g4ϛ0˥}Zw&'Gfvxr9|5좒J_L;-KO##XzXs=c߼k`K&P׼rlkY*†@ZAA]wpHWR*!`Q[v$SI* J>'[ \N4zE0Iu͌yU:$X.tsYpzCZe%bE:W4~:4,sƶ JI!Ŕ6JF^" G1:!S]LVu\X[~ 5I AtuEthlnj;.VٛH&-kߌ7h;" us.@L9Ѣ%ȱ'8^ %}k[6>8Bf0פ,YڨYߘ$"x1vq>lZ.:\/ؙ|ˈ1#Ȉ##>(Y>9W"3 N@EFYcK66kSFtw8SEdH3!h)`1r$-!XS;GqS͍r)ٙˋc^#/qpX|)sUŨ19* Qt:">L:ÎW,LJ@a~4X揭|48=^!2q"DՏm׾BUiWR{^v0nqi*~x٧cٱJymea]%@٧Ű%͂)&HeXU_ ; W36ӐĘІC*웱%lI2N49BFm.#خD+LW"rQʷnǷzx%wZON.I2 )#21 :CQFd/ѝzl^LBձv^Tt'i/i֫~0#Q P cɉM#ϳ'Ԃ ΖڇK:0b-YM^/j/zgM#:EitٱDblݳ]l]A=+Bأjatj`/}̫U㦶+>|ݷjk5*&HБ[Ubyb ղ`G?msve#Aݎcq\.~J(PCUσ$EQ#LjUuuypˆh)ȏMEJvC 9d?)"Stp"S܎9(0y WE-6`MHRtH\4[`݆x8(|b|f`>ީYʦT"ei~[xx8Oo1 `5U+(>"LSO%Js#LeTΪɵ._'o/VmbbJ.978VkKrmW#x2b^^q|XQa$PHm>ì2ZƓR? ULd@Q8àZlDrDTY'AvJ#aրUVkw!+zECǀ7‭iU"bK(F[ : 9A 9[$5>!Ӫ,"%0-$Vo>wtVO۸痓]yi bzLx *JjR h[{-Y h}&H8_Z"rKaQB2#s#^)˵","pTH6s[ﳍ}*Q,C_dU\D3 Ds!P}1h\8ph4 H!`\ =`<i-I DpC@9 Q u$Jglq`5z i OSD$NKݭbs7I)y=I/[3ݥЇ+2PEF V1a1LPG y&A1Ytb+nd6?iU=ym'P>!(񈦪-X֒%` EJkkN4Oƍߟ*h:e^⩟'ԫ[eΛv>49ڱwMBnOMcmƐAcmUZyi^TՏ[:H-:!_J뺣@\}K:~IZء,EGUZeZh?he[V>'3_3;nn<)i>1#(-+;MD7Dq"!M׭qM3S0L+=rί7Eq S\p-m6xx#/[4%\E7(ϕw}^ox3m`:^`]7|dxy5 Znu7{/ 턷1bU]$szd6 Ԙen]++}" oF[S\s`S5>2cbQDF H!pXGVXF66EJT>{]q%訬/@wSqi1@ ')4řE:8{׉KIWL;3^p"iN);ą{9ڜᆡ8''+Tkb>q%KA?SEV{Q\EKr$ EoL?΍i4Gvcz{; bJDL54@<5XL&NZ*T.@:6}=k) 0JE;t~vRr࿔0SAbKF9 "%$:),%NqG([qce髾b'E[ H/)#)px;)R! Ta01\g!u<zH eVѸT9EdrFFhGTl&j}J*w1jz'3D|;DHǭ9H 08BV ]"Ad/c{$csG~!k_Cpk֢rzgP|&w 'G+!| ~_~h;p `hn)1HXRL>U1ańFD@aZF&NF휍D&t1xnHFI 6Frxvr :q>4";`r^/?AoA~O![=BZ땖59Ew`CE,80RKpk]0GٔHKDjتb;go$t;o΍;=hpsod{O,hĉ\2sƅ * \8!8 yi3o;#NDK_$sH{ 1 m,Z"!4`삢0J,}^{ymLATUv.u+tsss1?#m.#FkA1 EA`xx3pMut{ՎhSЭ4*kc&PfTetۨQ1,%! 
j@Cȹ[W dzd^|}'g5Bk+[lҲ`WsYRrw>z G' ),<\p}ۇQR(] *0()eiB)gFzJcd/]Փჴm0ng$nzݖko[ePY-bYҤpg vXZ-uIo)dUMp мi6-wqQ)C {Y/@Ìj/7P)pcv5-HMGTgY8z5";.FxPoHGހa|N9Аȝ>[]5t][RTחx-\(2]yqxd2Ƀf+C*C| ^Z$ԧSa4sKu} AHRVa)5;cƌL")豉`Sѱ#3rtUtLn3ر "3" Pn2:>\&d:ٿJ|ٯ%<c2$YO+HC4@Z &P6TDGɿTþ%zF*.F L9q0/׆yQNH8Ϯan@EE[B)=ߢ;WDwǷ#زYj];s r#iUD]>5StwFTNȀ.&'MF:ZwXay+kq5AWZnoyzwkm#YاdGKuw}|NNlKK,&^'>Ë))e 1v} 953U_U]tzO(/^w$[=Pf_zhhԿSXC+7/^f ~_ŦݪN tW f32E9ȝwa cO)!Oq.,QvC 7A)5 Q$VMOBYt$J47uN M. Cގh& zmtPWJL3H~|xKޑL}w!8ǟzwlai={XL?a⮛yg>-ҁaxmj|-_Ԟ eV}KH^R)JvyNݟ٫=u|:v|ȥ\ȣ EE|xiUYȲn$]wHW}::2ҵ؎L녘Qo/_ig0NaKNaVb2BId>oo݌GY7L7<'^%>%ku,ȧrdoןOQScķlk+ƾ՞Wclj,{\>M~kcshfNkۯ~\ͯf~sC3oL3'(5K>})[;BQ/GR;4$0;yFMI($h2*&PN7}XGcp1I9g- n0Kg6]4PtFB2\{NPŘ󾀴0I$􈘊Y'ti݁8n[==tj^= Fz vǘ"KޟjdbdlT?6U\ciQifJfi|ͨ@ l ĵea:|zÂ+r#\=K4ZUغ*.c+:\U);+h @ \Uq=]U);++PWU\VΏpxm\U \Uq8,}URW!\ 'skW}*vrU-`/5gONd󺁺8ۦv74>Cz*Nx}̭ L_>/g+{XFm._u#o2M|Dh-69+KEt HI!sUyl"-m|?oWP{_ί{`hpgU\ôo_dn0#;oc%Lo쒗d'ȊVn4 -/ `!x]q)"JtB>(P9hCF〧v@(ʰ[͕#lp1~HC5{vZdɡ 2^>ٓd\Fje\PʋE_ZoJ2D6a $cK L=V7T: =R; l,[3=E8vgI_m[sUN遡{5jd}r P^煸)Q u1Y#_m[ cѕK6R$JKnRBrarQ PZ#c3q#c; ic7B?bbY7~Yǣ->.*凿< bq#6.+9dPgBJ^#M +$l䓲iٽP1**lj뒅~S>¾@_xoK"MZ|\̡v3x$jhګQ{d;4>9阎DI6, L0:F`6` >v׶Bh$P{Ic$U0Y7(A,;BhُS?.2U` "6ӏ"lrDj/ȜwoVI9L!Y)"zd7IT8sLdJ SbOZ$71"6g?"~xK#b=W{HfZX\TqQ8}a HR5OYkrH>5T4^b Ozša3x,xa/khQl2}< n~|G McĻ:u[9=5ѡ7k$ ]`ޝɁR;9Q=krI|,3U[38ڑ4( 9 !(C$l:ɐːȠc/)5qLJzA 0jmfR;^bv^EnMs`M(Eyi"u ۷yK{0wC<賗29bdvh)zOFE j}z #jű@UeBk&u BH%;kdF,Rm1r,:85 wuh,C2:dE{ 0N),28ɨI'\Pxd4QRj40vU.~|:0_h%%90(,2THxJ*!+Ŏ S(;vr~gK&RG@e\@ ό y[ݑ[xmuʍBo L-;PU꘏xcB4j ox a@ a ,A"s.if4 +RvD!ٕ5/Lܘ{EW}5Jch& +EuZZg)6 D!C;Ec)\'xAr^~Q{R97*#Bk@Ic)YҹH55 جIYZz=Zn3V]= T5Rw=_:[MЅMA@v} fY%hq׮N+$6%U DE𺱝5@;df]3j{%NPVpd咲ڐ,DcK]%2SEI,Q *2@K[P/h=0jPLTDFaXT NH &6THQt=O:,/Z M=/v &d]k4sT$b2McA7CjCzπ˴$tcԮ s3_/Xtm5*G1~v?:~iJEk|y+G0qG"C"$-jTVY(aQ:AyCZ8y3 TfX0|AЛE]XSSn],SQz"QP c$8B#hVvhUq9f,Tjͪz=4s)Z NoQF{KYlZ8dBoZrQ ʗ1_n$JYRN$.gm+.rm^c+"耢ıV]MB!Ow[nxyX\{4:ZPAp):Clw$鬋:HMXjwIcsŷtХ͢lmmsRR %Il(`G'c@VjksђU_6C߄Z*kP(.*i mɼ$~#x hU0`ՔF ÇVn^]fu66H%jhAc)jG+^SFf4- -rH m% >hA5(QgX 
dJc2[}Wa5rCzWː/g6a>^SRjA)??Sӝ (iן oMVazSDퟰ0K,IeC x=a*h^ HdGƏB\sg:łǓ5nul'ZAXJQ$@uqI8/#?f5/M ]W񥴯7yȔ'Ї>~>N.C"kigg/o$1lZI<.NAg%٭iybRmn~umm~ܺٓu'هkyrsvWsaP0ß28|z[l> W~;O Ď+YJ`|.v_ڝV_h3=>OA/>]}7Ϋ*qU7;e7W]fza]sGrWPT]r+*Ŗu+ϱ8*%_D,rXi4OE!-a@X >cZ8NsSUz(N! 6bu ފ)dt*K?ۜL<ȭN`Րe4Y)Ȳ/]OƳE:I,9<x|2$x sZD)Z0la% ,;yg ƳC(ur~}Rgr!g;rI,֤H0+㘂E\49N:6UNǦ/U=/Uұg2e#%#"e: u T F0Vkw!TFCǀ7‭iU"bK(F[ : 9A sSKP}B@dJ+a:oV/oS> i7H>&n>=>)L\@̀W*Aqc!WbeG[^XT)12<"蕲B{) A9#VsЪ Zo rq8]N*hѯ,s voA}]x~2gR!g "Z /B&,.FIB!hW8f6aRȐcD2Y+ߧ^jc1h#ؘtfM=4ָb6~>!v}bo|l7߶Amm}ҰWLƦ%\͈inJ8J%$P0A "4Rǎ#KYt`b\߹ar 098Fa)DTf ,R-u A >H!{\ =`<i-I DpC@@9 Q {k78{󬷃l;Zh+1H(*&L0 *!eQ2(&#PÓL3?N3X$r2b5tj dl`hI<1XBZ'Ipݸ4OWwp%ToRuhѡ!]B v0={ /0IFI)S37qa(1Awip 4),Pxe {6`j'񞵓%";I4n1_׬2ySҗ_hIyMz#('y!}bCb:|,ͥ?_ONߌ?|z1:/. 7XГ[+ӕedIz0--T8;37f@K==1{U6QjV5JƢy{x| t'#x.AW5qIŁSp7F!"^9:wOBO-٭{}Qv|=)4\X5;:Ht@F릣W囆Kz;% z]=n{(:{T{D;sc~ou:w?G3^D XLcNjNBWywueBq?u5My[o1(89>dKqM.Ly].…ovmQ~zPظfDzwlRsFN||X>v"82pt<}sˎΠOy*iʪN-7jU;yC++}"G4ci\rn4l2cbQDF H!p"{+,X*.ye2me O)VbĥuD18NR(i 3t-q>Z! 2(5F/K3ӢWԀƁԀN9͈ {o\FrRaxP4 cUG7abvn&,cdR .0ޯo}zֵ}?^~|:qTQNr9r O$,P[m-JyQWsD>eYU&lS-T+,4K&&]zjXI87W[u^P!SM􅯪_R >ޔo.#eTmЦ麯ݶcsyn64n9&U|Tw͒=<O/:U;KO?E>~썠n2MN^`6QL0~?w"{.d ~7`&X؀+s`Zg )Vp-C`k[3Y7fU݊1=Sٚ<} ".OqaDfG9plfphB-`Rb <DX >>(2 9NF휍Dt4lHFI 2tbp:< ;iYv&ΧW!gjGnr{+nyIo\pσ{s[ 859EwXp$,80}6s]pT>qǦjMP욃7LowkDo}x^2v8QHfxU '?bpePO}۱9i Y;@!)8xT(5hЀ ¬"* @> ;BJpdʃJRv@w= s8kN'#ٔÇGLւb?` T)H55IVT; ~M'ӘG"h@QAsglrD`p'Mg}zg0̂c% I1oi"_Pu,%24*r穏; ;3ϱڞxrQF=#Ã*0˙@ ʽU2rI"9@ Kq0y!)w*Czg՘ID>qb=6jj4BZ"2ld +GkFkb+xm ;oXaqZ>.>mQ3^RPr»A7DE~uOtvvB~l NJ/\r~R]@\Y8'8UU~rv:7>8,"D*L A4*RC r>HF25I]; ̩0|P8joz٣_+UHu̢.}!b\ t u#Ƹ_Ǒ|,,0Ui"lu^\?NH8 b,6]_IP;[,׳˳kZ:4S\D)(R#E\p%kD:RLIxk)jI(K)µ0h8z<ˬ+3A.4? wιCJoRK(SIL<\ !X!M 7_ _BmS8. 
CYU4c`S$)[XmH%1QS=N|KʃEy=qI 2('%q+=A'o(#o!HH9Xtl\?Sd{ mߖ`|p FgD ˕L&q9z.$IZN R*/$T$9Î c]]o\9r+B^3KXdw<ad+kOZnmؒ݇d9dbj Fă ʞ5v3Q8E?T4^cnEA\V ]) +ճDڵ u.GLybx,1l@Mv2z*¥_dC޸>{ZIzRN>UJdHzvJ8gzm ~7:yłxNen/'gwOqĻn"I,8::]ky_knb(ho6 CvLݔ&QRYoMzt {av8HS'.&NYr -UO \4iVjpIC+$&QU dX6ޭp8b J2Zۈ d9n8VVY_.:3s|mwݶ.R磻pz]>G.ׇ/9~k]b[!$Ã5u!n7ﱓV7Q8(ܷ޼9|(Ѫf|pk>r%혳 lbtp>'?Cs !ٟ0%6{TYijgǣA~]No*||aX4pu*`LrpǏ>]{>-xҍ^xϻoYka' sVF1Yُ9^r*Qmnb2u*<\'yF.ZhPc%ŕ05nibgLŬT(ly"u [QIMsR?ԮjE˨mCvDųnkUSOnzlD#T(VM5wd)m !rVԫyx-zTRvh_fh^f(׎m?ԡF] 3t5t5PWHWi0CtՀ®@Ͷ@WIWD.BW-mro_%]]0Y3t5ຝV~j=]FRd UCBW/ktzʳQKKdw\]+v@IHWl.:+BW-˶@t*%`a3t5Z?Jگ]F Dn~|c~1NE's9j1b^Fo .fh6qoR͖y]=]=uh]+/h3dwZmR힮^!]9o|>8fphSP+wȋ3t5ư+tm=] t 銭?[-GW ] 3t5@uOW~jiz(@%U5%~9|xt4Z &zLT^U\'ʒ|]Y0Jh7(?ץѸt4WSO{^8vsc"?o }1?g>=H!^ykJՕ\M[ W`x*jg}pњOHU0ȪOF9y"AyO薙yBxZE Mk?/5?g|`0Q *ΑPO8cKC%9 Y&esdO&7/6G!KH?cB|e}?MFDgVA['}~Y) LY-ЦBdR5'rTrj>~[ sm)O ɗZY[7a"E*E%MpmNqh9k\`$Zm-ullb(ׂYfԨR6"(kv~-6Pěr91$\,.Rjv䵋(WץPRҔ4˩cĜDh=XzBɌayhR|DP.rbZ䔌ת~|h`qƽ~ՙ&۪hcs{Іt% #D㒳9w ؍&?A}4 4fP%vMGb0΄Gx`l b}z݀F$O$v(v&hHź"⭂cFgdI|șc>Q-yϛUŪiw5ԺXiH%%ي6b0{\6'gl *πPc1,ɇ%Z}"vM80Ǒ֑~B_;O ҋIsh#8$q_X )EDL%VKFC&ks:Bh*ςzKBP`S5[ЇL7MQNZc37%cfEƧ: @YklwH! S]kCvvilI|S(-Va?N5В`Y3r3b1ؽ:ؠlzy9w|Cyj#hH %ʉsX}] l XBs⢿b"$l010=ƺ*^џ^ (! 8gVec#s6P̡  "DkGU((N*ZE 8J. r˂|*]*jWH JM,.@.0؄6e^6. 
vEk)@@ZVSJ+XmefH57J=7(c:o%a `NC7a?%h8L8p q_^ykUׂYB|k6=Dfx:p<eP˦td)P䬐t%C"ZUF2bx pujP4\U;1 ne0 v*"}<ͼ ȓ$r2&Xvo,B&Ȉ!A]ZM9|,)Ţ>&h~܅,Kc z.a4”A]4KH-QSeL%3cEX  WR()Z3X-DYT3JBX-{Z=a8, (YGN:!t% ؚbBi%&H?<(B;c תpaQB$YYf RNX@iH?vX;VHYtg&kdrgV޶KWkne1r׷lfX"` r߁y!x˵cᓛk~1wEyDyŰax)' [T1"@5H,cb|ST],k~x!mLv21(c@sJsƤ`fJm<xETAlԞKg&Mmb1K!#Z ]SNj{֓E_v *]n,MklZ΄^ k NZwvTa-$4EO0zs!z~)AH`g=>\2'ECQPF)Prm⊑duO4 arx1A8q\4#tM?ᓅv ѿԧK78_/9k=IxqӠkWF>1\DlDA:chK'8E.f| U4VA}U?4W󂋈M gKw;SK Zs]QW)c,%SK57%s"8j#Wj4麟}oC6 Uk ?4j7K\2"h@8 :cY;Ψhrʪd &s( D暠F 1ej/ytY7U4s6z:ڮ,%0bL3ewogե [qNj8T1bL3^g~ կ714ذWsJYGJm(V"MB gI<`]i{^z\gGZ ǗpꐦI`+4 \Us \Uk;\U+$zpUK 7 >+V޳j%7W/j<U5 'mUV]U+ zpq=$vUW7% `WڧN;K+eyj ZpU aW\+WhepU4w UfuONj9|QO>>9x|[Ufջlkֵ|(sY8 0<&qFC- dٜ^ݩx;5ފ|U/|ouO\gϹ_A!nՁ6 ?/=)|^,e !SElTCr> 4Ei1RP MCɨ?ܰ >9ndn[ؚdMB\<l81/=Zjt\ӦC`·"޻Y]ۀ)Ot;N^=Юc9?߽vg됍[}/?.Xv|)Ol<].~޷%n vg?<Nj<)v767Y WjG:׶̫om剬xձ]pmYkM/&CʵIjRȣv'QDtj g*9?rs^LH(mU8u:Zl5ԺSuF:yPzX(4IkEP* Gz  l~ONxT-j2r)eh;8JS"%E`G}0ҵ=z&Zi/}}9.\F*Z]Q!fS6]䊈9$@(}f guk/ Oɰh7:;evN;"ts wк-qձ[ns R7nO5:UnԞjjJ4_݇ŧnZ}5Up'^qm_q_qykdWFqxk|s[xe+y||&i;jNy4WWk{zjJQ)>su6?wYQYe# hUW;o<L()cH.CdߓƐ'<Ȗ]8̧̟*b-FRcٌ/inuMhlQ&++t*[ \q5vxse 9s!X`Á E}t:e:$xI%8̜hڣ10X-6 {M_v`}=AKh bT9R*y  ha.IuATGgp ̨1SI:eL` E*odmTѤPtfc;HMMOΐJ\3˂$N&;3wBv($8m=2e9=fZ 0hUROrH":*( #TRD0x!6f`8s= x[^D3$FHapĖa6vue5ii7aTg}xؙ9at8 }APD#"'D$D㓳G\qa speHQȣ66k`SDt&w8ڧ6\&T )D Yg͍6FIC*;Fy@rMͺsi3/9EǸ( x  p_r \;ŨRcR\$ԎK1%Sb_38;V,< 6,vke1A,[JhsGُϔ(/c}5:*1fT5o@d b,2NTe}+ =* TeOՉZA%/D0& !YF rB9b2^ƃɔh5d*2WXXkCGcҾ,WYBˆ sH̒Ǯ'C;3N}N8K[otK~BG";IdڡrwU;Ug {`ƌ`9JIQg m@N8dznw=YLTx R1@I;ŹSN&̓`W$gP~ ѳѷBgGPIVB$mܔ*P!bk2'09q$B2b0^'Y1í!KE*JrᐢdEQ#id]SUTuuA.f\sG-WD·x8IPPcH.Z'9D#HQj\[<8>(8GA/ўxXn3!JlY:(ܸ9Ub)DCY$&l"4q*Qق-(u!Q F`Tp@C(@i$EAS K!N!c1P`ZP"ɘ|@+mWm#tzZIV(T!ska|Q$NeյU L+<*"ijBkr:Op(G) (C4cd_be*Z!Q|W$U[dm%Mƣ/b93\gga Wתp*GBE C+M$s 2S̙C$;L/H ^>-J|[DGLNTu\0sx)Lx!kr]kȫ5]8|0kggZ0GqtV9lt%\ sap}I7y ڏ[M R(}Cb4 '-pZ$V)sygq@ ! 
;8'ZqNy8pe g~|uPQFp* wuJ6\NXpF9&L>j6u }SSJT-zz݀F)J"5<;!15N愲f'9}F1?n̩n=X4Ϯ5rQxzb~Rv}I!1ghƣr{kΑd:"+[daZ0"# yaX0J5,Ph#L~̫wd|\zhmr7Qlq ]ZK(WzT/ Axo:!@|>&e#PH8.#680SXoeAXJFKƬ6ZBuPHOlX67kvzBKD_$8sP%ph \g1ătZ9Tkk<񕄽suTt ۑ+G,߹OI ;QK'U3_xYA*/l_!ES O]4X>Y`EWUA9rqp>@Prwy̆x+ia؈nDXN$ 4; y/rMxb-ȹB]*b㔶N(3wgR>yqDgy3Ae#@J) 'jXrN[;vX*7БM(|Br=yrؼ{MNQx/OiII\4ʮf41 Zjh XEHc6R+9)P 6PV%Y2@rdms7riUVhJd돷i8B> iJżOۤϟ_N.'wVu7L\o>^Նk2Q^̣~AfA\fb}/~~@~?y b gՃf=:(ݧ3x>GyzFz r\T)‘<ƄڅoBfw5~UA*@RlMo涋#|ߋE^-wU||*Pky/>:e,:?R_|Q˧EZ}b6)}S^4:X\$T{l{;aUeax0@n\M/f+RgQycWoԦuzz9)֦VHRDZlDW. *ȱ؏dcێض6PV*FŤD4w)?!O̗}-SBTlίA>F QII!cYi {e5.%RhpRh7L0cNl`Z$ qj|xԖ <9DКw.۲O5rv)y H?_UR {ֶ_x Jv 秣|cXgQ/{_ls>=]; JG4*p|j Ҁ:D@ۭ B5W*Ĉ$H## ,E{Ks&Q89S1;S) ΒCڎDi|ج; yi\nTͽU[l&K{3E2e|&E4E(r^&:@(Pk#$4Y#t)8NExQ<$Qk<ŧ%x&PZK΃`jIL'ZkV4O.3*h>xR/XLҘ\VNxRr(5&IsҢĔ量=dں}%7E? EԽ@QۖM 6"[Km(,ueR78gs'pi(N l \ c#,4 uWW\D*JSP E{Jk#hB"L;N@e "WW!a'00ήIny_>E.% !JF|>~4Kzq/b.q;NJK{Io"r20i:9Tjdy^5b1qryt r%h ۑ(z.r>:*B//XGխΛv6t%ٳwKJe[c]Ac]ư׫ҔTk 7tYuTC*uG9xyS\n.#< 졁z;{4;oRfp"J;j@ ]o{f*p,ѭgԁ е)PTvkM;fdkI QLӰ@u{z}Gg&&u 61eE6Uj@KӸ޼ӮbpD)=!zB[nit<ܾ,QwG ˩$`Mt7IOS91z?o1]v+GЯ/cҿ)S.7A@K.x0Rxb՟,B5($j *J90SYB[=_j#Kh^[ąU]Α`He( _D%Gm׳1>^-7l:)N~,UMBB/ $[[R=Jl;ifQ22^d2.𝶮[sD?oW,[eΘscFlXs?v68Z=t-Sdg2PV:.yR𠢷o$/;AoLbF.o L&M[AKϰb&tQlpFZWpOqeIi(\M񶎦5{QeFu@ Z"prjBG )i%3:I6:hs=QrE2iI.'F(ɢ"-;[#nt7!;ar̷RgJFh-M#rz1 `k,3w!n)7f땟>J̲@u1SpkDȜM.I"[LHRUeܾz+5 H/)ݑ$s)c}62L {8ʜ H, ?4CRlk(6c$*A2$(Հۜa:F61,gjvhCtIFϘ-*1jz뒌93RCК)ב sbXƱX+ NӽuHfr|@C־ʶVnqgr*]uwƗ2iq97߂3/ Qx9si?o+o@r>xf퇛+B%DaV'&A'OLg6Q:Ls-XiOHV"UQ'2RS'jj1t~Ra͡׍Ϟd˧ 3W2J^HGc5PPh>yC0ƀ6T<B,2p hԳ &ORM+F{#ago 3\`^LYL/,ۑzYL%}Yo,1fz9_!d<"ݙw`v=~aIqZ$힅FV)(H%RöXLFfF|qT @q") B@]T{@L oFaI^` "v䴳FFC4R9XV[\.it*Ř[:]+r6Kv+d2OM䗟|\OQ} eaP .^& ` pX$|!1W==ZcKS9H<^&G@nWqW7,\G"$2nfʮԗ,YsţmXgR}$ӥHsD:aH E ݢZ&e`fIo.^_bu%[ ,_X`|&u P9 [lLJ[\n5FD9Kwq9+=4gϻ_; $Rp]ؐXmS @æ[lFy}_fCQ*ѻ,E:M1NpD yh˓҂ayG"&E_vSpB&ѓ s*Z}H%gP9Z1p0\mE&ej;jWRß pE{%\!U&WSL4Wہ+fzBpŌ N2 \ej_܊JJXWoxW+$Xq2pe'WC*S)7W0 tL>gD;\e*BWWH* ̘< \ej[W>v6*rQvVVե;eKr1udӱfæԿ.`PbiD"7EM]'^ 
uTS/tsAS,srw\?e:pojբL"9Dmvp6X) 8[a#&R8灈h|..o8>d-&PD{GbyR|z<,Bަ:w=|| \3TڝwgsY@>9?JVz{<2cR_%!-h/QQsGpe]JzEDJGR<$؇؍=?>Z)w]?WywM6W>UO"?)çg1Ii$P*`sxR|&|]rJ$wg=ξm|=?FΪ?|S:DLJIf21J".%TBjFŧ\^AS|\V3Ǜv7eǺ(E''W԰꽹0.=_HK=Zܧ4d ]a"jzNA5 IJJtDK|TcG`$JSO\dʵ[!~g\ -yѢd{z;]0)Mx2' MftB7dzaj=(4˹6Q[m^\XzH\ו=ƣyn>\vS>eoVi(A ڑDһJ>uyꯓPg_O xݜj^6^aufuZߠm~j#o6aNidn˷/eXqB**M0^(Ѭ޴OmW ءiŧYŊqEjknGMݗhZdxglu$:V%zʪTjV4V޴b;\hy:?z+:V(FU$i WC# Y'x1,n0EnqP-[klzl&jY9s~jo.oecr",)jsf{ˆ ;oA0ɪtKb!yљN䘢ح'<|ps"#E153C$QU0M9N vBr&~$M39Cu3HZ*6G(rbyۈJL6EdOnᥥh^,g:L0:CI3׌ Ge4'pOItH9VݼWr'חQCYZox?,*{w,~FD(JI\IwQ pD?I0"jeAӨ͞Yĵ3޸q%-#R^ݠL ڭ]xXus_]@\H@}ntDc &I}4]rl.v:5؉?6*0"8,Yid[e0FiOt2>:v2|Av7ľN$㗫ї)%IMtT1^ %*kfeH;]U*TVWVQV+7pTt%m燐[z|W;]n/eB&ŧ/Ӂ(s0_0}FF )8j .wcvUG( 8'٧e7N5[b{qUO>lvoNX}~z]q09?:BN!=<)&ep[㧑9Twj*ctf 2."*8RH0cdJerA@Lb>P*S2(vJ)*Auq턼PTܺol뤀(Q@߁1_8@3L0ͱCF(?wT08w;n<u鵽ϐ`sJE8 ^qh # GS4Dxb 6a&)/R4 :r"JJ)?vƯ3r6C^&;1VqBб/P,'쯬U<Ԛ>6}~~K/*J}BY).jޚJ NEɹr(F:<$똸2p*|"_ۙ $!QzweJ J0ë"4HSym_/5مNNg7?ִ{;JQtɔ*ˮM%t|5jܔ}Ui^j1Q}C7]aS-\#y0i;_ H5~5/h-wcZ2lsKJo|S3bs3-fnNhD#Rd4 =nsp>m뜗[l{Nnjuc_ 5luZqFM>ߏ'Pb5U6mDGS)gbK?qSw/7?滗p_׋/N?P8(x\?B;MOѴijoѴtjߥ]e7{wpy}4Gn|UtQv0jqx4S4A\e*-Rxm`ZFoKo8Oīr1]#As hn__nՏVGH|:_$\rӤ`@#7pH9;V בذ0dsn:ʋhJd@1sWa(̱N^;Cnv>y[.:k`UqBUN9ܡ7G!u >&N3v dd v&9}Ӫ,=%HI|r$ᬜ 6(mOtÇY h^ `d7ALW :iOݵGԂwxgkTULvڇZF\HGu_w@Ȓ1wq'7Qy )K@ɴKce`KR񜭎ZNq'SP0K7z%g9!:4Ȝ2Z$…wv&s q1@#zY9ڜ6e\Z)AKQdys2tz+pV ޹%Φี:&i3B\mc nѡV||S'c(20o!G%,lA̘t+˂UYHIY,q‚}ԴJKaHY6r04]u&U=8mn=l)K'Nwi5rsnHSN[zW')*s1Eԧ"<PY!:/Ff9*;Go%udޛF\q=MNZ]?JJåg. 
yM򎀨L@)h`P Gۭ5s|iɣM~yu~&]quFtX$%>Kd FMCDw_O!FށJ)[2-ك|<%ށzJw,&\$s(ْ+gA{P6m*:Fv)5X$TZKESVK4}VɕE.@NqT^#mHOB|bʷǒAHst' cO1pͧ 2EiPlḆ“,YM>N)列`NS|Jؠc:ױ-Y`NF͵rH +4&T 잁G\,y3 l"N`PͶeJEF`%VXZiϡTh;@5"9)QFJP:aqM1ҁɰy3z'`p~vd*gpJT3Q'%#HN+X\/I(KRċơ~+6#NPvi_7BS?_U` G{b< ."ANJg;SG{>N(FQM&hNXEְ'o"F'3W}cď>X## Eh FIFRjڻ:8i 528DQש&>>CR1"RW~ud)7@#Zw<\_jTB_:- S|F)eBFTA%#/F) TFW"#CQnxѩ*5^)T ,67/?;[i/p"L标Ozyңϓ}ԣD9 7J'RֆyjA©󵊿ejYT8~+O{EGLa' )j=U[O:䛥9zf4 5zn+5^fbƋ#I&{ ft4hm?^_{`޾` G˟u'+*6~qk:QK*A/s+61Pfj1~aoG|rɩOTE My;yT@yZ#8c]izQ0\RNٞ~܉ZN ?O(?Y(F" | 4WG##̈ [-k@\\s)>Y!%qkv^:`؃Tj`S-uCtun3oNBNF]*8A+4#R V5j&N s`=J73ǯU[Ä)]֫1"(Tf Qv!l0T$|iM7%oAy_e1&ۭ/ȲiPڑ&yDL- BGR6 e LeVJ7aUA}z㔐vͫk%9oNV}snN'Ŏ6O;q?]_).A76^+QToD6Tk ܴ:m.UFPAVP(Au{ Y4[ͷ'ݒ&VA~tQ6t+htCp-Fj-^ 1Ⱥ IwъGFJ7 !_8$(Η`{fɧuT69<,wO$TH@Ƚw8BQMߙ2uBڟyFS6FO1p]= 0)h]g7$WZzp+󝕃ϜX*A&lf8MxA*T`*t4VzT=7Nx혓hoTe9g>hGRq㩵^RO. 6߉C;++G_'%;jW7VNd5[9f*9Yde`4EbҳsNP{zvo7ҳOs]C/&?, o,Fzj:`>Pӱ@hH=]s7 笇+6B-Bd1TAq$͒ ;QqbS V>18QAtJhD7$yq2N#%4rO;x^E" %77qLm$jn2 E,853F( 'E`u!gмh:D ='L-T ^9i#H`| *T1 J-D1-| .Y^Y<=z5'W"lp t$0CTFADY1 PTfHqZ\Q˒w%$bP. 8 w 8:PfhO5/ޗn1jFVA~tOT'n͕nCp΢E<%5}馴Xt+ uJEiEpMn͕nCp΢;<3yjrlaVP7e͋SP 9>{ t2T^yCl'Ԇ76RYi]wU87..]!VK¶g>[6:E&GeQp~s{|g9/MWA]no%ރnoPs>Q'ɲЃ*+}T9O׃4Ά=L,m:FA0cx\krkӤy# sH0rFŁEǴQ?@1!' vu NzS@&<۝~zr*^hɦɭ24ޱQ{0+r醃+ !888FFAyJ1O&ƜO?@Gch|xn>ہJup)T>ww{i2jrk?e9HBi>4.l#R{[4q'~xiug4z@;-;z7%DQr~Dx $QzR,"ICn<'nIk.#ETѼ*b0kȳCa&_CbnA.z 02E?׭2El[e!0'(ÜhD ˰ :s_998Fa  FPeB l(n~oRq<@ QV^r r2$nh3rnfvPɌk'-u.'yU3wo/ns|g 9c',\߿}\7w0גXJF)_!xdxR{]r'Cʝ.2>ѺeQlRӪ1jtZ톻ۇ=?Eo|TYDl/x|c0!zd F/bg/}7\. J#(20ڶHs{ aVPDL{3jik=G7/XTCMf_)oiOR3]ء."xbͼJm7$0=Rx#hH1 05U2aFj eJ0xM&7#Z 5Tj$rmCZsT)8 HhNhjɬC:,QEd?]=APtaZ*FS{]K/j$?{ȍ=M˼_ !nNq&`l[l2$?d,nVطxzķVU_UU,VX;cbgG]j(Kp\<} >L˲1aø.l`qzI;׈~x?A#WYI.?̎_%zA @Zkz J ;U xMWk!@. dk$qb&ik J@X)Ycl@ "q۹$HlB vv Z$dUlz6%*ӝ6Eϭ8?w|Lb䦼O^-|8X+D9 v>g3ec7.gAcqEUUSV0s8wGWιw{} SLCc<fkXȸG\0%@J:]G2>' -S = Ejx2t@+@v;r6 #iƤFL ڽ&1!i՚`H ☨ƥ~| i|!%hnk/󨌣fn5ދS&s[TOӝ :|Pe]?=0V&75)7WH( zo~?Z7Ǚ{ {1 ?|\ dLF򟙨u='xln)D! 
ӃǛ!GCF``U/5S @*$(HZP}=a@2oo+@ٝ V0J;rA%ywcW!_'Y0ؽ{"Is9Z۽X soR s__JFxbkGBeD19\նŞ%K;/wy'ŚC VBh$CT`HL0Yΐa i U, '(:xqvW:|5S=^6z q"Hܚ'3FQsa-n|lLױI%&+@ &qr ۇNMUUpLWݫA۪~x~ޙf;pÌJjDC'ƑS~hUcLH9g;ޥKo^^_]qm݋(v-Ybcu7Xr%ŒaFtRdK- 7*8+ n yt+h<ߚL sSd"29QӹFǿRP7]+fp]f-C6Ҹ8xOa;{60*sw8<ѪoOǃɪn6ƼYՒ5g .T_=hG) _7|36?=#ѪBO"wQ<>pZD/<1[Ob\(OyDSr.-]ze26=oa՟> |k7[Wiys  fBs̑(9okcΕ㕴g,鞦%ۂS˱@XQw#NNεT f_A,B"2SS3gJ5 ĺ(8 sMcd۰+RH*u 兵z5x !K;8ThB7[ >pXb $Ys &!lR-1]NEN@˕VK 02m~TBDIABF - 0$].\ws );0!Ib|hWĆ[@)Dq 84B2 d&wej0~bu= r:HJ $QɁ$~$X_eK0_D)z,\`3Kh],[ ^QyHci-?>^'4E}jeh6uj_ 竤}yk#3}1J=ѡ:MZ|徚H;4&Ȑ]#`ނ]LOإPƀ0MFMk`.@Q඗wI^q;C5oRNapM|(?Mi *\["|P 0. 1OݡFݦzsV81fQXȺ/ N{l}e`O]1X <98'!-Ɓ.4VϷ Kdg?ђ_BkCqhI@uW)OuA=si^/OGzsTX퇺v6m8UиtpƁ=>*hBTjBA^ cBpH>*&8oѲ%/!{\gdj|z@u?i5/6d>H7>K (_:لG:_ᢋHzc?0wc;Wl| {gBM#-%Uu([^9nmy:u|Z'uK "j_ϨGTT^ы2`4|VvHZ>9NnqX˶+jeke5jerr}FV{|@T91V& Z]o'$ E&:Ē * I0puK'.¢Zʁ/gȑ^qb@yÉ*P` ,JG}8;Q۷ h|}ng\q:RMovXU0 vٻG5囒vf[ b1zLD!$!MӟH lI8ImHт@kK8 xUR//*Ai:&?ʣ)9&i&l`r95TQ7fqyݹḻG8Mb4IWTj:xz:B%IԬ0/օ i, ,d򒒆',99 n;yS$ BH`4cYt; QJp$grr|=l{>.A!RB&xTwB"%P%jI@TO޿*8۟$֔L@%9!h||ӉW¢!%摎rԋXZFSE4;YX$j'Ҟ&99Ī9+_RP P[S/jڠf4~2_g> zb׊EU,2hXr/f /o$r.emEUQ,!&i7=匮~M^zj}12^>q.AD '$?7l6_te&o !"ܷ߯?3]n9@{}˙!Zttw-!ytӛ,0{"Y#I <Ŭ xbJ6Ŭ-šjx;`ϓ T~# qpemkM9c5ȓToR0{5m2nr@-<TݫZGhA[#"*8~Gm p{_61{ 4UQَ%u1cqϬ >>c!qq)K'E[`fjpMC^- ƅ|oB`k 'wFoI'0J1% bK*3vmQv^B; +,w϶]EWtj|t= IZ˾i*$uG@:'3p>t B[RuGjzcr7ME#*ԕn0ֺ>lbyí;gp7sJÌ*;dNiF֝+QM&3jڳHmqyh2MY/ze]9f>5q8no(|Qk?]§HӭgYw[#vi`t*Dv=**>Տ+ K QΏ$D?ϫWdCJp19e=;nq~yzKކ]z;{7l/bfxcEN;{6~x"2Fe'I($IM%-6s~)C/>7l[f-X%ͥ۷[~PSBQ”]Y@t ȕ|ЃLT1f$Pa ܼ2o9X} iU6KĬҩ{G0);dFQ)>=[8^<^ów[/ ^}rn'jr紥{?.HJ;Cv4I۩ڴ]|f/Edvw{!FMBF&ƅ▴M_v2Mİj33Z\}Y4ErK%Ɖd7@5P$%Hfq!3<- .c,'/85!\(^xƃYN )4BPHRx].˭@(7kuMc=*0z0`6 ˫0llJ,e2E&È\S JF+*3øW72]~1a5Uh~Ϊ:mXfLa%/[Gs:PiGq+Уl{7\Baח땪ͱѹ3m=6ύp՗ck&0tv1TNg3ի&2^W  D\c—y1UI#%]#gGz߿B}_x.`gqE>_U{|7'5G,{ ^c{a_޺}[/{ˮv:H$bU*I&bi"X(J͘qDP .AB,ڈj-"Hf84sN^_Dݕt? 
}ыm#{̍ ,wFv)/= hXZ #`PeI"$#-d H  i^OXzL>ܹ.TfPBk`S@ǵRNg|5z}SwE}|$H\J˵U'|l'#i("ff!"G~L9NM;s]T˫k@Yqt%1 uPL-%,47b`$'f6K㘤wo)/d]j.U"%SLȾs\lT)${z1l'c̢ Q'uԩ{"/!Ssh)"G=9l{a{aeRnͯ.Dӄ&eYI( %IMU #+H8*kJq =dU["Qs4GI~jC] cdsޟnE4GA?Fޠj0oV,)䩤 DzQ )%Aq89Y b%X% 9e!ri5]s#27~֝O/q!E SHobVtY8}-xLyx 4QzkW^"%it "b/G-zyJu6_Fe~sA+BE/_)?3]n9@{C}sa^HOGw7"s \"Mg>xippXR䂧tI cA $Xd_L"1h/gvu.gڏ Yl`H$܂bU3P˱VLeT$ &˙ٟG'PK3,|%q_.R'H3ђj7Xq@X5>nKei=tFzu}"`잕3 vN]io .9\)t:0[p)+ wV4Aygb4OD}ADsO J:WEͨ}i ۧW9`eXlp\J{F_NQo/bJ0lfd&|1pծ3-16J(e1b2n'>D֟>BO=`8}Dvxe[_t4+Oq 56IXlxL))VRb:~9̄2 i:)g:w_ u2-83q`]ۗS&g}0'Y:˗ͪ#6c aA1|1opx|Dsek-.SbnYKp_-Blf2E͒qY?zK/ `z"m͊* Vme]Bhe}˜hЅ:mJewfff`,jqYU&A# u,ԈVM+-|P$6A1~AK(h(("^ɝ*yRT4ddf22Жふ.r7Y$*fY q O &8IQd 1 , 3rW*htU h wxq"ညbj ˏpf׮6hC ⾸'4&o7yֵ]KJȟ~Pw>+2WԢC}5E{3K2Z%vzwN?v_ߞ ۼ~< !B1zI.^DP5yx-Ftdޥ ݐCË4$W\b->  ':&j:rH&q=OIQ,ႁ`PdIcpRFp^qUtyaEü+Ԑ+DJ2@0!JYLT7W u;,oW@g\`4?PRxE.@Ja,.@JxU[.@JSp*;T,S)>.6&ëmTWOY,b~Bmwͭ?w\̾T6t\3v5ݼ .D.`ϓw喿\6{ñDσJ5& Y`~S A߱z^Yv쮟gx1SN&E[hfjd3x=XM fakɵ_j A+{0z;^~M{]T5 v2 4)To۝exl.1;",)%,lYBP59^ųzTE`:;<8WM :}^[|'b0ϰ9K\:g@jD cyDq.)ԢԨLBPL&5$A<3Ji$snp: ŝ%֭y:5eh?qdӐ3 Y3{Z=$ =~SӶv | ?z ҳֶGٴ茠+C>JVxzkTIDJ˫Hz˫ji)iE{ B˚Xg"ԋuBA(&rZ( 7l`҂mz8No@8 ܊&k3#+AcJv Š }Gv5)p .yg-XвڭѠ1RtcO zB1(3tBQEo 8 u݂-ݚ!_ SMIҋi wTZm8mcɿ"{WSr"͡ipm˥Z_%%N~de"v^:ϙٙХ0i[Dk[pȟE[,JMjZҠttdnmu!rmS''4#In-iPkalF#orR[[DknpȟEwx):Do2 SO;c z7 X:SvR8&wQ\SI}pi.2/rV|sFNl6Fc O%[֬l|mC&7yea$d6?&,nSBs3>W\CFwd]~i{.=b1T*L#R%3&g>6J;Q%{YMl*p\k3ed%8[.WL]|<b"7Eu>X8+7V<]\]bAL*G/֯-%2Ed 6H ]-kcR[p0ޡ8NQ|0Z1FmYMVbD+UAQQ"hN^(GH{rS:iWT>ixrat^ `c혦(t/o#WM G_xv"th!?q hӇW'= ?=3pmQMljg~,w<%(c]PϿ}`+aG:fsvQf-e獷pwMV?^y5.`nϖ=fȔfT.F[<1sUbo`RtװǼr;otNs\ܳG6l|m>F0C{o܃N#ݹ7΅ l]PgWŒ$p?D$k@7i)4SۮXW;&*XRsAJxR͐62b:` @&X) jlZw?9DĢ2פX1Y)*BJLbB' xmer%'ל<+u&L*yz#Ok5JRWbC隴CGٴy6mMg)6fb1XﴯZ4[>MFcܰQG@R*iт!q >Ĵ'X~9!7c8p`b,&"(!T@$6 (j[*쫏f2^Y8Ny zH$q}n}.\>5'C% 7cw+:5|} Kv8= r5 0D+СyrD_%"âGD_If29#Q(ZBc2`PDDuwZ4d [69(h/  L!mysƛ!DR_m՝L%Ԡ$Vp W)30" 1 9ۄDZ&E13*] ˹YE͎uPiM6+'d| sw'z:U#tTguyd9Rj=Tmt=Q\٪ƪk0!E%YhDbXA%Z.hEV\qn:.AaGI& &Ln$8 )J uJĕ;XI׸bݭcT< mjSKԜvZ;> 
:.r˷j5$ϋeeHlOw;S6t'-<9_ *w1"/: '" +7ܴD(a&ʘ'&X(š't V4W(,xO7"+{l8B G5%U J$b@ SfbI˜a$_7 bFԘa̸&HH(h7=T ېO"!?)W1T(l+&t C%clSFXZtHtM$eV ey!Ad+I$(꾁EH$2Y` Jf8+阇B}iİ>c  52@0 $`=JO{' Bi)rMCޠ5GR*Yߤ6X.%۽rM3q3hr\,_.<,(ɂ.^έ+M&t34\Ept <4Lr1 S_\$,1Ct9p)/Gr]~WE8:^8$gT t5%9< ffEx `Tjj&  %V0?DKekH)g2Rt 2%Xg!]-֠Aa +v]"W*5U`FbUrƱ&_b݀4 fDi'p`^P)3+g}\j9ć2W=kn pls.튵UtG4,~Ɩl|1+dRr5O`,+9vAlp5iAW" 9j)YRE7.ϖdr~eg`b5*-myD`ɔyZ[awaiR?F?N\1l>5hp )u!ӎl+ 8BtF..ӱɂ$k/LqNWF`R@SAc0uz!($`ku{÷>2S=(s ?@FP= ?W\d2n$0¨}j ;gJlqXؘ1;9rcQ|™O&0AiO(\Ϧ6hxwqC;-v4xk=b_o-d0?xKB ̅oC<[+1iY3NLPA{ ňt-5Ç"}(ߦ³BQThI  γO •,7Ki"ZYCnO11(;R-'@lW_ٛ:O/ 9kױZwAj5CǶ^ Xu`@ڻ2%A)ٴkKVAd ߝ}lmHovDCd}ݽP"ǩgd,S2WĖ6S r 3oE INje0G-Y([Qh\-5BZM(t|EDj:spb`?+&D_& V CD%ʧMׄU-[=[^0bĻ.f^lM]dYNp2<r'HՁBJOzdm s?RW;k?V\l!#h ނ`|j2N1[άLw qK.?_LJDGTTHiHw @fXb_H۝\el=u?[&%b6`mA5Q֡#Q0RTXL[դ3n DBjTKr)ŧHi勺oU8L(-pДsUnu|Udu1?Wx}ά֗࠱'W[ %|QX KB)^V)4]]8=[Ab&iO5i*DeQ,{Jܺ :~ VOcvUr &]Y N7 o9H^̼ɪ l8ғm',ѥHywxˊ*PuΆI+>Vj8bBf>*IOmpȁRcOnW>JjY}Iԝ#\K uޒ x3Zi.R_߯fԾYV@x ./3x $tkU t2 nv~2*B}9lrRK"Sqїܝ~ѽO`gc\hd'~:7Nr wjmBf&Y첶B0o^Qw,DwQ[^ )ofqutZ}%⺷W$hCn;mjiUe\6q>9O5ȴy$e=E$HAYVj@Vp1Z`AW·Q٭b'hqO_%tўb̡:psECFUR';;  QE?m.Ac,i-KtZc;Oۏq.Gv/0}E/`]Ѵ<(`BoUnPISK%1> |{~f 5Ed~WW5]L4_n%TC9Ttb-XFJ ' `&t?(͒TcJJ9 -[4Yxx {=I][G4KEL圁Ы<h]q[I 5 f"e`_;Hڟ Vv8ɹW\d!đ0~}s9k pv5{O̦jĸJ֨ !]%x:B]R xMriHfžaI0ô`b5.>*լ!)'K"11,Pl [*(6ޖ^gg౽`8R'B<6!met1V+ND*/%xp ->~M}I7k` w?xd  ccn|{R.f()PG:!ߨ:+{(qh4H)!6dxc4nedL4phE(9&rOKg0([70])~uɝ-F Ez55֘S쎽# g֟*N<$}R 9||<ӮZ!V.k/9c3w\1xJPQh-k>& Efg3ݑQ,?Dk҅]zQƺ@Ł(\#4Y/:kV47|7⫙]ŋv+tu8YPƏ.9| 8ߋO<2e . 
?",,Lo 7O[ū/1@*DM`dU Nsm+&" X\Mb8 hv:Hf}ʜ ܚ 'X烷(ff0wķy4V+VfWtƹ3) A4Oƀv3r5 Ғ> ~2rCF ֥!8%EA@.q5Zo._TzA#RBX[exc$ bW`Q!Ǡ@߼]N&ފdl,w/ G_o͸ KJV؅2:~dޥQ~di~7[xo,7=sx VC%{ZE>F}uw9yͽSBe~ltՏ8cyzĝ zDozShp1~'sr[kL'5#>҇1:u/:E.ӫ:ߩ  _=pOϪw'e[_m o/=Y)@|٘n&"DƍiqUK%rrWAۅuJx%.yazX碤s xտz8i LC_}^6i퓇Pd ewu˫o7( %sϚ̒bfGΏo±o5֕}MxѼV|S,̙ۑ$h6_Myn03;y]5b k6SuŧC7TehqX+ Q#fZn:QLo+Vr]Άq kPE(XP`jdfPacdT 'j2Dߥxh $et< r$z?Ld'+1q108ALRE'ILbU¥uIce`HÆ HGS'8"6R`a#F)B%SKU$02H;4{Pn= jkhm&VK.0Vm|GGJOX*Q8U#m`q`DZz M;񍇘jU>YX핉ooœJONOcRj3,Qb*wyI0W=[7&9 o1{Xeϛt̃ȖSUc7R!!XLRHqN>&GHw3ј`c3 ة@bx8|]{Q0ʽaW]nW{|cT]R݀MeO+JdLr&Hg$faFt,Z魷?b2;WtJ9grMJ7+;--[mh+w.+Qgw㈶䵐;܎DŽ!%#SΥ\lfH0?#yx4̵yK]c]j@ɧ5֥qr!O4{8r:!͝1\ JQb9bqb?0OkH4ZJADOPX򻜳X&"Q40Q0H%2c&9>t`}0H%b "V -uʼ2scLX, )Įa+0բE)9ʻtm*XZ/XduHȞhLU#Rj7]iEӡv+ GtJFu})nMꐐ=""v#E!Yi1ڹ.[1;5HȞhL1cnc)"=ls rsaί+UyT&9o~vrw6qy;*=^9&5~yNۃц1n yGK!g$K7*^Jl'a {;so/\VߥЯqG )s'ya>,>uSRkQ+h$x }Z&;lkwƚ5$@͏AyڦAp-^Nȫ<^Xz<[r!nOnd݆|ScÛa1*Eha2~hRh]vJ]Que6xS ւdwOf/ӭsu4X8{=+!^p4\ΧG5Fu%(=8שg ޱ2 =&l@`L3\HŤWVR,6d9} v%+)IEY*qAGG&^qt\"P)xYG+Eĥs(c8@JQRGqꐐ="dEa\ieH!Y' تH;bR9j\w,U'z!_ӥ9jGHESWfnzꌳq&ds>rI*MHt]U!~m< 2ϟ>N(gas391"7$s{}9\~.$0}S -K%W}U jZ=, Hk&>C缡)y׽?D| ]7yE:-P#:*G6Zq6F&> g(x@ZK zpU"NK8LsFaI9ɩhmQF)ZUoZՔdn\wLRc ?u֐ lQ4`8(^FK=P*@3j`$!".hr|KJFa[$zadv) Bc@쾎&}\~ٴPzA`3~L|9z{t7m'Y%<!23j ?f8'B_ׇ zO7fzCo.p}~ϧ [b #+~C>@+Cy];oa\NĜ.N e ϥ@۷oK'&{2Y]Iq%0A8M Rp[G)u8 _ MSED#w|+Wn;?f:=9>?%;"a%U5$σ@]E)-2 'ӫLMʘR֣ŕ; kXj])y^s ,i5,sԺJ%AXl֞HVȔ։XÖ[]DLEyZἔ=zEE|#Uz<:- .Z&a KzcYp%ށVe9J/'zժƺjUc&VӸ,X=jpC2 V r |rr+wL#Ett ZGMc15HM?1V1σ$]]2a3B`jνrs* L0x9++f{38z4pty< nQ#Cm&A¬.0|?κ.k.l 2Z?ẉCM1aof]8LvKp׉+?IxL.N'^KF|MYg%,1%H!cGHPkĺ$7BMүÜY[T*g^9gij~- (F2 ?֚w/f")޼؀\0"g ?uG ` ~\e-l=5cŬ5b yc.-E߿l0ŵT3bۑ ƍ6U@(V;/ƸM.uEry\ꏖSe1j~,b:Ŝuf MۺEmXBEݣ$~+>SG:zXdAފ'-(/9ż7-or\82\F%aVSZOa% [G&Fz,<%r[T߲tYPJ*ɤ@z1q&R3]~ftn܉K\4Mzv5CIknw YNsbr!2fj$yF:K}VGV)7L\pJ r6̲=Yحvy{RmS_EJ$1\rd"Ƞ  PpKn@*i3i"l} KFI*?KVcNj4Lx%8OJb h  4E-q &5F)jH64˒c1R'j,F `18~Y 5jwH$lujq*mܚBܕ-h??xz#ar#&9)$BQFƸ 7A%XR l6ficI3zHX&}t")Uӵ/l;h͸+ql2bR ^"/j tۋ/z hJ]jR$96U>, 8jb3bATt'zL1]è4~ 0%v 
ᶒ`Eݕ|(?eFDf $iшG(5*-,'G}{Uese}+3BB -]Jhw"0GDJ&1DfhcN{Y{L,"Sp%͎ p왋 Ԍё2ˬGZ4mB3aBE3T)ABصzQz> )ˉyHEz,69**DAc`bmC0ڇ 0ܨy( b(d\YʬCL[x__F:Nw|\@}~_/*8 wClTos9z{p6"b:{pW{ڋ﮷:[Sq~3SBu7ah885ӫױ Z!yusT >?W%ֽnxe )8b nʻ-8Gv;0a-˝Wo4bj:Ӊd=0Ĝ"8_ד08Ngϩs29^.lz hbq)&WcC| CcLj$jDpZQ P̬ m @uzCp\y%ot-y(Ž3~{TwH}P3XWyRJ@g)!g(\/78ﱏ %6pEE3*C,5+L U [ ߼@.C:@}xQYOTzHDnf  hjDqEY0ބ-BZR =NC4XBZ9!e@4'#@_KUJTO( (T!@ 6TPqE`$Ac Cj-XݼŪ޵6n$E,BߏaArɧFc R ƪu l6.dvj VC-Z6ܓ2͑"C9GyGr !vZ<‚v;zإh9J31@Ns̙T=;A.bz+uP$׸حmv5 oy'Gc'm,w: 9[}JAexpi3`3< ^x>&F0rU}Щ6>J D#5$"oHSpmK};Dh$ځ I!;m-Ƙirq @GTˤngBvyXu0oG! PвI1- iGRu9jߑXL/вӥМQzecB(>R*Z~.ЖB~Jnۋ?,V*KK KX/ .%q߮[{`Y#~:΄%GN\]x H'/;<߷mҁ1ڑ T9Y;|8X`H&^Wn<1ލ+Xf_(Re;YXsj5sy#QRjߦ>U|yś;><4/vT,v{2?dCа#!{5[$49V,$ U=}C|Xo~k,_\{Mrr>ݛDB㯓2׳]oFW X3iH:vUn<V"`A Ɣ+, BaQawI14H `(1 LDHg% 3Q^0鋊%o)vcO6̾c-%ێNWn#}$W.wtbwmZknwW< `ĪXb59 z`RYQͼ8o>! >NX^]'rθ}%|l^)_N / ~_ϖY]eO; 3o A#.5ΊAA!+:s0I_YxYC^+v 7>gpуLC~M/*g rG؎'Ż/7͓SUXTa\ŢYc2mwW_nnjrSb_[dpeRόaN5_}t66/^ϝ,uRbmuM|Uhp՚ؿAEۗO&EH`fc'.;:붞 )*11櫏eR w׫ ښ'ݠ_+QN6\zt6)ރ+#1&tOZg})8=!|' hc&,drVCBφȯ61[2NhPp(kEۓflp OX@Sd4wv%pM.sv#gךVr+6[R~bJ_*^JH44"(6X[kSD0TIf+gUSAK_uU "gVêR2zt\[.ż,k,($.lXXƥvL1?Xj*пm*-?}FDЛts? 
#09It~e|RLU:TL5;{e*RJzڗªʪ' 2ťRc{J[bӉ\?2a*0\?{K܈lG<)hmk` K\^;-ynU |'e}ëT%9NѦݴ;rHҨs~"@|TfY6#:4X8cŰ4(#H{ÜDX%V#K-<+އ/wMۇsԬ!tC} t2kIVY<|nó±(:JS3wD-}%]yn(Do# }|(GZP6@3.jFB$&W2xje4cXsl&5pFiih.FY9uc00QPLh%"N_4 iXEE1pptҀF6V*J+ 52 52(2yK^.׵Wo-@$wZ KɫK~VgQ3Opٰc}7VJف>?xU}53ӯ=VG@9kJjπ L>MzOGBXSDR޷6Sr PTe3:Wf../|>.^_>r2^pRA7Ta%Ubrϸ7xf㜦%KM+߬ޛ7%n*$ waP4 6kd 3&UQn8S&Mm^m:.XNM7등۽aJ_g8z.XoMCϻd.&b1Vi<.'vx\WzF| (&N_!fY8omo;~8gŽt6wUQ|_)M &G"\.L]_>l#[58jN9D8ZuAZc!y=3a83X qa'$hs{Sb{/Eۈ#]$1*(Z'6b #L  ["unW}}87Z)WTOnf51;m-S\3U HSQ ((X$My-}*83S*Xo"8>:AP/ (=ޠ63 3詄 t9, $zn , ; arh@Pxu~Zز16i<\!B2l(Q;;(IvX-T-`9C@h'v 5q+J7c}C,v~=K;OqL¤n?pIo}zCuYؼ>NoQ};X*׆#-CqXjp:LN %лtN$aGNϴ}9ܤ/KqvK]qLoْ^9&D)3:rS<(K4i,_;kY ö_}VFcdpayDK (IN Q@/%0c" , d8)g: V*r!B(rKĵx戗w[Xҹ`V>sX-&n;p%(lFȨ"uBb) Nz{7Kię0MՠYF HafVXF66,=֫kL!P}B#d0\( H U1{ ݸi1]AxKActB~eg,4zDLx[\Y{!XHi-b!٭ROeTX Uj!έ 2mc J..X}⬷Jγ@k-`(@?"eLPJD=b+)Jdd{Tޒ))o3Dip`1C9;/tRZ`N,  ՔlcBRџׯGsX&ᗣ8n͹]UktuTǂ| D/(P<P_tgHΦޅSf[K<`'rop*t^^@)E&kF2Ja@lR*HYW1֭LO+ez=1Nfc #GMlۯU7hʪ2[#A:%z.ףϓt=(>q$.;4P(?Wb%ʥTQ,ΖK UrlGﶬ^vYwV/v@#t*]}AN&فyO`y*=A@ <'ٙ{0Y ˹[dH6BV-5-(S9~v)?Ξ߳8Aݓ 6:g5&U(UTдO?BPUQRs<#Zڼtl){RPˌƇcZ ȩCY>~-?l>l>9ii2ny\'OnL+)@u7_sNm尢ToB8Ɯc1\]_W Vy X@ c /1*5B͠9~ݹuy PgzSYT*'u:?uUBꍜ>Z;G'#GEaOLPrke 4#?N&+LO-g!OLD٩5M6 i ($Uc?~,("ijD W/Pn7BJ/#qGFf1blD͞6Κ#SyJ'y۱bsҭVY2"۞W*{|l{gɆRqd#&ϑ}(oIV^wJeY bY[BdXD͓9gɏ-܍sܝUzr9eD?ػRY2f_g%՛~j, =tMfOYr"[T6Itlgۜ%m.Ůsd\覽ej~-'%'Tv̱Je%;⪤>,)8ٙxP ^ cބ W/jAAonan4̕n8F.+߲ӐYIчi(?;H걓j4qh/!|4ߩ-7X.8xt͟9J7dbL玧OaAwt):4̥R沉JS{ttStQiQE*zrN$}[Tʎv%KolϼIN8iT hVH&PpұI8xXCP:;ڟ@ z ZWŻ0@'6 zt1N($۟hO",_bxvZ{: D5LB^hw͉ι)DʸW$Ȇ|վHf"pqzkG/@GzKe6T$\h+;Նfeim`JЪz&;7gZ+fQ ?Men,Y!Dh=5cqp9]F* H#X}VV@x}?xaS:նªpTߎT5;8&b0jWNR4;~LX<86! 
ʊI*Cur/̞'&WO"L\Md00}&AȳS ^O.kjDEW4>`HzKuǐ˔\o)z,%Cehȝ>v{חEquˆ(#`QqW)KXr]|/QHrłF&̏' -ފ3d`Xń 0!|JddY#@ K0Y ނ .O3ǏaK7Zwr%-8"=)gx[Au SˈdB\(J{3."9|܅5fADb!JoPB082{ܱ^Sf4BmB^TR ["ǁ"|nqgYC:l@Nc| 5H#O'p,Q"@rܺ Xp*%(P I8-T#mގDL#D`Ǖp$$PyX1HK "dq0&\dk/ 6k%4u#d0D!0x =bel=yۿ .hafWJ^ Y01:e@[*Qi3ҋbU- LJO5a೛^xZ{e,ݹ"\|Qeϋ,^'a BtnK"/%%ŹZjU,sc~t4j0/Q9[_3h-^~%Ji'nu1HmTn-r-t:u=Tք|"ZKj@-uº ibF%߭=N$4wk@Bq%SDtcհucZN6X9ݖjCnVG/Ѻ5!!߸֑)NP7v*Y7Y:>uBVѩF"ے5V{֭ ELBVnvú ItjQɺ$enulo\DkɔbUAҒm6**ۜ"MUh@Bq%Ss6}Sź D[] RDnU[ii-5u=Tք|"ZK|޶uSu EtQź2Rh޺[큦Z&$ѣeJsLKWYW˺8ۛdzq{qh￧+UˣIWmo cBbi/ WVR >tE`HmS/@+8 czDZ%!ĖBhbil- 3ӜyB2ufI x[֬bz wށ0ůU4_ QZY^U*vCCOޜ01.茔J4dj.F W;cGףׇ~s ޴ ?|O5L#Kk,i<Lv}fd uZdY~Ke ԧ$VF.J[UF֫Y.XɦtH~{? u q1C4f%ebvMi?5]كԝ$m磳Ю$m&?-__v[VX"oBE_|5Ztվ€ k,:m%XVĴ`Hj DR,fNn]TntK |;Ŀሄ˿?'_l9u0lb+/5ߟ*U]vubwkA10m0Se$ qGޑ(0HYY0/O? o. R`4\,~{/_ _A׃ލ2G` `Y&:<;Dcc^4]$ zAe qU6C(E/aYEjD! q04Oi$|Bҝ]TBEf.EA7x~ߝ+ѱJ 5Z'`RsPv-r􀒓a C2Pc( -i);C@gT 'S=NOUF3-"MV" ~ Bj%ݡ8FF:eZ͍drDZy^B\ )}F* r=O_ƌ q.޹`W;@$/%RQkIQI)E9K=UQf]KkgfFp(tftI~Wa=NK{Y{.췬G&z6+"=\;,%|9+!CT.bNi8Ayĭ` |ȉ 3B:UO3r; A|({Na/?mˏvw)pJnϻ>sel=\=LA+9Q<*F qHE,Ez#rrڰpw NӔSZ;)ϔ_?(ɳ7N'&)ݞ6)hK$fMaIûԽIAv,JիTwtWz4KԥQ^2AVAe0Qf, 1+BJ=rGXu)^;1e:MJ'7 MqA%HՙwVVW[i 6)鈢QduWs{N՗rDw{xAz[eU2-oa(מoB> oeXXNop^l=uD+Mg4z,}[OC=Z풧?[89.ь俈]\Bf./#Ӗ1lE [f(!WMH[sVb/}&^ l1|Dp2$hO{cI0[!9d "=\6𧓩l`z@[h G|z8%S\$l%6t%M9]T5^^e5]^3 Go.'y4^Ңr'B窱1ٹbfęXF7*94hwۻ\WKT31cL7X#z:S?ܜ;t8w;`MSw=߹m*jЫBk7J'@]5qVHKO\n۟06Z!B *{-1<:Vl?E[ܸP I̙YLFXR hm6qRĒuby_8o7ڊB񤉎u)k {Ѹ}cZO-70v1NoW'SzVBew˱UCDԘ1Iot@o`GAQ1#^Gs(_"yT™hZf֤Q%& Q+ ^^t$ sۑؽqm;Պ- q|v}qiy-M ]뎵ә@RL%r[B".>'TNChT4, /Ϯ[~)a3 ;H34X&Lt}NNgj!ZGnDԠq+**.YA 5%rYqIw JٿnV|{ح煅`eֱ߭B:lZ;ׁ 7Gư2aϠsr"2zB B&jAc4Fj}6B2Y+Y nYoV]e^TnN7뮏K]U?>qt*U,P:{֫Ǒ| jpW)R0q"L!?0TEH AURڙ;P(N.P4 KK(!8E`V'3 ci4MkzCVID!P+490oٔfYNwEt\prEC`% No]oEPliv/,F=nx27i|3My뎷pl]b!bxMt"]Ƃݣ3k<V䜔:*q! 4e\- d4CbAJ+B>;AU_8=_F-lgj-{=Br3)(YѠ<2Q*e 1V'@wu2Fv8U=e \ }wܮ Kw0:@)ۉB4Z1`,l]]@oVduGwu@/G'64zPzcZy%5{}Kh28}Lɍ+Mk5eOsl6MSjT`›&[EF)+"a47;@IdLmZa80З7ۡX;(6l( Xh Rv{t t gh/sjtէrT$+dY8Eő'-Og3R_W\ C.]ʼV$G7D-ڦX?:ƺNzS+! 
@Yi8?E|ްrt Jӓ+"?MϾ1_&.aR&.aRK-N.F Ej<  VDK0q'{Ƃ'7Zo 31{UfS< 4!/!FRcZ=s@= O_ *ܦZH7[iW' a0%BpnOBQaAaHA@f>ίsD3fƤޘr{cRQ7bmb9%& HEϤ:묐 `J6qUXK3QOf LAF%v(O Z;L}bZufgr[U vyUN^aK\-Fein.'nçʅ3ǐ6]-BVO,nն'0䦥8wBPqZ:T~>9+.C@1_~nH4d?ܜ\ Y1B/Qƌy&ppꮧwW#W0fF1[rŌ?ޜJū>)42!ֈk.XSR=%Z#Gc.v3^#[tC{aWk"GaeL*TLqQs#Xa#c%n)TOtmvY1xkfgvс`)6fQTz s"$,/If JD˅46|kݳWΟXj+Q j]>{Sg5wBb+ɵ`@3O,! uzICgBe>c6Ey7VgcXp1T\9I hr8Jq-dR4R6ڙ[ 4>P*JVVg\"EX—$11җ2q_{aM&:N.2)Fu= |dڔe:Bn/kgZoQJ*dD!fd"Aʽ0gNs\ OkKwU"%O0wpEv.gb RZn2ϧ*ly߹p@˛b~;]s/|m퐻)A?Wӕ 6Zy Jρgċb{B}+@/D+FRF?ϫ@9'faEd8>{ KV=eضƖ^vZK+6$yFg~d~:Õ5%Y =n7H`tu?)}~qzs[W~}i6mB)fEp/'IU c9'I`T#T}Vg|}{aY@CgU /G-<~~ڜZZfR $j#Fѣ=ǧԚ=c!55yj#mk6B Z (iXɨ< G'9O4=Qؒ Oip=Kr98] ~KUK`bB-hxR <6ßy5t(.d1\,SVF9sf/gl676I`9Ҍ0#9ti$l6&gم3XhlS5S䙍L}RprS胶iBȢ#鍶JgqOtnMV +yd?bJr#6M4FmOȮ+=KTp3ɚ[Kn\D1Ӫ1 qj(e>smnW.h!W{L~O)Q*`jE qwZL2i:oU'\oDnz\h5 n$- @oT}b},k)z>-MއX mf(bIShm%آCV풂 rY#3]OS&ZNtɼMJ^oRzכz.IXڡV5D:hW(ɇ N{$$$=bC.>h.>|yVy6j=^Iq&lRw2)/ Lx>wHXK͸9`/[ v6/3c]"HC[jl[ggcf=U,Se 3&EL_lҚr{K)Np߇)=Oay calĭc4hC4'zmL*AKN )Lv1@Mzxߗڼ MVR^oߟbStuNX[\yҔ K!X0VR ]R u1a0J|/G FM BL;m..6JGx 26AA-xH>`Tn'Y=Gu/,8^mc7%+#X˭$ Fnnr X"\JWFa,@ua7mxқF?3Fa"=K>"ĺow&4yCS(49&hDTMKtCiX i8hFy2?0dcrbDG5̶kV껛?zmw﷾:!L`y"1f {E:vpL>35ww`| \&j4o.ۓ N<[pvo'U\iZ;cp欿7?nbg{?ɺ??->GsZ!<^ ?}pMw)>2FH}۫?L%ϥoW^k\% o>&nKO$4īƙcDopv2pA1r݀rk[a<$z;]_q[d"-]=_<`m@@`D ̨tCמ %ApDŽ ru>:i';ù SR0sFD w> \G^qIi eCNEJ N2l?Љ^%B9\w \P*8{hƉ$E\XV멦٧H xQ@N3jض pdowHbR4:apэN 2wc@t1:afdtZR]N*_a|ѭmmSj v$it BˬL& Q4q4,1j96 ߙ= ) \%ǡU?on!8kwCkjOi D:JLD!07B띉 & )d|9(nnm['M㣙y4]%XAL T!$ߦS+/)ƫiᢶnvY%ێXE zћYExq 5Y)r $Vŗ^<#SZyN=$eZkXZt5.|n+`oRiP^-gs6 ,J*1d׬$ `rkrzJ?&!^H~8'|ef`Mzogqsz[EnFHY/3讖gVB#cܼ+VUn 3N%Rt; Zpf-8hey2SrIBXs3SU98U~w ..n]] (2ҾQ:0ηI(ܮ=zLWv lL??OAܺlĩn|lj]E/!jlZb}=@_p kK)4]=˙;݄$$<.dD˅ei­ĿV \I#cxk即U~9 㔒k|NpAoz̼eA4Ek>rÙ&c/23Ĺ*Twɱͤ'AaA%c+Qf-As#x0ף3aS&q=.f`?G'ep18G ΀G/%-! 
L]ޘ7 2)]ܒ1!tN #5K!@B;Ƹ2J>A`sʷ;(mDAAwX%!I8U'!в˽ꇳ!L*lN4@*_k7м Ja\xkkYCҦ&LSIU#\L6h';y;HP}Ă4pvzӂD<BoA@e8>0^˗lCk5} \n4|H#LzJ MAR *XO7úќ %D2S%*omkSߥhL~r`*uk1$4/{2ZPMm:4'vI@;r- ^Lq$To8z Go8z7}Ay(Y4:K-=%{UލX;hT5{c.{XzVf:9,mM|#=(!Pj0 b_60* vogx̓贁N>|W3m"[mXD?4OTDt&N ggqk4i6 Őy& : ũ=]paW7C+F{SS5្cS Iq.{iD{2h-*m |߱-h^ eOk$w2\~|SvI12A\Sӓgz{(_zR MF <}lw=҂)e[ZUK=d0C̅N67Ҷ/{Ni5B)Ĉ>U42K(]f ]f!ŭdQϝs{3+e)toZܠ W_ܧ-ZgjH4-+U=[1rk`kmy7cI^F328>=7ȧ}m͆0c<ÜQPݳ/1yUa4(j2D(H =FT1:IМCbc\1 zhc0zq-،B t|Ej{ɟ0VMz[Y{BZV%P*OT"&>O)LTd&%z/..WDd_yd6i/f7S*@ZB,V⤡QSN[D XJ]kiˠ1`'?'y>U:#M>.7A}SUxpiE&z?wNVf,~K .+~co$g,&7E&vK1BI55҇S&49`=\M$Smq9ǑpyyIϿ$ǁ#ʩ@,Tʧ OR"$h>Bjǭ4l啡-qJ{VX-v߿B8qY()úiU谞A_ߎ)[+rx蕺i=@=zLm%5ony ^0)}ՅvfH1P2=j55 \4 TA{Y 1K) |dLᔓ oa,eTOrw@/k{/ H-zC{Gpg#TQa)`R+Jō0L\b^B׻l](xCXAR dh1(jE4A{h0;gV"ǕF|vj#)uPƱXRZi4K2GF) [#g^k 6`y"dBx-bpʬ"T!$~Y4ˇ{Z̴TgQZr)mkpVRj$w,Z[toqy{J W99j꺐g$BFz&70K *U#(!sBS81ii$+v*]1V}r0 Cpxfk n:J&$.YES *ZW`%882$FJ"D!9aZK@0~APgVi9Eҍj U@w/ǐR 4i'PF;~0- L+: fs2\&z(l `w,,¶M!4IĠ/ȥ* sʸvg+ɘ7;AzϤi(4)벃)-y:ˑ-f5Bw0; 1-q\Vwmk8ߥ6fZ]AZ10anq^nj_ wI2V,ӽ,誥&H٫Olxc?ϖŗЬ"?|juA}:Nlə DГ*uv=giQ24Auj5,8N̾]D?,/Mɴ+$G/ko{Xbx!l#wMfZỴ#`0HF|B_N[y]oN۱z1~7i;؋_O$ E0ɴ< 胕($;{$)E+xq"'(r`WEZJgu=mj]O'˯YuKQvhN%WUV襊Ue"ʖ{QM[b9m8饈[`BD6jrηЗ/ w"ܪ۝')l+ ,l C[FԐx?E(.؅7([eL9"z, 3ltJo_< ʥ+y[M.n? @U^1oDjVBʨ \u<)6drqQ"R'T33ȂX9$l䦏c_?ƾ1Uc3ƾvzI &d`xq+n8_*5E bz5n1d[Z!~(6etv nTlʼnG4ɛۣfQ!VAtuu зh>û{yL^J<וg{ƭ '_e}bWv-ݬyԸGz|:m!aڸMqFv_=ԞNyç]~J{^B&?4䍫hN1O~ޱn\6n$&n2Q>Xi3x`V9iА7E:=&? V*)u!8YrLևq-)))9ӷԘKDI7, gD} 5'f~)(= 7lZ V:Nr=T3˗TRD/9:uD%RD`˜aD @zJKAl[i%Ba{l)ެrHTGM1|ÍbbP~:R/5v@n.>'>Z0y T 2C/ U„65bڈ *ɤJCC޸Dw릚%'n2Q>X iehVۑiА7%:E$qZ7FNT9SG6)B1Gnș֭ y*S*SRE n_KVi@2EyhjAQpXORl3{%7 1pu%[tskS5%\/Y$!iаuo6@(80$W6l I4"$ݧF~Ԑk`JPJS.,t))#n];T?x8)g~/_/Ftu3)?wSR /?_قaS4=+ EKbQxWr`x}< 3jӭC9k{DSu#~ntwܣ1=UoX ,^"`бug<5,y6##)ـENfo61c>aCvT{?S<.HH G {ICQgv% FOGQ괤NPkrNA(@jP@c#  pJ;P6j)Y>jQ,J$-0|ǒI@ցfMPhf@5Z!? R-d^%"׾\&U.K$dY+nQ;⪖oT lLf@ j 6u.C!߼h0LL{Y\JFǭM3?|̈4.~y L"#|':WL CW䞩f!_ ԐtDA[z. 
%o]Ecu5|D^ڇՑ6̰sSpbǠA1K,b>+&$L rb$@BG iul\;7l+ bƩ;N4Zڜ錏/i'dT2$aNa3W ky,G?K,Gk+Պ[LE<@,c60 TYiA.UE% A-cO bķ#K$*s',)ci`*jhUƞ5dM%1Q;hR\ tIzcO5)\Ŕvlx1ypP +-{`,% 4qQ1OM&~*J~pݧ! ;=Ϣd(HW-FHcYA6}a)G` wn|u_TJ;I|*6Zrl:Dg?1o 4ːfo-SEr! & iW#?!:to]Z߬$`D<=/ͦx%Q<ė;F-|e8 Vi6zdh raU-l7Fq]~ݜꖶ=`uӯszg5Y!FA?1uDзg=ϐ:y<$xi7PbD;.-S66G.1h`/?ވ 5@9GIz;"̷(a5_#hz&:Cn}Uk6]2U#ul8Ҭ3 mSGtsŇFg転i>c$x'$ル:z0Œ#(Q)E8}W7AgBB(SZϒ X΂~HAě3|}K3),gE/,tU!4Jwy^#UEZU=O&Xj6RX9m]-)1A<{e󇆏v9yEDgOv??]["Y"x4}o ӫzUgtO+Re1meVe|ғPqoykSCe9̞Z{48*lv:!VjƊ|W+T]d}V?f﬘٫m#x)l^ɒ[\k8t]`vZ=l>uBc[&XD4 xe/8x[~EYt'V%@5q~H%jޣE4Mw8 sE|NUL*)U2k_6SU ؇KVGу9Y-740$=dc wBf4i6Qx~{X碴Ts!H>Ǖݓ\<]F?_?]-]q:6I>#Q=^cN[HH$X VHK kfEp=)wn ƨF==X9 ͵qDIHJK4YΡ#sT:ܣ ύE:fY3RMͩNTN̋ZZ\s ɉƝ|e[&y)0Zߢᴖ Ґ~3%!]ʿ_ghۑ|=諼iB?REܞ^UE.|p!fY@>/nM-~Tw᥯XR Zax%|!9/ qiݝ}>, uc P3ce<  ʇ;_|{}w=iCQ|@&Щ&+W_KdupIQH|]>%u/@I}R{u%e0aOֺsUQg@ȫO;ڵ;*$Dc\K r+EsZTE p A {{a:mbcqGSA2/̌P31~ը+D/$MN Fk}8am;q=\jo#S\h١qnw+&/QfT(Thbbd[Ҵ)N;$'J֢=b[NB-?Dj 1_v:Q8M%h9LT$|V )P ڬ h@g @_L,pLg4G85g=&ek%3B#[ yDAu.P"뭠*Y@uoJ3]o`Z3lYAA0KBRI`)JںiA%qf\Gm;Bٴ"OPvsҖ[DD(]<|y7Srwn+ W=8྾Smx+fGtYH\6PE \)WPsk-Ѽn1!;^|-!'3\M"V:T1q0@Cn{u]c\rvĻD6 oF£"!9ѣc<c6fخՋK =^Nz'{_Xf)*Lއ% "<}]}^eʆڱXZ>rt>{]mG XR׆$(4IH`L%!{1'** }2A5j^ռ yUU/ɼrV*p43 #^35gLL )e}XiIɀԁN~6kZmICoVR{: ^3=2Ɯg /Eks*W`dY6߯o&{R^ 4nB?^/IOD _$[9ԹQASЪ4^To#&t8l^.)Xa< 1rc2+v4lfQ r5D kAQDzɌ̍ ^en]T$j!XfV+yNhf,BZ qyA-/8suLP3`r-] Q렺'r]~t-)u K{V7%6 ]8nɵ&hPq=%-߬rE_[+Y$pNM uĶ II6SgUĐi{2왛*:ptZ*1hwAtVA˜mvR4sPЅFT ㆬ+n& k"F. 
f\8+1~ ~=_>Cls>M^]\$KF(|6Ps4W[EbG 26$WYCMtw{BiPtu Π= )Btc3toIMp׷vWj$ /l9滩˗y2F+d#W%q3|ww9vu:Xoſ=> -ZZmXw)֓DQiUYdńʚnYwB+np7)J:qз.cEwi(NsZÉYnl|yI@6[qF&+qKg m@zNR;w%)A | / vV|y.kmBGw;n[S&J 7PrViDHéBrvjz 5;ڢ_;?WGF CTiEr=TsND3T_nQ*>P鈄G5jlr;XE4_]׃, 5:u'7U¾'d ~L|vp~hEõϧOEG/()rըǷJ+˶A^yoܶ p' .3k/>DUMx'.5(wEhݡԹ&X74 u2xo%ꏏiQx!~5Cջ' 5c?ͥlnU\jU8¹m+{Db\3)rEOLV~k]54EB䓿SGz\eLéͼzt-m^=|4)Q,x,H^wqӽK4(qS3ʡ}hS3ک`j Ծ\̩~#t4Cd8zv iEQnqio/M8)-,C ;shag(\;eM]A3g JMLr''qeyaEXAQ35<K[,Ay>aJ+-M6 UpT}"OZvu˞w걌wn1XqϤ׈P)ID3 5"x=tn\<+O,lDHr2"yϒ"Q;PaBVbHd*.F2O)>C6*"}9SF7U/[\_oO:Gڧ;Otܰ-+'YG /f1*&nߏ>!O 34X\ѣ+ŕPƺss17,㔉񆠥02Գӹ6'.cn/ԝ&J֛&g> W~/gmYW.Xͥ})ڳ$f|G={KGW x؞lғZvsپMƄ[_Y*RZ6 _I> ڔ%A`Zl lZC(5L3d(;k 8stFÍ]9Ň2ALJ`$=s` dP-5=Ń,6&Lj|`^#}OShu*>2c4iutTR(}UluԆ|*zNqI@EI)~\i}ť9e_FN9~HDŽfݰh@;O2)<&gRg9wS,咊Lfr0b- 0Du?U}dL@Zn^0[V[UhSpR"ǙJ-<$3!eB)!c `Ņ3k./V&)Vـl֠Њy)(Jqcw]`1*,Q9e9W2ϘFpd7gZLf򞕍 $/2 ]YjeTVg2\%u+3)t㼴u)BU @N}APrSܨ)#QdYbyUz ?~4d.;?Vb)ah vJqTqΦG/ᤣ.Zx9HZ2>ce9rؕBqMu%c|M26:ѬE(/MbN*z>g/;/{Jў5 a${NFi8yz^RR2E:i`"6p9%d(Pquc<"/s_YٽӃx73w=4RE8ZQ?-|ՒJmy[!_>=lLNϗDu|nO>g\}TQBpk#C10vZvlbLJ$CVڡZh&k^G]>­d(0L}<qK_Hh:vfr 3tZ6%V{pCn(^}"wjŞ,=C%-=b%/B]޶D$my[MhVɔoNF{;^Gۧx:^Vj`Çh-EhhTM׬C5< gk/$irOqCFp,)]9sh9B< Xu<{mV`x$H0/Tv].*oGmϥ55=~MJֈ=J0Z='#Uϟ⏄m3Nc Q!}8:t b/U*p[+4*gʥI,_?o-M%W٭SPKA^:UqY嶺8!i UrnjG*N؅2)k=Z}ϣA* wovGO 7'zxziE+5^]?V _F'uWn6N:niDϲZFbF9 *!$}6-Oޭrfn ɾaLa//o\jN=Er=ǓVW x*XWTkxa7 Pɻc+0 Kb6|˴+r?wyS\u{h*U/j/4d+VR#.I;ĔZ j˧|fZ*㪖72St?b{t9AEҲdL"~ݐHCua^gż>Y+ HlNXsU& 10[ɽ_e5W'3N/=`u1 νzZ$-;T{o`;Ɍlɒ-h,wAjrxI$.%ǣ1: `2xV" +v A{ 1 6q] L<'x֙xэV@ ٸIPLL 41Cʘ:"h# ,#J;"%-Z:o,"ZMG,$)R A]+Z*!h;ÃBhc{Jߺ_ aCFMa'!8HNc Ff!#Y$o t"һ[fljYHoYW{y:YGeI_;=r0m͇`evt 1aZr!%E?`K ^Tó1 Y0j   D{/]Ls%yi#Vb>(#B'A\nCpVλluflߵZI'gseSM 3*%!itMК7x\0S 1ٓgƁs^!32z$B OФ=ei)q#f5bugvLrK՛ξHg hK;lUO%">~zm֏O^F>~}Z OR~xur|ܹlV<]P7\~Vۆ3Xb뇕Jʅ{'cJJ SȋM^梉Ըً=FHby&zR\lE+c{bV3 EG1//^N8Hw_m0FZu7zqy u;Xu'5ޢHF0 $q1q_0F>^KH~HoC#;p7M7w~<3#lo_F//W]hr`6nrGOvbV r;7OsGeՌ,V4VZ>w*N`;6hg9[iVWWѱbp.}SnG>͠~ًV+m%ơJ}}!x^c>!R==~EȜB$kZN7n=n\vF.bŐorXJȀ)҈(Cle%ou;YR#);˂4wS7RYkV=/-ǡJ8 
4O_5@N,*VSR)2%1Q9&.8h|գ|iөbyw?4ZvJdBd 3))X]6l +|Qe ]Uks~:k#dVHi_%]'%KbQi\Yr-d:q5[8d=;/%`NV\_ >;'exn$*7"_ӴvdNgvޟyՕ+BW9qO?tR.D=r7|,ځ05t_tA6/C.Mu^hS YO*TsUYkwuvA#l"0Bvƶ8 gb yk7f.?S$Wzpus}^\BG%wߵr7Ǔ?^]<6\0M]2 ;f1FCb2K@ϸ6$fG$zKbWgV8C+{?ژG6_v VhNn6@*F)Ԗskq9R ځzd#G\ QeR ざA#sF)n[ʅ kqE;IcfӞu$ @5fm" yNrVz,FCD \V(צ:C*}^j\C? %sxjyM/G 67B99gCA KXVc DTA|]8|ߥۻt 4+hxi{c+Y=&IȒ. x9*&\Pu! m]^6fYv.jk Z옶dI ZIJXo3 Ż舙z*ERA!fvz%rx&:\613 1bHZs #l(i=S}GEI Y070`f0# P+瓔J9 '޵qdbe4S:'@I,v^vF]-%>U%(^$Hb|u;wPIfD+O R*DBAl8)Dt IO2y#4F9Fi CZTLIL;zwbiNfYL#f / I )󇗟5 v=WxӽiZ%4_7ǤOU]VqJӬ` y <#p.{oIs2*+h Һd*R^R[&cQY奆NN ZgCmV[Xy"eA=NSGJ jexs27]ظ] P*25#gff՜A^ YVdJV1O@έ[#\?~~,t.ur>}9S 6(Bh")֋f;'U_eSL)2[0I܉~܀h'|Q:X'{Ž{.Vc_$b5׻m  M%k|-f;ƒ^&ZNlRojcG-@BNdnyaT3J G|3d)AcJI0\N3J;>Pi+$R0HI\,rxznTkDik]9nzFplkV*,3s1"Ҷi<[Iza'CH%}8ywHG""/B#;L@Pi5HMc|Ș43Y-^}U@^Gx-NzA. W^9x*(WA&tQy#Dskz*J-e}U36J|"Q?lwnG?qӋVT:pa$?(_T FH~W''=;q%\j.'UO\=-!*COOP @[*]I*i7Ijf$:B…yLB Ӟkn].`3|'eI6$?]_‹,(ZTU"yq,,I’B+&)lhEmEmsE-D2Ds-+a($DشLA5Xe=9VLJQqIC2=F*`rVќ{Wë8a׆B,pug]˥]1P׆j"Wtt 9M S߽&J}eWu7 #߫cy^nbTU]xsbpu71%ӫrQ s]TMl:\ɫn_CZ}G2G\S1] 33/>E7Ta3Ls>p˲2%OQR:2D jLj'XKA(qS)G!-UNS#Z0hWYNV.DD!qmaڢQH 4jQFqͻ>@nBj A8 $20=:Xazl4}Q*p"}$tT:ie}˪A%qsm0.%[1ЀFFjRk#UbǃR-/RhoBRsp:mڢ桼wm=QAm"6:Ghm*Cvѧ}h,+?iGAӎ;*T6Li NlwZ{prˌn9aW3^\'dT)-#FQT^FAb(2א @:n%3JgV0ELqgb܌{&j]ߢY JuҀ i4\ tjKs8`>sW QBQŕq"il^c;X;X" \FdcsbQjG=T4\cMJI(1 JIov=|c&g9Wr5vu΢YVM* rH{h-u\~]y׿8{x@zk_E l9?wIYNM?dȢE-e.["[qa5)g 5Z9h@Ujona_RͲ&d`uV-seYJS&Noz"G5<݄ *t$HȽVe:߸˻ݗ`pq@b)շ'p0Jh|e.oU; \prVo~Er;?٣Q3U#g**/r&gЇE E&ҿ/7lj-ߥ]6mu\BVgW"a8Mup->2.ԞjBE6  l.cc/?U ZՏOUm!1~Ӽz '}ԴZt9had#ҍ66H唂"Qh׃=,o3易H8{{Vx}sy ~ nb`'?@6_Gk5\0EeDCIj/W_*+B!~)|J:W_[ zTz$ǬV~x&]R7:Xﻀ 9l)E/HH{AYÍ8vq0Kڻƚ^ɗY.R<Sf%m!h9! 1oBpvwVemA[݃NBPW!cS74o^uO{T Z&۬f]6!H.WOnx*c=/4('4夏RBthć` kQK]c}ZW?ۀ|5ȼ_k8pȮmkƭzrVPp4}jrggJz(&,Y07>FI|sV5[lC@@㸋 % G@QJYN:ې+Ƈ_0^CUwsikUx:=r"Z E" GNi"讐 |ȑ"*)l7qQ' ?6FzZ87/W5xQfDp=Q1=BRg= 3Hdcf 1FH|iXn¶5`}NUd]ߺUToޢd$g3`7DLFޠ22m$wvS.y't\gr9wVpG.mXnHU`6Wnُx$' pIvnf A>7ILJ[^p [< @+ifPi^v>\!cDM}Vj.' 
JjZ#ئ4CRf l$H9G6'&6dzO,`6D- k4.lcyJyI Z2jBb:rU@%kD ACCcQIK$yuj;@ݔam9Nŵ*UzGuF O[dIi5btrx}<-KpißrZf]Nn8Iɧ|6lJ1laY;(7%Vp%W΋4;xvk/u`ٖfkp@ZhdR)5y"LxT75V -cqcL2 Qۥ5 4ZĨeM̫{X`OOQL˘qttnO(3IU"}eB|M Ц$XJJm~>AD!Ue-@USrR$qn&d6AHD~\F`&Uj<=)p2IukK߰Bߐ2S@|.AS*Z;xUR->"21E9+kܼƭlgx%jWج7#!:b.P8!^Fq#ˑSq޼K36K&0,b O;)fB%8<#et0c*Gڸash]9_\2q̋\rH /&>V K0z9 ѷ:y gUk[ٵ$ V ֙Q'n/5R.>Q Rn|:|'m5DIзIN+&N',ᎲZ;dT;7@[ Ҙ•{CV2R=Ni$J߄p*IA6R#EىkƑ{I[qdI̩^ruvE Du /8) P$\YwT Y׶&Z]U(.fHı|3E# p\6B)!ŽRlڑɅ:DWq2Y.#q̘o|m,Dx"/ ,:,6.iƅd4kEzrPE~ddӱnDn.E1ҘQn> iWEF}LUliPN-NR_MȵBTTc %JSh,3^Tp/<É3|uAARk2靟jS4@Z3s 1P0Dv1I|bj\)|b ipIDhMd \A@T &fӏȧ5g3T렔"UFx8ڧҜqLl&={qoR:E q󒚐<ֱR;)Dȟw a,klB:c?~=ϵ˟u .GZ ^Lz]t =2-*!C:&?ȫDb$ 1I  f"9f$$A 짼i/nj1[MvlCLO'-Xg`\˧w'ٚLqJ7sEBc3@߅L& /Ϧ U%wOd IJb4SVHN>"9Zf,/Ӗke#eTJj;? i~[y4P{\‚2pB2k6mA졔R2xC>$'R!qnYPbw`GX )ż8Ĥ O1dC,hWtI2$qG\fv<ŅRJw9eJM "ךkYLkȻPVo/]pZip% :f5.^i0{g2@N gi7, LEǒۓ2t'ӔdȠ[R!BGqK!1&Y1Ʇ9y+W:ׇ"zHu*Lŭq [&`4eL(p5:- #RKjhB}Lmg\])`00aVRd  iI;q+P 5?^|JEe7}tj.0|'jjOA nb;96gBi„R ʝX[!%G,bh8XX.0%6~ >>wj|4iE=opCvVM#Ph`WdNV!b]o} ?BO@vp<>\wM8qe38f|5vΫ{޼o;mf~m0yyͻ잃v޼ߙo;v~ILo⟼whםgvt^|NbNvmw2vꊃ{/gϗt{kn|p|^IEpJ1i[?wf| o.䑆vc Gy3~E3鉫MpA*d|8bG@qt|w̌_Ͻ˻> l/<*ܱw0Qꪧ۹^OgSI<ݠV?WRs2ngN`W?m0ʆfdTs_+JoɴoAb)jz(׻!4m׽~3N{r? ܤA-AOS.e8\Fӱ9Ex=ۏBm Oۖ^g&^ڞO3w?^Nozã({lI AEݝW+p{^tw=Ȼ.uvGi, =w:x54p7κ}g/dgU78Oǿi极h=3sヅ1иOOG3uФy=:u)!~0Cw:(bB\8\L?.\(QpGYz-~ow/fkJZ O7n++ $' ) EgOp;L[)K4bּ8uoS=ZzzqOX}y d%uWT+p%]pou:W+ZQԕ2X}XY[d/kAnTPVmp~/h2TS/3ÏN k. Lk`҄ Lk`Z<Z"@j+c:| cu Y+6(v,Tc,&t9QUT%|”\n%z‚cD l]LqA JP;8i&sР%4(a(`qʈ[G'8El$$^%J-df۽yPhY%%݊ \}FXl.Mba(6q-6].fFkH2+V!@9%iCplbM[{`نJ-,2Wg4+? Fm&9믱;{OW[ov[r)"3(>9ga= W_vfLMx:a8kga*@ְt-P'*0P,&KeeDD? 
\~+ JLpG  1F4)wy@$+׉BR{O<-7 0_]ɒgr4!™3X7v98:8S] pYefD ґV>bIpϕB i֒XӘY\l'q:V:W+bU X,!3MF2O|زGr,o֪oKzu$/sȪ:FWd ?c +BQcϮ] \ܞ=؊t+6r ϱj _~`\houȵCܬNUL }U8jI.': &cDC9'Z RS_x_ZׯXIѡW_~__}}_uZ0~3Yy+?N߯"^/_P ~վRH cY6s"Egٸ m׾_w+&t rM rV Z5W6bbߞ t̀ڞOb 9bs<0Zm~顇'曱?.۾jNa)Ʒy?B0ƽ)4rtݚd8rU'Zf*%VU!XQR1ht(I3VnjsX]f..c꘱:fTV\Et}G@9)rtt.HQDZ'>6c Z.'Ty~3Q uLEȳ #kTCER$" mw!֔*q"6I~^GRoZAk3MbsB?v<@̲%S85k73.%$dfrvQ|5yӝ˴maM$.-dT)!(6)p9ȉCv ['jv,EQ(uf+CYw2aR͓>C 1~sNF QIfUi5AAC/~ᇓj7>ڙ[A: EHqoxAR-*P#7ERq]@\Ƞ6(WeѦs|aaoKJX/E BPL;_@.rfWLGQbͳgB|VmQc(YȁBg`"at *<$hNMT2zlS͜T%IeYNP]*Gsr^28E\äƚMJ|ӦS`tG2. ,b;Z2I1 Pe*({zQkL,>WH AObQsHc?_ͭuySZ6jy㴺8zq,r]%4B"$5znur̩v?7׊z EƼì $I;? |3Eeƽ7TjC~u~m5uNٝjTɆvdG6F:xV[Eֆ|F%Js֍ GUyimlX$uK uGKkcw6{OKk@@!S Sq@›1!ĸf]Zvo%ZȀARY3foϨykfKVM+rf֜G&8!ˌ k ##ۼ7T '"kg6Z>UTi[oY:"$z )0`TS1(kUțK%+er~rqY9>tYXkХCF,!1K, 7O8`9j(%#IIeX\ .;,-[=ʞ5k{m$_h hO&ؐCIQ|KmMs%RZ+4>iړkhVc>Fͩx3N<]{KX㦕xWd7ߚiŋvբ;tѳ*juv^/n^ go[[g7o}|-y?}ɵMZ8cbbjoSb\V y*T_(X  M8jZ˜i_zLS<4Ũye G5ڑ ԩdXk8Ǩ kpܙ@4p{sè7Qxj([Z;0NQxh@5+jdEƒxͻ::v|;&p|LiB1{GZ~v^xf*ܩvEEw$xڮ({ܞ@=|12{AC/AKC{`)Ets=8 OH; a*ǿ}q ٪vvj]Z;کa)$gHOg=H <;uSnrAb*%]|ۺI[bPk{]]bUaPQa5' :: }vJmb\”H+TC"cMڱ_nwշĠ_\(4F/O3ŤӮ]ViZՔY,8Iʳ6["HCPJV9f133]3/_~|~sH2 O?o_6׹3#Ϧ6.JT*Њ_ja3$Jmͥ KX3)oh$kOeFOuXZv9)PhAN\P'[!@.IcL眣iS:4Z!bSojDԶP<`Q&I KC^^*[QVe? 5fg KrOlUs8sDنtӒDBh"d?YѼIbfg ɠ'P u1@ڊXăA>P*) $jҹZ]fcD{]r> g$2w۲; "J0ȟdKS;i 8jD*+#jTBUK!a4eN$>pH?3!b_~șhr0sqŨRt̑dՁ& r%`INYWqn)>DfPk/%. t%wbJ$p0! 
1 i#ߵ@ 8PhB0S 8C)됝52)ܯRҷ6}҇ȖH!1׊R\BQ8KN(Q~FE'P|{&@o} j/Z_oh$ $x%x%x%xJp"ѤcPE!`h4F@5ѥUXF(K*~>7xft\e=*̍Rz3 p'yNg.}݌D[o>[_f&yƋ3J ,v$(NC(&2>Ჰ`J%1XNaaKdRdQE&H>wЊؑbc0L Y$lˆ 8疖PRSJHrF픕[}X!⎳*>=TjDa`ҙ+;ܶmƼB|p{jeaf"x_,283^9?OuNf^o1H2R]Z4>΁RVs?YZ`d+YXjzlUOq(rYu56GA L7ֺۛߦn3mےɾbe_w軧Gvx鍖wx&@:mEL6xk?քE/?;sr珷S4q6E p$e@:Sl4W_f׳c _OcعaڽO}a }t&[3[RDD G.2>q'6 (;7\o *1[[@ (o$ !Hq[Z izP1v:P\?! @9,8gރOzji; F޺8F=CAOG@*+#UpN\ ql)$sr:\}"ҌΐQ[cr݆.vmPtTf8 Ο Y`dꀱd!1?X~!y OF~5g{k<N>N1yΥIޝNgOg)'c),}&òʎgz3+~Nݖz5˹v(kB%P6pQpu<XCoυf(K.(,кem!H(tQO HXȮTcS ,ªcBU5HSOT\CAפAеρA1Hؠk?0A4BeH46ڳgàk' $wkdoK9H0[ dn[Kp#S  ɑb)T;!-1v$c?3d#KHK寛t&ߙ ;Y_}n?kg/'ZJor{tNVރ̻?Zf_6JI`APϦ/<ΕĘ۔1mM]]8QϜj: &&E>,0ZZPR=36L̤g gZ-kFfY &׫TNbU7èEo1یԢT@@tA :]6R c*@U#%aAZ̀]~ fuDhk1xF NP 5ѐ;D_v߈;ݞRܛ,}:~VqG␐NwAj_0f­z8)Ӥt9 ;Ltկ;0:A- Z, ԕB}Rmy XZes:A/ޛc<;L`vWaz. w{9FP`'1H2i:;Ն,$(P/5+jWԯi^TJb2Qi܎F l 'PPaJIb`bjpQDJ(U2S{$zQ=dT *-`Xt_Vagws-7}sWmv֗oο|V>"*|77dDखJ0d,e QH9qX  +a(9PpDV|"IRRː-#;JH 2(Z d&%FA K*mI,@ihlA˸$܄Ek%؍nuiV",uC~%#ђG/Fis]PBKea%skܨX"pN}BN(%Hr>chӷ)%.Mu [_kf\эǒ76槃ӭ󒍎P,S;7c<CN|aؠwq%Pxu>RWd.ٔ $H[H޻Mi8/vɲ!O7FyBs>O !I, $z'($nx X}9 涼7'8V ajﮛ5yD߮l +ڙ:ACq='w _ ASA?.dhQxh-TaPq̏)POT>(~郂 PZR'3q RM-D`@й0G#Fk6}cٴSd# 5Fe3*F9"=dVBVa֧戭?XQtsYѥ{\eW ÄICAXv]PqճEVlMPԼUFvbDN]O_j9ql%)`FWZQ`JD'aWI qC&Tk)I瑈bZm;vEۢЩT"P1PLڡ.9k_c@Mѩ5]['L/&K*$דhV䯕g[2S_ ֳxt%w ))HTH9,SۢmjMjmx0SɚS) U( j D*AQmbl%2MT `@ t](Mj 6%|;̾nUKh(M#$% UC1&%H^*,w3` 颪̋s)LYmЍCK^ds)]|w:|.TA/g2}lLųgWJp\h!-cXCSjH Cŭub@E6\2]oم+*M .f # YzBܝyU w3\Lxb䇩u[f ӛlqroؗl!F*nrH)!Op<% 6ICImP6*xkJm9s{ _={vi%W]}BNYl}Ž}٣yy.6\XӇ?uKym ۻ_g;-ؾ)R/?]~kp}w\=^_1O[5_we2@xh&d`V.0f\DuWګ}S*h~$/լ$(k+Υoy祰?ֳgwf.PM.gOK%O'JN 9"$* NF ]. LA'2( ON %10~-&ԲƦaR l*.uUѪ5Ht-^Iע-dx9x^gm>Oxpٖ8CYI\ՎlJY⺡*I"BHeWF 6Z31c5hgUTAjTҒj RTDbK pcKВh+$8YA]!=tF;Ѿ}=&aݏ(2h^S\1z"jJ2’/~ŝ??+00;aó/3F ZrsS@P&Ũf_koN (qAՅPhj ΠU1lmOv<OetW#N\ x>j&(ٜ=dg,;ލ>KI^ 5'Ϭ{4`NH0ŢaΎ,!9A~pCRm&}L+9]qnWIцt9OZ BaB4*EEQMijZ@mu)nT)Mc$Thc@Wp-5X*#0 Ϝqc6euͥ]!-k. irT65@U`G`]Ga JQވik6QV>&}^m.6lۻ3 ۃHGt[<|>xu( f~0ЁucJ;%pW{ʸO=bح8K3^! 
RyXmWdۼ[l7YmyyyyY, ){| %/vG]*wk_5_./ۧren07vF@O3wz%g`rowm !^|ks%rƒ\.+7uŒS䐶5&wXDݖR*ZzR?HT-[vXk 3u?qe#8˜Fr9Hl͢`{wM;JMÞ6 CGtiGө  QLA| y^|U"@r*u,xvQ38W=' Dʏ/#ӏ// FZhT$Qf dP#T+3ƫ+' cN|>b-k<#Lp}*6,qh0G!xpou:;գpv8(v=C3DK5KH6ހM![.3_y3-]HQ`\f=n0t۫*'%OP!,KTZZz mQt>9B5Kc6z6`PD+bfPL0M 0(8L|{^%ZI]1]Y1w J#*q[ («LҬo&H[i- 1y6)SDxӪ!Ŋ[iv䪕gۭY)3հc~^ zC_ɥ}wsهr.9ri/p."jk?~:7D*FFNtyYԀQ*~TF@k-gb,Vqڡ.A'WWSrZ+OmUr9jc"ԃPL#Q:>NRZT:,⢛MiJ(&&~CH`&$)ypS-$n ZP!%9n8@R$W,6_jLk}ܔ=v?/_&\ýժ<Xo\%o.U˝Y4p Uh \b᏿_8&k?}ƪws`NjQs6JSheX;[ӋnpŎ'm~w׬_ YM>[oRp%m?5T„-Ӊ}FwnZV{zr&eSXen,[,>66)P2wCn),M4ƦڥLlmN"12gy s$B ݢ |[ 9smS BbJ=0֍LSwZ*JXC5.2v~u@v;{ ;{Y/wpz`%_H<^y7D+5]olK ^vAB/cs ><$}=8ECzo,`h(X FNՇ={C"rk{! ǧI%-]OVE8N {8e$1a0!SVU"859V8xDX ܁}MD!B[9&Rs>iټ>2j^/̿w//ptÁGo^>ڝO"@^>{+1I@!it|Oq2SJEMW @A7X HCKJlK R0ZQ̶t'Z cNZ?Ԇ# 0qI@ `ՔnjAXBKz%0ӣB $+^klX@1ՒյkZi- k*\󺬡eZRJqTh ,AeZ+kQLLsVW0TH1j՚b%B؂`hQFD\R]p{0]e|1m}M5BÉGZz5yۖ]ev 0aZ35 I#`0pm26@ ,qK2#qEFJ'їWㅸ0+s)8rSJH]RXgU[/H׾'둕 {:y"Q#.բO"+k@/.hYˢT K533@$#ъ$$4O(^R23{pD-x2Y|RpҜCNoCA2&ҳ[㐂y˯v8n@pč!>ܪM$Y+,&jSm,! A48*ﶽ/҂2T~UVؿ f*߾0PbO[{e-:C@ZMӢ@3j-Ͽ<( *, !JcGUT񌵡rVs"a U#eNӶt3,;S%caibqWxh$ռZQhjXcH5M_ϓsZh>`#Nft]abK:lUF%Nٯs1p`(~v>D5La'{/5u<EN{ݩ|C_{ W sɷj[uεkدϧ,wcVvِ*@4fJ{P~K<\2 +|CT-AҨC^.u""I0›K_#d99ghN}>.vs!'4&kEsFrCbE5=$XS&DzEPxU2G:C3o@'\cMH#s~c]ch0rhPJ${0Oq` _P>NT//J3Dyoַ i@3ALZI4e?ˁ''r_﯇Qm_PHk蟾[0ˡtt7"Sq8"5b<VfB(+\(EZ [?#&=Ĭ&Z .eƫ7cH6̣\|1b菋+vpv,~|Z*ߐ7OyA&A?y*b!l DzL]ab "RԅVȓJ.zLDI$Y < TldGfg=PcMvJnx]Š !O+ކdߣT34Ew{4Ήr ( y :o7, 5[?-{rH꣩7 )qj/zM(Y7=8Smh٪ 8| Jo-{nP>~8ޙ [c|kF*p_Iބ?B ~ t_n֯~e~69j^~~p!܄~7M #-h52iL \L)K&)bmNrl%?|_\L OxƓt9xW$u 10?珰m^9K1>`5oBnBf}\ p$Qs$a:%< YD7}YbҔ+w+M8%Kg {elwXve뫨R W0 (e^ː53^\I6INDʌ 0]Exrg[ǟ{P J~QI Pdt=.KO2Iң>%KDI38 ? o ( 8iK4{ /=aw4:1} Tӊ{IsUgpɜ{gֈq_5,q`rv0,YrHa\ \! 
3V ɭ2rO()H)lakllw`u\(+L!%3oc=6A1"gb+z/9KRRIz%PY8IrH+,\.2,یvHL*Ȑ LT| QjWA @jhAZꚨ Ϊ(]SY;M 8V)GH0Yu.XR$17"$RzEIXH3BL3"1B20!:ji,,엒9!PmˬzԤ &F X'}C*'0dõ`9bG ,'bj+8^e:7/@羼jT.Pj59@#ּ8̇oսr J b4/yQQ:BNlto2 mk*E@1t"$P"ۘ1A0EhcKᅈIL!1pElʯ&=$O&)HDdاbuZ#dhTJX029|L>NTnV@4l@弙o106HOegQC8!H/FVX(o֦1 ɳhG^rlO 303hؘW2w-o1ԮFdG:H(Dodj-dc[=^.-ZY~riyaE]jݤ{V|2#ܴ݁[ߎTNY;;j%HCMB?{3cR]3؁T~|bw.8UG bdIGƓg*P~=X>“)U!6GTK/ %aS,ys1kiq#8b3R1e=~JIC?JTWy %Y7)ms]b Uy b(N:2˰feUfF)YLǤKHAKgPXO_QI:왧KR t%,A/jQLg nsgP2Ee(7a;! Ea6I^a]usq3TW^ф ^ڢJ9Uiv)>rl5uX E|Rc[w@24 %WTh/*4r5ڵF6勖P[R`0o"Z冽S}, a,2)BsM9B8dpRԡz~Z޽ħ_ Wsm/Ԋ)3#|xwm?8R#f,y3ovaS1j|rh "KB: سNMO{W!x?,7:5ZX|pH D;t7͓8h;pa6{޷|KhwLFEhU;\ƜTyKB1NR/1T92/#sg-ʥCی$D3$ y>fjJѳ>P$Bi9S-AAۃh2 *[w W 3 ?[bi5_.!uaBϊgɚԄbl -_خςрd}N=ENJ3 䫎2EUsKhL)t:y[Rm[*1:Fv}*znnmH.˔tC;kH}*M9%Tg 3۞OA)\ TIQi^T>50jH11R~h&>WVcQ&D T#2å*5d]GӛˡԊDKV7AB>^=76=G3}u;^-~fOZ)Jg9פk:ϢSr +I0xa0M & [s`Z G h=` J >IU~%E[SC\r Q#Fg/58' ok8K:N9R˩9`qN8&\G$`#l$Yﱢ&eKJ:!re0&9dŭ#fgjktjih`/KRj}%w/oV'lؼ`MLݩ F !$f*4bvh jxnnF2Ud(EQi[nŝXf̍"VT2E۾K,Q炶5mI`\/cJ}MSj 1.%Hms}n`@Ƃ RG6U/ B-WFWQrJ&Γe='$EaՉnkB'~-R&ÌQud$K؇Ǜ;31% jg66p¿Jn<#Loݢj[~3?Ld/-$NHȉ'QRZPk|icC>؏mVPlwp2[&̤Sp?T嬂G,*~$+L {fG5E:lQ(;<(#xe/~:?Wlz _frvrC1nWr4#&eqeľxcx.j䂠?.>gmvSGiyLr8nv:y&͊O0_? 
u`K{eF MǠ_3iZoa' x GG|P }=8|1r1aa<Ɠ9H-I YjVSNT97}F̈́Zng+ uͽL^ߥBu^l_M6jV PE[0:d,Bl^iݙ8G W]'gD,A &C/]hBoѳA]`gn'Tc֖|v] $7iu4!\Jw󠶝3PcuDbZqV/Θo'VD%yMÝ, 1z.x R4C#z^NtkWw6{:S5Z*d3._)ϫa}8w5&sXOrC\a[CcȌwHcc09rgSs195~jX ;qJTrㅃ$|o%MNSkJweG]Ey 0OxakȀC'MH},6X^=_,P=4 NϦѷvB†(>7KuX5@%ϹݻitcT>}e/⏷pޟߵ[T/64s zS˜^>sʻѩ4yK}GxkMk{gכ_ ,{"@{qqIab}JC󄃗ЖVLyikҐ)u7NiH 5Na^O}PkB+촎`e2h1Ё{oSp+-e *TKهߺdRKF 4PBP W A ˉ{Wa @*!mǏ e /ֵV2Q=m9ut1TrgJ/PceFO10 U+?ד7u(f(!Q(>PPDKLR⏡1bʬ] Fc ,SQڥm=6]Rb8r?LBd A6a$Rh6*0(ؖ#l]KH~*m J7@-m 1/1U0Ι"#guiuN('TBɑO# RJ9]n/[t+E|W_|= WjWݢ>|>^Nf$1S-9Ǟyƞ=.>=.>@Z,> 5Z}q* 9:[-=ʼn|&!נzhR/j@6rWxg"/{嚘dTzh,YQB#@RAAz)5%x'a7|V3PM.$?EόwQ2*6vi-aG?j+]Zxiw4=WqvYFcI0͹4DΨ,12c"&C);Vjm0*jnGŻ_1qCM-͊O77[goAYН%66߯6u!m=,J Oxnh[ e`*gX\eb}ew񞜵vϙe.s&p3n&mWI&LʩuF{F&" rp:س=v=?[z3Az#.CnxOÕoƘC6]Y^.Nd6Myfg([izA9 R_2\ ,uFQ0;62)<0&EʇL:P%6HQÆ?BзX +&&C`<9/9Ҷ7(a,3U`<(Je\&pA٨S0w`cB]Fψ'Xڥ 1a$жd?+M=bf:(ipxi3n=CP$&WPzJX' f}+)M!:qɔ@Z@ 4lunX_ f7_adDMaI:-)oHfT"ˬˬˬˮ:%D)(X34ɤ@F*DЌ W:JK>D| f>ߒWʝJy9 qcR=7ԑm}z)T뵌(h"p26H)$U@[x)i*)Уwo!1_A[;PMZt(ksXjB (z.fJU/matSpB@9PP޾9}c>>^\ *)eTQ-P䶟#SpOSrO] AZG",-Qzl* -od107[5J44%LG:*iMC"f8H\YV5r 4-ܒ5Im`0~#8Z`=͖gܣ[Vt2>K"LXdt0EF˫sCťEFK1*2QF{lwp4J0}s/\uxê [mKw_ X>H$R'{Ć(t/9O;ISU2Fk5]%u]鬜47i}Kh-'!.:\n d!_nSRKN|%TU+7ξ^-:Pw`'닺? ا^Imku2>/ R?|,s3VBK6B_ L aFMlGhҺM`vvFKSu'PyV]OF'=Q֟s_|UGeHbJq^8jYkW|+Y$ "e:"n6@Fg{(%J;=0v֮] ;XىR^P3' mbqlN՚ (&]|и%qZzZ"'T(ߓgqA=>)\Nt_JC)&r,ΕcAN)$gaT%]M1LayFJc&q0''-FuvXa$K٘XJP ce#,͛ l2] ˇ蓓T 8+`^ P_ ɪDL n5!Xs9{3h$Oݭ,FJ' 4pFEԬ!DmgnfaHkɛKBNfu"{s-y$Yl\,&H>FKR3l.OS FI)ǁ|xAѶW :Pa2=@I}G2= ..ӓ~:LOU4E$ߨݱn^XT bT'u 7߁S]$֭ y*S޻֭uB떊Aꤾu; 8-mm҃"[U4IJ'}]OZ7u[*1:{$\|L֭ y*S}Q@_ukj1P~9`Dvˢ8cwY%HSs)NWtQF& ==Bt8 ! 
K\ I׻0g߷ iIز݊:mH꫷.%t_U\/]Q&AWmz{ &:}jAZ3$}QRZf ^\6#DSpPӺz*Ƹx!b/%> u!˦5Wm"U/BKպxBr;+1E81C\x Bި ƞ1qIg.ˉ6H(B5;I\ pwPۘU#)T0 BZ>rB:o2r𵨎,05UhZhuL=uqM5n)LY!&AtO6#R6$a%1a=zϷ&naYQ0\!Ƒz#3sE;28˱"¸' ТXyt yis&l(!F2(#N*Le&8$5!*-5B}"[/&M%sY?̪;k_;~cGm{އD?^rW9(Jwe4XUp\yܘk !z"=d*Sz9Nn7<]i5f+.}$f w_>؆yj :Kh|=i~ÙSD^x|ì<6zD0z@%GLrY0˄~wx_f|8i]J;rc9QLu0/_6ED G=\xU󡔗YMQ]|σxc\!4_fCsk9sq5?g:Rr=$N3^JMԸjz5L3Hi}ѧ8c35v iЌ/ށ1 qre7o"bȰϡO? `[~:>W}TD띧OnS'WYu=hO`p;tbay5~6BǪޤb$ӞIJ8Wsg|#J.rkBk=AG_JgWNLH[ʉ1ЏV5Dl:.sBBYbdݶy:&{pRۤ l3X?LnjX`BX.|ɆZi~i3S 視v君Θ28sK|5).%!ĂjՕOEOR(8`aZ €NPVb<*BuAJ.+:Tc$ ΌW@BYjr@~ ~6V#hYFN \%KWÜ|08f<)juqZ QRrQzCaoЛ_3ˋ>ꮁ*-o{z$Պ o%Ċ5/&h݁5{C>yQ{yk1˝MCsRTsBU/r}`RV+^~[^ky1-$ r:asX~~2`eʌ̺0a$8*\_YP /?~??~I1]|H?D fp?p dɢ-A4.ݯ8J=aa ;kqXÎkgg7JNAK FJ["g`ZzR[6;'LNjb7afW=tϡ/\WF6C \9‡ GCfs94軆ac?r,$!͔ly"!;Gl9ea]_Mg )3G?zx?:jD@;X[oWUӺF< Ghq6ۖclL*WZ׉Ó 2$j4ˊG;'BC,hBB9O6Ϝ˻fK Dx#^5PP/h#߳(^JT9}R``z-,˙`et3 SԒxCfH+RٲP&amXOm3$#:XDxKBBaYFL̉R&)bmNrl%4H)yd'q]{^ cڌyX̡bkB Hx$A H?oF "}.n- UyFkB%@t zPqZJ[WDLF1H/ *Z6yRv%d:\LEA a _o먪A)Xpd$Xv|-o5]~wa6sd ~z /R߱d(CIp{8C4ļf{:@ja=0&won dGP?)w8,(y];ajvZ?(U:6j!5ho2cFᶪ_(4/%lhbj 1 ڢ˜P*?]\J9=LEȋ~ zX@>^fZ\_d/n*?8z8 뫕?|3ϟ:Ϭ~4\އ!r?3.+wh0 2f/t&8$p`\IPpmw1/8Pj\ U81];gguFG*x3ϨߩIΕW7SsEWȓx. 
1 NP]G`ݩ&4^-˗iM Lo P=[sY\O'A\߽yy6BPdӫ!W A(1Z2h'AdyxjkhP/&?o$3rf9HEE&RB^YSV\xX5 TyfcLZ/<%#8>˹-h9F.3KL-_ eu ڋk/ڋzJ&#֡SҋcU BexC$Z,γ1 u#c&au1MNB߭S m"+CG}Q(3Wk fu^cZvu"u>wrFTYYz!bKZX*EjoE>b+1oo UU7s3mtm8+U2-U;v-)yb]P,no]P,}6>TNj%gl={<9[W-A4ﮄbt ӝX-FS%40I~weG4`^,Qb KAI9:,oduY}YYE9Y_DFFE VH`5R.kƒf U*Ek- cúyr|0Xڶ?rNkG\*LڢX@Lպp3._=5Fp;U8tjt5c1]ؾݾf@BIԁF4K!X/I HG,!Z"ٿRom4~va4 6g/z?[amIO*qC.q_ߘ9Ç-߇ {xtW@G:X.\21Ġ N]/[S$ݶҧ`SeR!3J ݭ[|UX6~kڅW~Y>>8xN[dBKSާMX4_.Z ()l-Gr[:Ny7CfߜyWߗׇ,X7"xv #zR rL].&6nn5,䍛hMI6][[)9S.G ޭޭq-)䏭vBb ޭ)}GvģHikwhVؔjX76>E4L9:ILrh&H neTf_{ xD?oKGwctgۛ@%Rb*}+DR0ݷu_^ßjɕȫ.aߜOsZi"LoR=lA8icu37#$HV~|ۤd:DuP= +\RinۄKԁYSZێO{\0\aXA" )=6To~ Rb:Lz]ާI2%X]vs0p$NT6"?<Kއ@8B`̗Ui`϶Ͱ]l ϽԹmU5p0z?^ܽV ,-љnb76y}<2Kr_5Y "fwaWWI)N=?j=X4Q}wQ(s=f_ffoc;0j7uJV}3Ou橶<֙kl"2&rG$1"l>8Ͱ^\v+qRcHH,$|@%Ń\&u²8*i+m|mݟ2g׉_^xZޗ{ mGKk!2󦷍#A8m@9ŕSKo:՞r^ RXdU*Đ/?oK@ύbJla^ P | sDb>сF"#34$!0JJxJ)QQIT G Ӗ)#+Ġ^ޘDsCVZ&O4.XIbCeQ.l+S݊‡f[)2VjX7"˔7OnIbc:ݎ8 )ͻw4׻հ7nm(%'_,tk5.Hs.N"*\b*dHΏWn`P ҶWzEиR!~mŮ*mU-qwYid aUlhLw2)uDu e߮P\]]) oo^Vl#"{Z'XN;ɩ6(e9m}ߴa%l.]BnyU"M~.,1]83\c/?j.E5?MԸd|nVg!j4I)ǣYߪ eYxǤЈ^e垻f5gqpOf:{xĞ;}O{A )DqUE{"]O9>LʻFM?XyK:NEf9"0ߏxS7)ܵ2êDM?Ft8MwÓ^q q9뎓,`L?($onc!VǓlym `jE)Fd*t1DHh`"!-ur2q% ϙbn5asfۺ-w* %& 1aNILDInl:LJx.BLLei6@f+'H̒sY-ыBba1p%"*e$ =p*5S!ja""ehI3XRKZSWAF'ܾ&$H1yh`^qUQRB$&zO&E!"KBKQ`8M q`T4b4ZL-"Fe1`崄o& ^p9< pGB[5a\a !6PO-u.q"f yfZG%X /M1"'&adIJ FFyOMy#0al(0gd-4l#p7*i~;s k_ݹBǻ/su^jkre􃈘#2!)9~q~7.p? w4@|} ' t=~{ß0\sq6ܻn*9xBa9 #q-soD?yFK=s!SmO A+bI9`|cp}v"RdW[ûQ4Z`|L9rWh%A`tZt31igGGsK_)7 PӃsÀ {nC0j39P:X8`Q͢wZzJUDHAQ"c s-XjS (2HCT9\Ӄ+c I?{6Fv``fp7`iJt;ٹKnlKmR$%Ď,*W׊(;E> Q. +58!.f(@-.&g}T @ ޶n^#1Jp duW_x3& 9DeZNVN S*男Z%mݒv{KKž-ibjIQA=,VQC*.\GNy{*Jh#*Oyw#+n1E;P^KکkWc##tL5{J 0n9G`C08Hs56E᏶ɓwt%Vpb18}SD.RREsB`rs 7Jkw,L0fXCU ,+N`RJBn8+%Â@A]jkKm )厓X ci#T$%AaRQc -B^ZEcƩ "I1I@3HL)&v&=Ie$&mNUL2H|/zx:;S9xb݉RBu^0. @ rB(ywEWOogoןI"]~`D]fSf:'~F/rrQĆBRL*Y. 
6V% \bo^Br*% 6q?)ӤArۉ7§aiXBe U_<ʍ68rpMN ׂ [h]\YsfYc5NpOJ"s$ØA%;ؠvSݾ-b8ӁsBQyi!pʚ}zw;vKo+bamko,G_ Pv'bh ]ֹctҦc Zn(n ?[A!(S;QX%u#[>L>L&mCVWf $"K;')P"n_p{)w/DAjBʙ̘lVPNgPm/BgZt)Q] K3.RCdΟT/Aפ?V_KWmj,_\!Ko.y϶Ne?+5Ư/B LҭN TڿbD&}Nu8⍊ 3Oٜϕ?.B0֦+)U29[)>/W; W/Uf ZR J*.uEhQ8>ZJי-*fAp𱺬qaQ$.Һ =R`UZE QD-JgձZcNG,†2;myXL6Jd y(.ӄ]]줂1&D*e9cMwS.ṮX!J.l1 +kKG݉ CPE$Q3܁BX r ( ʉ ] Gxɻf6zfgi/x*5Sv֦YMt͝ѿ6/WGѣ2y ܁ h֫F̕{wZ0јP|2T겤3V=ӕT3JT u{NvuH,ւJbTiBE͋JjʥaEB*B0%:d?$)t?nݪm_Z*]0yUP4)J H`.NmkwҀXZ+%+%kعLsAd)Ť¤,}QD4'V0 įQ#$ed,OhU>L&61S֣I55.Ԉc%Q t?ߴH J"!ŵS!TQ%I841ZV3K :'q1sF1N^ycOO~? enmӷjVw ۶OiI'TqkňT{B]R=E1·⡸)x{Oe`//BS;xm箚of󺸫mDaQ<-cX0t% ۫v1R@k;D6PImhR=@h0zH)P!RҵS-V!)x=ѩ W߉C+OR&>Ϯ}nX]HsFn2 GLsq8jT?}_gx .Ä6S==If͹IQ>>^܊Vr>毦h*g7ľvGS4](A=ˑJ}a-whpVϵF$b"3GI|) űHЊTfָV{X{s^Ve:9Gb΁;qςjF3DHMbhR3]TZcCRs7z+ǔF+!*e02I1c0j\! "fS p\B JnQRb %dZGF)E U_jN)u4?IB1<ulPf! ?: qp H$+dvrm_4YԀi<hQKbwMQi E FG[G郳1ސChQHRUhQKzjٷҍuc$ R(מuXGiE5B~ZU|hY7A9kƓ26ڛj5zHك葌FCizpOvW(T3YֳJT۠3 ( %V3>gc7 ]7䑫nC%IM*軩nNx{7GOߪBq) gL[ |@;sᦉp^I|wIfXϰ ;9#^!Y)9 =ªU?rrA H'g4oAc0f_߾0(JY7dƨfI0x%46VEm1ݡA͔?Ow-Veme"2t'g){`0.Gg{MSGeNuJcpcim ||Tx+؁Lh?^$vL^Xj"KxIy| j+ }`my: oy!5rtH_f HP}@m &Tr4[~{k`mEmOy̩lځ۳[Z۵Qvnh ^Ͼofեy4gmx, t~ݎ2e-V<~MլY9C UʲR =gz]v>,zG`oQoV;kW7~Vo77wWx?$N5Fi±akuWw^Fҁ\ W|w{S=;!վ1W\&#sC $I3.+- AO'|YBA07 L$o{`s6nu D`ƥL "rIc@88%rJ8R|M!sJH.9kmnV}Oϱd/ɇdiSg< <%EwAu)Q&$" gF\;(c2d)C)6_X @w *jbo(HZ({q>_ 6vjl<`\߮BtV+Dl?6QN avQhC ,%0^1$yL0ŽX.nZQ̧d*mTC(7fC &.ߌѰIȇ(s|Xe!vAwU=IC&YB8q '30&I,P$VP%ZHS)x. 
JX V>$rP,U]Bo-Npbc@atXR,}Ċ'ȓ8I(" ƁR1Ƽ6RCQ `BˇXDl4&8`ANyP21#>gAOa]:5S!Fq!^Q#C~BaYk3X XPfkıX8D?TN*@UCڏTb3P= 01r1юc5u9B1X=k"TJM2 S&20Җ--u5Vo!d^DCn/C(8%,6AFdXL4Qwoo?#9 _GR,G%XeFy'?ޟ#m0S\/BW~m˴f鎐9v:P>v"*Q]>fX5) s^1 :^%|z.iuQRmX`R.qh9a׋xUmfJ!q !TPu)% Ry-0N\- )'+EJ]륒pDD2Lݹw#/bX3^/<[qz%2blS s*/gdM3~)8$sZ"BEEBESK_֫.:n^3Wಉo 筓/9KS\t*&V+$ވ ;δW}kV"s?46x0d>uJbD S Bpǀ:oKoX!Iv3x%Pؠ%ͭ٭}3E𙴛syFup@)c+1!eR:<%HraoQ4Ա5;Wmgd}5[6(6'Z*QNUQ|bK*wf|Dg{A/9 MlŸTlu  >?aV# ۾:l3h.6zWBՒ~s=e ceѮ8 ag&# ˰ _~k㐂n>uSU>ZsUT0|_$fB{=h9Ҙ=c{Ղj5`lehE[v_+5o2{W!!ByLEkggd8\滞&dlIRk?7ם˟nFoÏi{QQë4ݴP$Qm[[\8:Fe(t:xA4[(0jt;Q@‡^p\ {\~Uo wMCϘkK{l<d7&B*Y:qjS-4*nJ|ZhxgTgdܫ*>]|/)ںqS@FSЛpM@1fF0#Ck![ r$f0ؑYhnqF)7zӚ`m%TG_XKxV  N5(O&^[n8-qb-L QbŒgFПz#C,Pz=s v11171Hc&y-$9%"N5j;ZʉS =6EauFoBIe-SNcΓIR>9q9ݦۑ/0/4 Lj}t+ Iw.d洛[] NoϔLnhU ELvvn"bPEt~cKi j7vۄ|"L.6$Z]AB԰0HlJB+ZZϟXLɓ iGjSHƻMl߻. G#SU$3Dحu%TVUb.! SŸ #u5Z1x\%V(NS:q#}𣏕(Zsj͂mtV+"k? }Gi\ y)d h< Z;1pLy} %k'G!Boqr_Fw0 0Y~\0ֈl2{SJbȔ%qSn간224M|8mmEW,JF pҾcN5~^ i&va?w{cn.}D8 TsU)^{|.9.˺ĦtJnDKm0_vSZ,l Ĺ=}J(<#/~Mji1O6؂h4gfg) U76|#쿀Vߎ_|ּ_})M~+zVny$[f!Q9v )zz N~nwrX3eSaڳ"XeE+D(?:-g?fs^jr&Z#3R*EYژc86Ts&'ذ8S0!vtPz%._3F\WE,"zLqE4͝8NGhn:mRJꕳ8!KQLkԱ((m>nx']!mTHa(Nr 27yg$ qLLILYUHb|PSn6 E`p!BT% R:x vq% Ht KK 'I-#lj͟ ]5BNd 0E(caܳj082#$\A^"ePCV㙓OD-tl1K7@"AF Rves%Q$ , co k@^J<Mj@CS5eV辰̵A,TH}H Td$ Ժv/CS%L??-N2%&=0mҍ@@OG֖ZPt-_PPKFBٓIL[!LRӎ=F0Eáa6~+˛T`?7<ƘS$T7pUmĞ i|Rl ;ᙷJDűBVKSKv;xa$^L1eu| fU)5ɗ4 ;& !Z#Hz4ˆ9㻿՛~-kEaofԺ%&`xP%Op}#C8=xׄJΪ GW+}Tsɚjw)`9cͪ^0Kq{xJLՊ^rl Ѿ[ڸ2튇*aYm/]}ǚTZ R |kЖCUxq;yQgH4L,~X,۷=>WיSdH$rn̓U2C6'5[ڍ_]}8oI}w"‱qk:rk٢rn->[el#r.u\L'%.P)D} Cc+gټ㗘4K^& F,=U8`4mZպdm^6HZ(`׉i{,Lm\9SiHN58$c_U:8@\=FAj\CyCBha\*o腕t.n 6VolPE @Pu XZP(Y #O(]g2Y=5ː\DȔ1HV[Q N>hŕIDB ڭԌڭ EL kKl݊bEtG.TTʴ[8ڭ ELɭs폺6A #v;)JnůmgneH.FIoG-\H|6p.;K!, mc~+꣩a V1Y $vd}2, Yhؠ$Իi `amJ~L|X>YIa*e)L(Y"O7D( Y* V |X=$^JAK7ٍِ$+crҳ8⺶xWxG@]? 
*%-HV rAg(dy7񘻍[~ovfrW(ru$1|li{b BìTաi7ry=X5x@39l-RП=[Y^"EcƼY\~ĶVX]Sh4φxgGǎz_gmtѸv_߷pUy86Ҁe1&Q>?ϳhϮ~}Oo~n{c==eȏP]zL6YDt`=F|HzraQ+?!=W}|_q2&uAݿ>|8xsQ.^yvuOx?͏~<ߣG|u$z>>BI5x~ַ~XmϪz4iSUojhK&Rl7SGzr<ڲփH/*yE) Nr?;T%uQϳn˳::Q?Ou~UTiӤ7FDI=z"?Dݞ](c"ٌ_bO7Nj 4o5Nh36o؏zþׅwq#ܼ@~%&aQ`;FK= m/;ԁE)zݧi5o‰"}Nؗ`H5&ԓ<߿^+tᣜ҈҂q!L_%>N')^oØ sQ)$xQ8\Dr)]~p:_%hUp EI2HvJNq?͛dC7OrӋ^է&M1f8Nv\}ϪsiLGKW #!\/ڲ f9g= }bOO]e-{34Vۥۺ{ ldrrrmh:DzYÆ*JH꡷'˥>ZAeJҙ>xq 7M[2WR!z6uF. l!9!%pqR-1K!Qo]dDK((rp9`!(q0k3B)u |sT…BD=\|CKpWM,0`/ Qz\naG` w F~'Hf&w8VA2Z5"M} G#d=U=`?hd?o,h'jZ*^}>!#I61lKJ뚋ȦRm{@8"ؓ ͕ vo#;DCtc.A-"%h8ߞ监Yr·'yΛ^bFfYb,W*P[ ГC{|٘UlC8QP >O>~kKm m ŀqaт-",r\`ppl賀.gDXw="rsZ  @Jv2BJ9Y)sEdiS뜇+eQɎ^B*ttFbmD!G!(TR,¦M%O.@ ~+f W2W[dyE"%[І @{sFڧ `:lo W`I_2[5仲!ӳ)*B>^@c)7Q(3 ٍB} FnIQ #ٱ?ѣBX 2X--ŹˠTF\PsdfXW\cx ԼXL鏵Nj :@q)Lo#TC.l7{)C"xsP晲g/1+eE|V"%7rV"%Sm%RR0 b*gfj=VV4+ĶKfVpǴ{n)84 8svO5:T/m1r&8m+m7^W{~[0iBVU}[ٖڸrRQFm|HLad23uL'z]J(z{W{Y+~(;3⠑fIu1F!Ỵg/Ĵl90Pq:\`XU"aalSaŀ$,+)]wv\cu |il`h*7/>ՇO w~I ڤvfhXv]d ڷ[O@Kcm\^rW.窚xNG\za@pq;!/r娴], \erF%:]R nC-A_*xc{> ,W[&yR&ݪNg [_=Xd,"#Q$,vѣ[H(7 ?iMBcTSHԲă7_#X+%6/ϽuI+!XIM !P &W^8 7֮&JS& e ^7!,=M-^S1eLHJgnփqEɫŖ>^KD+wm6^ߵՀ<]KF8p\/6! ĘW˽5{tioMizhLYD[@Sb}LomA}tC"NM!~9 =`EPoɧQtc+DZdv-M}&@d䏻)Э1q?˗DfP5AuMpy1".,vr;Gy Brwͨ]9,<FQcوZĂp`йKqVd͗3% 0Zzh&v7/B%C~<ۍ;㒕3lAb%*`>,v6Xh/'8J yJ%w=[pl[ke˄dkY%R6jX/=ubP@^yXʪzFKFzKuQRRNw*̧>x2k:jVB ~˥"p[PQY sQ}{PG!*:MWojA=׹G&lS=/-ˮ ցmWZ2ۈE^u c:yXC`9J޹}:{e6 ͳWxhVd.FZTZ5׶&s -UZs0a$Dfk6sqϸu|L'iٱ9kEUSBe;sp5>5VKBABuj J/W鑨Qp:Pʽ/Ba.xha*aOwU%̅ 0s=# !k:0 ntdd՟BՒ!ٞuF㙉:9xUb#}ʔ9@e:j ǣ5l(yz)jzrciEuf}<"V'S G %G`pm֭o0BJTFEL]\7T!jRUv`P3 G{bɌ2b)5 ,Zὶ\ZJR:8%Zr'㑫> ݆|+ׁk6$51m|I;o('Gؗ17%x $r %-T IL"˲r6K4:ՂʉN2m0(U\S򐨔 Je~-B鋙8,*^6,?k5ٓJi6Jy&u$g?=S۸J&?sw/&mRf)- t eĩ@ g%;8ߠ{=b˶]VҮO;*m}3D9Ha (m=il|\ajI_\{ w5߿x2,~tLL'iqwO06%SH1ox,F(WAf87Pbad&a^Gi֔?IM_vYNjف1+A+ SR_w#(d' 쎩#0@ĭstÞn#-ޗl<}.?PܶbURu@Af:PKrp A) |zu.7'Ix 'ã7 4ݠ~LM9ywYMT[5*oA2y/q+) uUx׷?Iト=?I˰@wy^ҍr],2>V.??Z\ѺCluU(߻yɡzW0m>A~a/3H3d6˵ҶFH2Χbʀ{#Mc1/0L@mlx&b,"-e-gè xwbv ֶirC_`i5mF--ܷfu͑ckE>#00v6 k̉dН,U;Űa;C KMN|=i[,v? 
UV[W>A' lD< Qb^ ⁴c 37` aB[~ǾE.97~(GncLe"'V9lԍjnoP-괿[k5aM8T8 lq fE0q SIЏi^aC}q"ƤhA@Cqr[E9TEwGqYi+і&ŠkWۺ`:h*\/oh:~ p6qwk&zw= f}کh+Ae p*;`xspy#* +_1b%PiDpex}FW" 2(3JR4o<\ޘ}OE?$\֑BƗ\:@~-L+~0;"dLEtQR 8bbbbvRQ6aFԑKS&45]Cdw|$m3+KUZI*  | H)J)J)J)e8'Y6syHUuĭi+ fsV eXn*5<թyg9y]0pmBvG枪XF1ް ~@w DE~k}]RJ,fb/`|< `m#w+ aȶ$-'Gc+Uژk$oׁ4OojY#iX3MƵ~beIqO|p'k&ɨWO _0Iˢ^JyO<>ǴZLk'<ܳn& 2#*,\k6f[xD2UMZ %4~1vtQ()}[ :U,ӊ[svIlv]":k+Y?kVUǸkusV,E ZJ#D[+u-AY=&dYyZEjXuJэ(_AWɅJ7R OԦiۊmEڶ"mLK8P[|i]1݀H+ʅƮDtC仕7]J.Wm|i֦ZKmr+`JsַQ0w;VEUv;r `$öE h.'1m1 fG`9.=RJY{ @}6D I0瞉=ˣ·S>q,ulžiyۜ3%׺/ SXInMDk1X x*,$ /!VD݁(9mBJ./t*iGS+o+(+ۏ}VBůIX"Z R޲OR*H,@:AU8>?D&Fq3]+ac:{6'n=I0EƿJ0pTD=jkyOܶGJ Cvy^$<;:հki,lt<<=^Hu@<\wu(۴A:H$ 9Dux PK+4{[HɪNv1,'_׺;f&VQO+W:J1b&A۩@vvM&PUy߅b~!ԗ~}7QRJx(GMC-qj6 NDža'NI0je*8Ũ90 FqW&Z`6~ַO;*m}3D9Hf (*l|najI_\ w5߿x2,~to?}ٟx4t܎c萀vC:\p%:ߡÛ~ǁq;/aN! 𹺇OPw j~䭾5QC:^t㓯/{_q8xO[ScãfgGa^b+ߏxUh낡j'wkzt;,_ޤ" Y pݶd uZC8xqfgtg?@S Lh7(pi/i f4`lP.V@*@;!3*a,z3S0E A ) | eXO 'Ix (Gao {s3S;#QoX՚+(o J',+ză ͪK㛴 F|S'KqR.ke@VT=!5S^Wߘ$TH6=Ϡ4Z1wP6fv݉ۺ[#ǝr܀)hwZ"ZȚ1|O>0!v7Mcntr=ウai. 
=drZbYKϞnH[!oZcWZMOR}~\ ~Ṕ\wKP5k ;^.0!q܊B4&(&1r^ zM{ "ͪmFUǺ=[ 1wBSg_+-dP9qsvE3=}sw`H:2˳tp T^t},b{R&yk\%峒•e*AqִRKQsY۔2_+!TK$1HdN MJ6&*"!KbZBGtI Yw;{1w=跧 $xWu `֧UD<4qQ%"dx?}ۭslؼ272k֙.'{̼&;fdGShv6lklӝ3pYxT".3L'G{vZmFեar#Qepvu0 qp?bY%@;4GRkSmHlp g,&ʼn‘XZN9&L"L`!2DPS Wet𾮨ָ@u1r1 eZ͵pv&[}{Vl~3񢷘=WK,]FNPϽ $d쒰,i\ҡ;NOv^exPhO7qYV^$GgvZ`eb& y"ZE&{DlM[U |DjeYxaiVڭ y"LA+|LM:'54 ,wʟJIqqCw%d@̚Qt.ۻN~iʅѭY; ZK9Ne-礼oGDs>98ae8,m*,6K-" -k,I0QbMLXdYcpi)AܮRڂ>wBlKKU VBGR 7qh3 E(&!se1Jthp(-Y.ZP 2N IQS++U/g U/RBĂp0<4"%:ǪMːavd^Y;z}gW`H3]4:s$=ޚuW-NwPMD{Y#k#|~\{|)xSf:#}G/״א+e^?Yp\w`)$*J~(]>MݟXvQeGX@{w ż *"{#{g~Kޑ5ET@m{ej{\]4BXVLcmV<:V ^JTl$d$43敊(mYFaF}ժp<5] ,=֖w-;іIy,-XP!%vn(RN|݆V3o*wvpl:cW%JL>v N4X1%2Q,!0f&aBaiŠaȥ2vg0\!wnH#Mv&ULƚ L J qc(J0G[1&s\D042|I@Pu (/F.B' dNAan7 IK4 ΁-N i𰉘^'@<\4kN= w.:³V ',`>8([Yz^Uv /S2ȿk$(HJŌ,0U}}7亙wf&0V{@pXNs88 щqý7w&32k`ytpOGI ^NƇa~:u8 *Sŧ 7_Wp%m85l秗oﳡr:}&A?xg4wef+~׷xl }{JI',ު9(d/9zYՏ+ Y_?ŞxNN>}<.t=^O/?/Ovzף/VwK~ 7E:&8ƹG@>k'_dk7Yo&8M'LhNt|iܪwwo۳'QxD P)wMx, 2?=EِQs3}o2GlNdqŧ$ގ?$cѥS뉳p9=Gdzh0!5-]rG{-6S6 sa& 3S^~Q_Eօ,0@븟` k'x̅Kg{u8i~pMvOC N뀣*^⅛(>^)~920}:xӏ?y;N`ap&\~ɕ$h}C"pn8r ڍkls|kxs=٫/GW/X}5_"-]<~h|1}=x??7?|m flli6~]űw3ͷ5d}PB1{.l`l&N'=g[y , `mns: ϫmD3TdKsI IHwPF 7H\ruK:E[X6/~i0m9eiS-WQ`( E2@*$$VJ/{K]* &uѫWlSË:#K^A2Ot~ab%(]AFw }:W{+F:\qB ] =yyvt^N;On.+/C1nŃ_fSz.VBPá.Vpv>**$^lq8Tbr>P<_񧜭(5dY Κz]ؾ)Ik،]@/(P3a|eH/v&>x؇Ms2gMRC%UR#Ƙ "2a V,R6n#Xig|D.7M,#@N~,*cK#iF9Ko'.6d XdO"/ yLl%TCx+FU:EZEXGû?O@^zOh/ijK1EH/ͪ'S;:BYn?.̤:xg'yI?oX|S'N4OhEsxÙQ#FDXX 0:6:A:44,AXv~u=zk/Ŗ^P3I@+8AY]ʍo-͜58X'mE*Nv9âP=J6Z/4cMTיQ"T( !5LHSL"+܎/(s$M؊uhKB]?JI1O Y5|4[dc˸b 8b\E"j$t$h0.{[;~4JO?X O E+$gFMi%,GL.7b”egPAgq$!H1R` M*WQՅ+@3$[SgK-\-6XBtVT875E-'qNVj}+DW"$#jh}K"Ts.K /SZr(Z\)OS@+'Z\@L,ӇVbnj!ja 㱫'ce!J0w\bChb<0@XjFQ:3@ : Mz2r K{⥻=^K:@∆a0JckJud(tDy&Qb"jD\ 2*n2XS6DZs8`~+>tbJ#4N(#HŖrݘ8NRk@cuB&P .+$Sj 5b-ZSk-JV[M, a䊗R+  F~2Q䞋 A4@GƎk */RT,n]ɂ׾U}mP:/]1|{yd}4eExq3c`p0INdr&lgah'sѾ <+N΂dpVlXoC]CkAv? 
\QҨTe +s}W[*l8e9 kL`$3nS.@pX$Ԇ6nqApC T.0kc [Njѱ8p|]'G /}jXL#gm/ ?liqXP!7&xUy0*śb;5mIيmꆸl ^3+wJqͬ|jeҍ@7q|*Ɖ0#lLbJ1 BJ,sΦ8$F'&YMf{uЖå$%??&}~\? m֋tef27{>|iɰ76H}|64xMQ+ ]ZEbo߮scu/ltG8_.6P__%*q?gy..חk4\tOKv}?CYrvB:݅U^ [БX@r.cxJW@U7rܨA3S_m$~=\+`Ucl *ePFtD(#&VEYh$5ؓ,W΄2}D 26B8#aݑ<(ӒSz.Uc+;SȓkMTI3qF`'Lʩ o/x`w=4aQq *Jta3]%uǍCnxnE Sr|/}Ot<ĈXyfcخ ъ05ΰC*B5 ah(#xH͕B ԓ QI[i ādjS@CɄ>W @Lp@GU4*M%)b'$0( bTwOc|DR QX7_ BBēQL<0PW *<\#9D,%QoEcyt4Di.O 嵜uz,wӰ*?זR{콺]nxY2YG#vor4b1\\ h4vE!\!LXn\esEߤ_ȪT^gKL%|IE??K7l4 l>%6 Uч,NQ? ^xSRtg+MdݾFe_yv5UR:v?JՌ2_[rᒙՅ9ߩU/lV n@x<$^WJɍ%)AlS{Ws$Gs^T0.wɟl?7.sYk`])upŪJ2 pK{EX5#hFO;DlRJvBȹM+I)i~S&fN4<8KO-r=l{w2oL 7BH?z$Ga1 z]֫])z!r_pB=3X厷I[v@tE F?N)hK)p2~l< ำDS@XLI 83aQP#sS,"6AՇ`G(v;{H DKH)E5N~7eŎ]4~kA"pIPA.X`.9Ca"@0J 'YeרT Rs<%&6cs y@BC%$F!W蠊RE R"b*1 CT76!S/[>fHt CsA 5HWb*s=Prȃ2[pRYX9THH '@3+^ ^V]K deD LxBƄŞ$^9^5 ۀ&N궝'2Q?T6TsGt &}It9FI޼4p)7{m=ViĝƹS滏?~X&$ƔG C."s{v1vrS|o' :L&-1o6S2֗Z/:{\ ?wɤohY>ݘPJof#jqKjf:^Tm߮fO ^5m?yʸnURe i83Gݚ pMF[Q,rC Ԁ%#lN2o+ߋ4P-QYĞ$Pٯ۵nmX.ka\ ߜk~=waāOX2Jf,ˆ#:LW@@ZJ.Ĕ䨝JIM4Q ł)́2}Q0)Xb0 k0QKMvXJI-u3/QQjWP8TG@`P,"8\Q)A,@cCL?m̀E4PN(_΄JsA=nʘm˶.ιEdWbKǭ˒J8w$ߺ{JURvؒuz8ޠYp5I{X @$AѪ<@Ip:MZi)znL:cTIv~4^y\1@iɹD2E C3#f118w-7uSMeCx]1d]fc~ ̊ NEl=O.8 }Ym4{]l"Yc/;a7iN[g-΂pƒ~W*.ZV+̴?YʹwsÁ*E9JRg2 c~>T# 5̣S!(%}%Gv'Ty$= lw? ፑgK /ﴶCRRi&a̓CyOEwiHMCcoͿ 0Q_4El#6b)^^(&{LR~}c/W8Xi΃gOsݱx:.CK&.G4r1BҞ5`χld8 کQB#^/_3}^fcۿ"NX4'(}*tDPFY-5{j0 ڝ&|b !PDBHA]xM rP.w':OsG>S?;UF,p{ kr\4žɺ0D;:ak0&53Z5S-0lꂋb"1KбcгkfRtrýcX}il:b.*ԂɐF0Wȗ̗܏/aB  B4Ih+[km"HKJ aqR 1RJ(0T " VT'oemSZe@S8R9j^9?J w w&27 <;?gn '5ګƧRtdvS5<fz?IvËAz*a + #WDuHq эVFq4'am:|?[ڲVt%Z#m؞LJg-ψ,7Lؔ,TJʐ2:N7li/oL pFk־0WJ.m7$|!(y1. nƟObGj*u6}w9 J;'UΙ% dT|{a gY䄁ƺVWcUcxU#5^Kғ o{brG0$!/ɱ2r޸5+)hXbLz{8=@_D[%tp%V_LaZ۶_C朾E3qN;tz2-5ד~$-ِErv" xG&`eRJ&We(=׷0\AKoNϏ.ˊ#2eձ+\x&p$s_aA0ַ->>00L_=/#O|:wD*m"%RbJ~HvUyTx1zQju|֣5rceK~N.)70$nf1!WHǤ<&sbTrh&o?dQkӊ,0b7Fw<Ҷ~ٶ4ZXGh&TdN%ב߷M~r7ŬC}  ibTT cVT&1EHS Zj(:4(1eRaҩEǘ29j^D1e9!8y:Ք!ӨNc``A*رx! 
RisD`͍Ҝ[T&yC.Z1jE"e1 [0sMRׁۈsIȋY{x.zLP˴1/۶mU˜c,.|L,s?TBteW0.OUln _-|K*cp +Y5ZtPȺ0ira_uܺ[\w|ߝnk{?}ܩ^\cWs7Ox5WEGp@7w;e)n.]:;:Dq㻣֟ cf0p-zx>.[Ey.r8z[$Ѫ_YZ V13;2W^k/nl̇YY-CV[{L BTj-74U.SֺhVkڮ5nGOPXS%bퟲ6SU_~k,p?O%nyY?oBHTkT /A7L i 3$U$UT RI%bKĐng2!wI)~`0an -f>:{[Tc̘n<.oJn67uv_Ѣ]ivwr4U|-=loIĸ@}ag-)`3L: 0|pZ; 9Ղփ@})nA|gA+Ev3Ly|gU޿M]yW_P!iKy thHS:)}~i\ӰvuԈkn$aRO&"iR5Szxdi R9IV{FW NmB[s40M\Y"T"ZrF 1@!D%>L"w-vi֗n 8$Zp(e nf4&t՜_-J0D{l~6Uƚa4u\OLLH%y^7k]B:[F[UegHtov{G,?ޝS:|T$40%Q"\*-Ju謁xB^{cA洸]^<>M?.Sӷ{ |A)HsT0WBGUuh7Pe^P|jJ-6M y*ڃ ~rRլGK"Tli- 5}!$-ME,0!8;dnC=.Z]Q1J"ш)˭AB;66ZRat> 3*zs㋙$J⽯R! |rab8Z0=բm ǧIA #@Tn|7hɣ4)qV<0Ԕ{?&JFe(񣾋G*e߶!J+o}c߻__%с)(#<բ>I qՔtfϙX@T%k] s2KZ}~**YP`'KqR a1w?(oܰE:Op`R)niDXs0p wI΅{NjNH ;7`WC2üH%`F*уvGjF<0!5`naD !c$`)y1/0b;]s&C0]9p2I>ehDTGLI !&ՇэU6/rPUqI e >bm6ˆ;Z0RNE1Lf$< y!5̃+[v"V:)2yviDa(} Cjy*)yM80H3bF61hJi#0y|>2kz4p!Zfx5̇~!\7ZH;jԮ6c5Z{Xjp:e;'a/ޱt?<вy./C{+e^j{at7I5rHB[{3 vډ{[ӬWlg0W&ib[Ї?`7l xÔ_pmkp_rCnA7v˼->MkgU'n^Oxj5¼4l>qn>vV)츯~ˣߝ>rg;~6eJFny|֣G$ cQyD"۹}p"7eK@u ] QtHr7> 7 itNu;o[YѰK.;? T9π7ɏЂr\홑{}0_Uv-zם~7q)O zgw=H#0( ~ B^y mɻ-53U8K_&7/Y`ZR?φjٷ h~~+ON,4VsW6" +|kpdk |m򾜑B9-bm[8 YߎЦNfC -%m.%؛ SFSɜ-jrCɴQ@7Lu>LmP%&/ooy A,P;[j@9V َdb!Qjj<0p1ByRD #umf2h)Ę׷Ɗ 茬D;j #|E؅'p8Cjw:-CN*Vl輍gFgP,I}v0Иw7Z7r`")kW *JW2(@sZ!T&B.3avP/WPyJi͛HQ-4RG0]ǂF1XDc^<)>{7/ȼV{"!I qNYbBGNPK>ǑRZqGGQ2!8f1zgH u0O$,N8f֖F,{̴R\He0PC3,ּ1;jm|(`"NQqB >|H u0 hsC(+NH J" # $1fDEL )Ј1LԨS*}WH u0/V53o"KD94QqDyS;a#>:WՓ50Ƅ%'f"bD(@VLlf>y@6>ZE|Y}Dܱ L"E-ip 72qT(JhH5k::y Mk!Vs=׎LϬB@͕{޶nd-!!Hr۠I﷋ʭk6~GN,?dIGUږ}D =<. ,asaRb?S "CVX>`]AKr$Ep3F`6fW7ZA̹qKIfo5~-n_.EˆO碈}Քzc!fA,ev M.ӬM)%$P֐sr6ǤjZӂTSW,*_VѨT fM Vj5*d*:v䪀G /wsy^e-|%|Sn2a]6>\5>\cs}kX \h10p74s\m5VP)^=k~Wܵ9Y:wVo԰"@Oߨa5 k'f`Bi)AoW?wwW5v+Eo牵 T9R+.b/mڐ]*@$!*ԝ:84ޝsf-? &ϡ^L;s@wUdcpwi`jy?Ujxv6szwT"(Tr i[xȌْf|$zDgFOCMgn@zޚ~aXhccЀ[ ѱZ=?戬d"A,QJ)c ~;-*d\5隒Ę>XCT)ˆȈBZ.*9U(5C^׈J0Fa cRE *ArbD)1D@%wC7v,_@kb|.qHΦÎ8n7' P BoF92`wG}ݘP#\zMfяꋣ^su/<<߾<|]?a//,_X{l06\*\F AeUaȳJÑ&REb{%}%jh uŽݙŁFɽK}0{cwC K%|QF,X9{f cU@lw@#nDOLwj6xMɄ3 ^Gqc;_a Zl,. 
rrM.PU;$[j ^UkB=nog?nZY=V-&/xPfz#װg&P; [y-Gs p<+[ bȳ8A@ujV<9DENƫ G 6o=C7 S}=<~TZ/4;.jy(r3>n"G_9uF işA"Q/QTT> YaI@sb-Hv犚a<)W;TȨ TqY UuV;18mV)I?_J :;91 ӃXig<_Z#;C6@k͋Z\4By9DϿOg?|vz*?M`O9?lyy9%(^s by{(/?9wǿ\~pc'俓Gl JT5M-/zi:~ {0}i+$zRC3 U~~40BYB u55ُ8  h~K~r^m6na2ิ٫豐v<.=j /hT'owmT{Y a/pbcqpķLzA: Lx«'k[sqn@ܯT1z= _L˼l1n]ԙssP,WѤ8kTTB9Wr:Ψv<{q3 Z71ff -}yX%16 ɂ,px8'!jBVB#b1IՠB{39gFbFh4ybJў9}+Q:m#>5.Of/w/W櫛`zlyK] d!N蟷$/Jd$zc %P Ӛ P+Tα\HKQt͎$ō7.ݸVvqt68 zu}nXl(Y%rI Ъ"(p`2#Däv {)cwb>0+m*OMeob,~pAy"V?3-c;o -^S-١n Jwښ߽}!FbvEsv{zGV>>N[DL_k9"wBeg'ο<=[O0![OYM6΂0Ko/-QSEo~.h!746Ãr!B)[]^<;`E=btZla{8;06>YXC‰jmq;UԓKcV |N'jfߩSӴ G+^|ro &}#5HU&ʀWiq7=j{(J/}uX=v#JAH>RN}`-GV[hm65Ga\](E)5`~d\̐6'mY0%} dF`0Mp#Kz:[ݤ$)Qc%@"Q$j#bkݳUrް=[sr+ޤʜxj6-mä;DF^NlߦAkCg^Z4*G^yX'uhɹSTSTlvL`IQ僭N>ؕQPpC?nb2B~ofݠ+#.~FXWڭ*~}h<kC-联` ,2zUJd;D+f݃ʡ9.54)ÒxR@KOa~cMEK{qL`r8l뮍2LDhْrr7\^۳cWmή/iUfZC0W-Qkp)߽s*.{cee}`oF|-};,{X/D/[Uac"qEΤ[ےmYO*cdN1!H8(:UXE4nJJMpgѵyJ!C]cO۬+:mq=:Que> +l9g]ܕ"BhI:lSV;;WsU(rEyò(8Qn(E @i9`,c.*'6: ]3E%OC%!J ׆3 Ś9%BG3:+B>3ILS\OFJ(na8 E e!l`׌bO3LCUj^גq| @kf0.6d1&0@0(IJ!([w&[ +IO9'إl-W\dʥ:[9H};u*qp尒gFU¶ӟy*cROnV v搹͡Vo!IK&05ud5Q$4dvyÒ!/}B0/i\rg\2bxbrb:\B؋ٟծL0_CN;};bbv-dCm`w6s/ko%f٨ TÞ :Aw88?ye@͏Aqv]&JjupqjZ w6x!({6!?&y$ R,!3F/FN]d< Q=L*1:I$jX\Ț\shc!!9[leNvA"-wq$£8ڡ8Z: 3tT;Rd^X'g%i#LEv)y}uJ'kz]Ƭ e fZĉ\h4w݉iRzDDG! 
cl;WYAϙ)a9pE$ Q{4 7mUN\|ත5lYVBw*Go~KRyGZ@1v=4BY?ѧYR]u;Rysa46e9ZmR+S*V߃B{BZgD杽_YxDg5Ik0&5-k"H`eE;feKA(J'-vWT@K$G_L!$LCsd>ajڥ4qQ@}ևj~ ٿI&M(cgC)>=0)"0pۇ8|qe'; x^UE6ok-`a._Gn:B7"DS#1OHhbbbBHD"[l128d^0R$n6@A^ ,O7;:P#ȋ3fJY;ࣝ-Ĝƕ_9Ў`?Jĩ[u΃;*J9.E]X&U\; Qeh4282RbN"qJU$4eʈ[8H+88xg%0%##MTF q\%)Rq%VF!0cXQ˨5"!WjLQப&k?N.sgc]~ Fi?,>v)cϿN*P5P5CEU^]6;ɜ  gxIg^ B%vq'4WJU5DTCm(X+0g"A=dd*NxcR9+IVQvrxzweh)ٹ}S5λ"(pK Dmp{L;MZ0}SԸk#1|1 9C?Ww7e_n:Ng|NM໣5`N2oB%ah::$oz;םR0l290CQI3޷?v"kab44S9_ԋ6pps Ǎ?´}zƟ(8{gNdZOV4_΀^U3q^?U7ӻs|`z`2>hԷod){ ޿Lm+~gF;mܗtدk6A~k`lnnC'a1m,wk}oE캳nooqo.߾}˛wϮi߼{Qzc/}U=ysW4s6U1gϥߦw`۴ne;GuލwC}MvA1e1 kKO au֟4e~  NHcqѱYCdM3Ͽߙ`lJBŻ6 FVSZ˞2s@>F^DgO ?gyu٠՟W%>aEgM,= ;Us쎩KoFۍݥm?/p۠xcl0xrijMA|,PˋkSއ0ƟU/ 'i{w&gƿaHd~u:ƿ֘9' ,n@V*{<7ĂNY,8@0ȸzi 1^b(؇"D[ʘc/Jb0T9{cl$:|bTYmM S؋Z.Tg4J4x1:JI@p܂Y`&]h8Ƀ̑M\ 6-&.U"VĚ{:Fњ!kԞ& A.*aTFfj {# KP0BH(WFL .[VgRT SFha74{}*ط:F62dYG%G|SFTGRqHeaHlĨ!g3UBE0ư7a uNT-ܞw3]5[kFů/WW$m_#~qfXeFfK"Mfʒf=+uvp"u!:maJwVDճ+P2ZP:fMPSXX{r&.Ƽ(#de-2zݞ8t=j ovv}ɌON Gi'E"hڄ^JwEX,E9-:7#rSqݯHy=v\6+gu\ sNYО |Vh+#dV#g'v֐Q;֮\^\x#eNqnUqPutۣ)8gn n'΢x˃7祛[Uac"*3VoI y,:S$O.X1Øﵟ]r"9}D%'@PH^5.nG@t#8&3YC"W1ؤH=ӤE,ÙcwZNIOI@[ٚJᒁ ʾ* CG3 ra:k`|A!BhĴ+%!Bv[۽ lQG h'{7RrAN/Ev v^ p51#ep4wrg1=$-'(Z+ &8v77U:],Z cEVWjaU@0J5Iu/ۛQSlDC]P<1 NJ|$2yćC: `5L^MK9gnpcgiU/ʍHf$DdIvD f%q& y aƜ 6I Soo5DZvxGFXUSc X%0e9E9I28׹4H8 $QD+{S%F1Dbqc^9+ gT :=U OcАo\Et$8+}FvyTZySM[7R`GV|*SV5S߳nq| W.*xK)*}4mݼ;jАo\E)%4k}i.wؑf =IF  :dva:ۅiyΫzW0-B#{EC7w˩|·6GI.p7zuWWl#i zSprOz{BF>oJOOUoY^>uγ򲚌|k ZO+pj23w}|1fK/XRȏ#D h.A^U2ӁИt'e$0*Շd@ss^ n6T-}cTrS\_$>an'ƾ/:*D{aھ쫶ڊNowg魦W/Wodxe[*jݾ1P鼏xHw\~Donf|0}"r_Bw)z"ZYEۤBnpMOjnS_>GsM83.nQ*40ݗT ~?}j6G52qp|_bN?'דɢ$/FʵNb4|{\ҕ MnZbTռ{?sjk{m|OL+y4g_IV_+2Zzcr0[`*0н+q>/&n} \$؟7hfu#eURVXe%^ݍd~ߠQo^Su5Lլ|5-3la=b](JQ0܌6>Xv=ƎhvwhTbݏylGSE͡/eZo9J>fܾ^n6Pk /3W!#"ӖV}]s eVnuv_{-:|>86Y#GH$ Fa}Xu2|2 trx@B@RH bq ۀ2R$9cpIn[%BRg+?ڨ`Jtzc{:%DbP$D֟ {jxV)e!^UJd"tkUfi&#E\͵{zZē(H7t uT5$New3WvѪSs%z)plΝYC$cQ#DqqgK jvP(Kz]U!lZvr.39{jnO;׍C;d/$gu˾^⧸H}= 
Feb 17 16:03:10 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 17 16:03:10 crc restorecon[4671]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 16:03:10 crc restorecon[4671]:
/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c138,c778 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c138,c778 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c476,c820 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c263,c871
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 16:03:10 crc 
restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 
16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 16:03:10 crc 
restorecon[4671]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc 
restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc 
restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb
17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:10 crc restorecon[4671]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:10 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 16:03:11 crc restorecon[4671]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:11 crc restorecon[4671]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:11 crc 
restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:11 crc restorecon[4671]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 16:03:11 crc restorecon[4671]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 16:03:11 crc restorecon[4671]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 17 16:03:11 crc kubenswrapper[4672]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 17 16:03:11 crc kubenswrapper[4672]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 17 16:03:11 crc kubenswrapper[4672]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 17 16:03:11 crc kubenswrapper[4672]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 17 16:03:11 crc kubenswrapper[4672]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 17 16:03:11 crc kubenswrapper[4672]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.687353 4672 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693548 4672 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693609 4672 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693625 4672 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693635 4672 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693645 4672 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693654 4672 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693664 4672 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693672 4672 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693681 4672 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693689 4672 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693697 4672 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693705 4672 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693713 4672 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693723 4672 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693731 4672 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693738 4672 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693749 4672 feature_gate.go:330] unrecognized 
feature gate: VolumeGroupSnapshot Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693756 4672 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693764 4672 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693772 4672 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693780 4672 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693788 4672 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693799 4672 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693810 4672 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693819 4672 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693827 4672 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693835 4672 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693843 4672 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693855 4672 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693864 4672 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693872 4672 feature_gate.go:330] unrecognized feature gate: 
Example Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693880 4672 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693890 4672 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693902 4672 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693912 4672 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693921 4672 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693930 4672 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693939 4672 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693957 4672 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693966 4672 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693975 4672 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693982 4672 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693991 4672 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.693998 4672 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.694007 4672 feature_gate.go:330] unrecognized feature gate: 
VSphereDriverConfiguration Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.694015 4672 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.694023 4672 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.694031 4672 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.694043 4672 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.694051 4672 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.694059 4672 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.694067 4672 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.694075 4672 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.694083 4672 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.694093 4672 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.694102 4672 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.694111 4672 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.694120 4672 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.694129 4672 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 17 16:03:11 
crc kubenswrapper[4672]: W0217 16:03:11.694137 4672 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.694144 4672 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.694153 4672 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.694161 4672 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.694169 4672 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.694177 4672 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.694184 4672 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.694192 4672 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.694199 4672 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.694213 4672 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.694223 4672 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.694232 4672 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694410 4672 flags.go:64] FLAG: --address="0.0.0.0"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694430 4672 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694446 4672 flags.go:64] FLAG: --anonymous-auth="true"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694458 4672 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694470 4672 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694479 4672 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694492 4672 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694503 4672 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694546 4672 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694555 4672 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694566 4672 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694578 4672 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694588 4672 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694597 4672 flags.go:64] FLAG: --cgroup-root=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694606 4672 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694615 4672 flags.go:64] FLAG: --client-ca-file=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694624 4672 flags.go:64] FLAG: --cloud-config=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694633 4672 flags.go:64] FLAG: --cloud-provider=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694644 4672 flags.go:64] FLAG: --cluster-dns="[]"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694656 4672 flags.go:64] FLAG: --cluster-domain=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694665 4672 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694674 4672 flags.go:64] FLAG: --config-dir=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694683 4672 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694693 4672 flags.go:64] FLAG: --container-log-max-files="5"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694705 4672 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694745 4672 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694755 4672 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694765 4672 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694775 4672 flags.go:64] FLAG: --contention-profiling="false"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694785 4672 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694793 4672 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694803 4672 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694812 4672 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694823 4672 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694832 4672 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694841 4672 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694850 4672 flags.go:64] FLAG: --enable-load-reader="false"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694859 4672 flags.go:64] FLAG: --enable-server="true"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694868 4672 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694880 4672 flags.go:64] FLAG: --event-burst="100"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694889 4672 flags.go:64] FLAG: --event-qps="50"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694898 4672 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694908 4672 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694916 4672 flags.go:64] FLAG: --eviction-hard=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694928 4672 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694937 4672 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694948 4672 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694959 4672 flags.go:64] FLAG: --eviction-soft=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694968 4672 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694977 4672 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694987 4672 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.694996 4672 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695005 4672 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695014 4672 flags.go:64] FLAG: --fail-swap-on="true"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695023 4672 flags.go:64] FLAG: --feature-gates=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695035 4672 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695044 4672 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695054 4672 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695064 4672 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695073 4672 flags.go:64] FLAG: --healthz-port="10248"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695082 4672 flags.go:64] FLAG: --help="false"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695092 4672 flags.go:64] FLAG: --hostname-override=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695100 4672 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695109 4672 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695118 4672 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695129 4672 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695137 4672 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695146 4672 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695155 4672 flags.go:64] FLAG: --image-service-endpoint=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695164 4672 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695173 4672 flags.go:64] FLAG: --kube-api-burst="100"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695182 4672 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695191 4672 flags.go:64] FLAG: --kube-api-qps="50"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695200 4672 flags.go:64] FLAG: --kube-reserved=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695209 4672 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695218 4672 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695228 4672 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695236 4672 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695246 4672 flags.go:64] FLAG: --lock-file=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695255 4672 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695264 4672 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695273 4672 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695290 4672 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695300 4672 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695309 4672 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695318 4672 flags.go:64] FLAG: --logging-format="text"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695327 4672 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695337 4672 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695345 4672 flags.go:64] FLAG: --manifest-url=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695354 4672 flags.go:64] FLAG: --manifest-url-header=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695367 4672 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695376 4672 flags.go:64] FLAG: --max-open-files="1000000"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695387 4672 flags.go:64] FLAG: --max-pods="110"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695396 4672 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695405 4672 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695414 4672 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695423 4672 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695432 4672 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695441 4672 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695450 4672 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695473 4672 flags.go:64] FLAG: --node-status-max-images="50"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695482 4672 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695492 4672 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695501 4672 flags.go:64] FLAG: --pod-cidr=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695533 4672 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695576 4672 flags.go:64] FLAG: --pod-manifest-path=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695586 4672 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695595 4672 flags.go:64] FLAG: --pods-per-core="0"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695604 4672 flags.go:64] FLAG: --port="10250"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695613 4672 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695622 4672 flags.go:64] FLAG: --provider-id=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695631 4672 flags.go:64] FLAG: --qos-reserved=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695642 4672 flags.go:64] FLAG: --read-only-port="10255"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695651 4672 flags.go:64] FLAG: --register-node="true"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695660 4672 flags.go:64] FLAG: --register-schedulable="true"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695670 4672 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695685 4672 flags.go:64] FLAG: --registry-burst="10"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695695 4672 flags.go:64] FLAG: --registry-qps="5"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695704 4672 flags.go:64] FLAG: --reserved-cpus=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695714 4672 flags.go:64] FLAG: --reserved-memory=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695726 4672 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695735 4672 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695745 4672 flags.go:64] FLAG: --rotate-certificates="false"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695753 4672 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695762 4672 flags.go:64] FLAG: --runonce="false"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695771 4672 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695780 4672 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695790 4672 flags.go:64] FLAG: --seccomp-default="false"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695799 4672 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695807 4672 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695817 4672 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695826 4672 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695835 4672 flags.go:64] FLAG: --storage-driver-password="root"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695844 4672 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695853 4672 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695862 4672 flags.go:64] FLAG: --storage-driver-user="root"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695871 4672 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695880 4672 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695889 4672 flags.go:64] FLAG: --system-cgroups=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695899 4672 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695915 4672 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695924 4672 flags.go:64] FLAG: --tls-cert-file=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695933 4672 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695946 4672 flags.go:64] FLAG: --tls-min-version=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695955 4672 flags.go:64] FLAG: --tls-private-key-file=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695964 4672 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695973 4672 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695982 4672 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.695992 4672 flags.go:64] FLAG: --v="2"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.696004 4672 flags.go:64] FLAG: --version="false"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.696016 4672 flags.go:64] FLAG: --vmodule=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.696027 4672 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.696036 4672 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699048 4672 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699065 4672 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699075 4672 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699083 4672 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699091 4672 feature_gate.go:330] unrecognized feature gate: Example
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699100 4672 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699108 4672 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699116 4672 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699123 4672 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699132 4672 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699139 4672 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699148 4672 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699155 4672 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699163 4672 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699171 4672 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699178 4672 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699186 4672 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699194 4672 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699201 4672 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699209 4672 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699216 4672 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699224 4672 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699233 4672 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699241 4672 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699249 4672 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699256 4672 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699265 4672 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699272 4672 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699280 4672 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699288 4672 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699295 4672 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699303 4672 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699311 4672 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699320 4672 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699327 4672 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699335 4672 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699344 4672 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699352 4672 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699363 4672 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699373 4672 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699382 4672 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699391 4672 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699400 4672 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699408 4672 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699417 4672 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699425 4672 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699435 4672 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699443 4672 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699451 4672 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699460 4672 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699467 4672 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699475 4672 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699485 4672 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699492 4672 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699500 4672 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699533 4672 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699542 4672 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699550 4672 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699558 4672 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699567 4672 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699576 4672 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699584 4672 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699591 4672 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699599 4672 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699607 4672 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699618 4672 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699627 4672 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699636 4672 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699645 4672 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699655 4672 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.699664 4672 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.699676 4672 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.711095 4672 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.711147 4672 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711242 4672 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711253 4672 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711259 4672 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711266 4672 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711272 4672 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711279 4672 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711284 4672 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711290 4672 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711295 4672 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711300 4672 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711306 4672 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711312 4672 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711317 4672 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711324 4672 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711329 4672 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711335 4672 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711340 4672 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711347 4672 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711355 4672 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711362 4672 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711371 4672 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711377 4672 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711383 4672 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711388 4672 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711394 4672 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711400 4672 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711405 4672 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711410 4672 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711416 4672 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711424 4672 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711429 4672 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711435 4672 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711440 4672 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711445 4672 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711451 4672 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711456 4672 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711461 4672 feature_gate.go:330] unrecognized feature gate: Example
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711466 4672 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711472 4672 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711478 4672 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711484 4672 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711489 4672 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711496 4672 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711503 4672 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711526 4672 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711532 4672 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711537 4672 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711543 4672 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711548 4672 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711553 4672 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711559 4672 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711564 4672 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711569 4672 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711574 4672 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711580 4672 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711587 4672 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711593 4672 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711598 4672 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711604 4672 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711610 4672 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711616 4672 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711621 4672 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711626 4672 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711631 4672 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711636 4672 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711642 4672 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711648 4672 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711653 4672 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711659 4672 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711664 4672 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 
16:03:11.711670 4672 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.711680 4672 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711861 4672 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711870 4672 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711876 4672 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711883 4672 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711889 4672 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711895 4672 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711901 4672 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711906 4672 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711913 4672 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711918 4672 
feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711923 4672 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711928 4672 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711933 4672 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711939 4672 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711944 4672 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711949 4672 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711955 4672 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711960 4672 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711967 4672 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711973 4672 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711978 4672 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711983 4672 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711989 4672 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.711994 4672 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712000 4672 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712007 4672 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712014 4672 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712021 4672 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712027 4672 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712035 4672 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712043 4672 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712049 4672 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712055 4672 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712060 4672 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712066 4672 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712072 4672 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712077 4672 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712082 4672 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712088 4672 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712093 4672 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712099 4672 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712104 4672 
feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712109 4672 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712115 4672 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712120 4672 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712126 4672 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712131 4672 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712136 4672 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712141 4672 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712146 4672 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712151 4672 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712156 4672 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712163 4672 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712169 4672 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712174 4672 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712180 4672 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712185 4672 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712192 4672 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712197 4672 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712203 4672 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712208 4672 feature_gate.go:330] unrecognized feature gate: Example Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712214 4672 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712219 4672 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712225 4672 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712230 4672 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712237 4672 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712244 4672 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712249 4672 feature_gate.go:330] unrecognized feature 
gate: ConsolePluginContentSecurityPolicy Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712255 4672 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712260 4672 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.712265 4672 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.712273 4672 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.714114 4672 server.go:940] "Client rotation is on, will bootstrap in background" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.719472 4672 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.719605 4672 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.722887 4672 server.go:997] "Starting client certificate rotation" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.722923 4672 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.724171 4672 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-11 22:33:37.065550189 +0000 UTC Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.724250 4672 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.754210 4672 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.758043 4672 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 17 16:03:11 crc kubenswrapper[4672]: E0217 16:03:11.759955 4672 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.781453 4672 log.go:25] "Validated CRI v1 runtime API" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.825227 4672 log.go:25] "Validated CRI v1 image API" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.827675 4672 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.833123 4672 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-17-15-58-45-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.833208 4672 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}] Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.860588 4672 manager.go:217] Machine: {Timestamp:2026-02-17 16:03:11.857306196 +0000 UTC m=+0.611394968 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:561271bd-298c-447a-8ba6-beca2786bcfb BootID:793c4034-4ed2-49c9-abb4-00e3faa205d0 Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 
Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:42:09:d8 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:42:09:d8 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:43:d6:44 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:d9:a1:3a Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:a4:3d:2b Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:ae:f1:34 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:86:69:8d:02:6f:04 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:a6:8c:f0:3c:94:19 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 
Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.860907 4672 manager_no_libpfm.go:29] cAdvisor is build 
without cgo and/or libpfm support. Perf event counters are not available. Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.861102 4672 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.861731 4672 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.861965 4672 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.862006 4672 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Q
uantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.862303 4672 topology_manager.go:138] "Creating topology manager with none policy" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.862318 4672 container_manager_linux.go:303] "Creating device plugin manager" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.862912 4672 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.862963 4672 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.863231 4672 state_mem.go:36] "Initialized new in-memory state store" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.863336 4672 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.868500 4672 kubelet.go:418] "Attempting to sync node with API server" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.868546 4672 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.868605 4672 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.868624 4672 kubelet.go:324] "Adding apiserver pod source" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.868640 4672 apiserver.go:42] "Waiting for node sync before watching 
apiserver pods" Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.873738 4672 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.873783 4672 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Feb 17 16:03:11 crc kubenswrapper[4672]: E0217 16:03:11.873875 4672 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Feb 17 16:03:11 crc kubenswrapper[4672]: E0217 16:03:11.873899 4672 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.873922 4672 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.875132 4672 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.876581 4672 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.878322 4672 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.878352 4672 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.878362 4672 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.878373 4672 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.878390 4672 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.878425 4672 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.878434 4672 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.878450 4672 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.878460 4672 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.878470 4672 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.878537 4672 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.878548 4672 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.879616 4672 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.880121 4672 server.go:1280] "Started kubelet"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.881599 4672 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.881584 4672 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 17 16:03:11 crc systemd[1]: Started Kubernetes Kubelet.
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.882597 4672 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.881721 4672 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.884422 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.884582 4672 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.884938 4672 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.884929 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 04:49:05.011146792 +0000 UTC
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.884974 4672 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.884964 4672 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 17 16:03:11 crc kubenswrapper[4672]: E0217 16:03:11.885679 4672 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.885755 4672 server.go:460] "Adding debug handlers to kubelet server"
Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.889543 4672 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.889744 4672 factory.go:55] Registering systemd factory
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.889789 4672 factory.go:221] Registration of the systemd container factory successfully
Feb 17 16:03:11 crc kubenswrapper[4672]: E0217 16:03:11.889723 4672 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.890544 4672 factory.go:153] Registering CRI-O factory
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.890586 4672 factory.go:221] Registration of the crio container factory successfully
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.890743 4672 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.890979 4672 factory.go:103] Registering Raw factory
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.891033 4672 manager.go:1196] Started watching for new ooms in manager
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.893318 4672 manager.go:319] Starting recovery of all containers
Feb 17 16:03:11 crc kubenswrapper[4672]: E0217 16:03:11.898104 4672 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="200ms"
Feb 17 16:03:11 crc kubenswrapper[4672]: E0217 16:03:11.899793 4672 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.46:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1895142853f6c3d8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 16:03:11.880086488 +0000 UTC m=+0.634175230,LastTimestamp:2026-02-17 16:03:11.880086488 +0000 UTC m=+0.634175230,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909189 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909242 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909253 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909265 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909276 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909287 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909297 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909307 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909317 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909332 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909341 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909352 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909363 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909375 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909383 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909394 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909406 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909417 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909427 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909438 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909448 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909459 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909470 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909480 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909491 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909503 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909554 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909568 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909581 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909594 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909605 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909616 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909636 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909648 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909660 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909672 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909683 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909696 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909708 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909720 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909732 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909745 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909757 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909771 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909783 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909822 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909834 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909845 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909859 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909869 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909881 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909893 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909911 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909922 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909933 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909944 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909986 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.909999 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910012 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910023 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910034 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910054 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910063 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910074 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910085 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910120 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910131 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910140 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910150 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910160 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910170 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910180 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910190 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910200 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910211 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910220 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910229 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910241 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910251 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910261 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910270 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910281 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910290 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910300 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910316 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910327 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910339 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910349 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910359 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910369 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910380 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910389 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910399 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910408 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910418 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910427 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910437 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910447 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910457 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910466 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910476 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910487 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910497 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910521 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910537 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910549 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910559 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910571 4672 reconstruct.go:130] "Volume is marked as
uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910588 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910598 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910607 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910618 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910630 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910642 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910651 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910662 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910672 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.910682 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.915252 4672 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.915292 4672 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.915304 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.915315 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.915327 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.915337 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.915348 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.915360 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.915370 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.918614 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.918719 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.918739 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.918759 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.918776 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.918797 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.918813 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.918829 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.918848 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.918868 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.918890 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.918928 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.918951 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.918977 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919000 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919016 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919032 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919052 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919069 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919092 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919108 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919124 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919146 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919163 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919178 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919201 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919223 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919237 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919254 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" 
volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919268 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919290 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919327 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919344 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919360 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919374 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" 
seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919390 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919412 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919432 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919447 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919461 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919475 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919497 4672 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919561 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919579 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919595 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919612 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919627 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919645 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919674 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919698 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919714 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919736 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919750 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919772 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919793 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919808 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919826 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919842 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919857 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919872 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919887 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919902 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919917 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919931 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919946 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919962 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919977 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.919992 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.920008 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.920021 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.920035 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.920049 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" 
seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.920062 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.920077 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.920091 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.920107 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.920121 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.920140 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.920156 4672 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.920170 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.920186 4672 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.920190 4672 manager.go:324] Recovery completed Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.920202 4672 reconstruct.go:97] "Volume reconstruction finished" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.920290 4672 reconciler.go:26] "Reconciler: start to sync state" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.933314 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.935588 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.935642 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.935654 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.936462 4672 cpu_manager.go:225] "Starting CPU 
manager" policy="none" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.936571 4672 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.936659 4672 state_mem.go:36] "Initialized new in-memory state store" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.940240 4672 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.943598 4672 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.943656 4672 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.943691 4672 kubelet.go:2335] "Starting kubelet main sync loop" Feb 17 16:03:11 crc kubenswrapper[4672]: E0217 16:03:11.943750 4672 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 17 16:03:11 crc kubenswrapper[4672]: W0217 16:03:11.944310 4672 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Feb 17 16:03:11 crc kubenswrapper[4672]: E0217 16:03:11.944383 4672 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.952217 4672 policy_none.go:49] "None policy: Start" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.953189 4672 memory_manager.go:170] "Starting memorymanager" 
policy="None" Feb 17 16:03:11 crc kubenswrapper[4672]: I0217 16:03:11.953226 4672 state_mem.go:35] "Initializing new in-memory state store" Feb 17 16:03:11 crc kubenswrapper[4672]: E0217 16:03:11.986167 4672 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.017806 4672 manager.go:334] "Starting Device Plugin manager" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.017870 4672 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.017886 4672 server.go:79] "Starting device plugin registration server" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.018410 4672 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.018441 4672 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.018877 4672 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.018995 4672 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.019006 4672 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 17 16:03:12 crc kubenswrapper[4672]: E0217 16:03:12.028615 4672 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.043889 4672 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.044005 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.045396 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.045439 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.045451 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.045664 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.046157 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.046230 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.046934 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.046988 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.047003 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.047221 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.047330 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.047367 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.047769 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.047824 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.047836 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.048195 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.048257 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.048271 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.048386 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.048428 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.048438 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.048653 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.048805 
4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.048839 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.049621 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.049658 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.049671 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.049711 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.049733 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.049776 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.050715 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.050936 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.050988 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.051597 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.051633 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.051645 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.051879 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.051917 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.053029 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.053060 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.053069 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.053095 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.053114 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.053123 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:12 crc kubenswrapper[4672]: E0217 16:03:12.101077 4672 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="400ms" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.118794 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.120412 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.120468 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.120479 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.120528 4672 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 16:03:12 crc kubenswrapper[4672]: E0217 16:03:12.121123 4672 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.46:6443: connect: connection refused" node="crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.124043 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 
16:03:12.124082 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.124107 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.124124 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.124140 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.124161 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.124182 4672 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.124242 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.124284 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.124365 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.124404 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.124441 4672 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.124476 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.124525 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.124555 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.226284 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.226376 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.226411 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.226443 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.226476 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.226545 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.226577 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.226609 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.226622 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.226635 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.226716 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.226708 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.226646 
4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.226761 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.226748 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.226719 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.226838 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.226781 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.226947 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.226960 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.226910 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.226663 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.227017 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 
16:03:12.227049 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.227078 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.227107 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.227125 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.227201 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.227253 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 
16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.227407 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.321479 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.323200 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.323281 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.323305 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.323348 4672 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 16:03:12 crc kubenswrapper[4672]: E0217 16:03:12.324096 4672 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.46:6443: connect: connection refused" node="crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.390090 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.398360 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.418890 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.439089 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.450262 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 16:03:12 crc kubenswrapper[4672]: W0217 16:03:12.451403 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-a186a8db74ead03e75029a421708b3b4c8775b432d7cc48a3e93cc2e9cd6d72d WatchSource:0}: Error finding container a186a8db74ead03e75029a421708b3b4c8775b432d7cc48a3e93cc2e9cd6d72d: Status 404 returned error can't find the container with id a186a8db74ead03e75029a421708b3b4c8775b432d7cc48a3e93cc2e9cd6d72d Feb 17 16:03:12 crc kubenswrapper[4672]: W0217 16:03:12.453696 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-cffe91466c00d8a217ce2809e00f0e433c91f15591424e49aa299c6556cb1c9e WatchSource:0}: Error finding container cffe91466c00d8a217ce2809e00f0e433c91f15591424e49aa299c6556cb1c9e: Status 404 returned error can't find the container with id cffe91466c00d8a217ce2809e00f0e433c91f15591424e49aa299c6556cb1c9e Feb 17 16:03:12 crc kubenswrapper[4672]: W0217 16:03:12.463779 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-40b30edce25d5b20d9a0b61e1044d4c55123ebf00802ff0914745c51fac61ffd WatchSource:0}: Error finding container 40b30edce25d5b20d9a0b61e1044d4c55123ebf00802ff0914745c51fac61ffd: Status 404 returned error can't find 
the container with id 40b30edce25d5b20d9a0b61e1044d4c55123ebf00802ff0914745c51fac61ffd Feb 17 16:03:12 crc kubenswrapper[4672]: W0217 16:03:12.469918 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-acac080fc1e22a6f440952678017936d164051305e12031154d01a06a4d30248 WatchSource:0}: Error finding container acac080fc1e22a6f440952678017936d164051305e12031154d01a06a4d30248: Status 404 returned error can't find the container with id acac080fc1e22a6f440952678017936d164051305e12031154d01a06a4d30248 Feb 17 16:03:12 crc kubenswrapper[4672]: W0217 16:03:12.473840 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-c0504c61c00a218779fde30ec57bf7286e8005bc3dd3783e1045a66f21237c74 WatchSource:0}: Error finding container c0504c61c00a218779fde30ec57bf7286e8005bc3dd3783e1045a66f21237c74: Status 404 returned error can't find the container with id c0504c61c00a218779fde30ec57bf7286e8005bc3dd3783e1045a66f21237c74 Feb 17 16:03:12 crc kubenswrapper[4672]: E0217 16:03:12.502460 4672 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="800ms" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.724881 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.726298 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.726338 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.726348 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.726371 4672 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 16:03:12 crc kubenswrapper[4672]: E0217 16:03:12.727390 4672 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.46:6443: connect: connection refused" node="crc" Feb 17 16:03:12 crc kubenswrapper[4672]: W0217 16:03:12.781345 4672 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Feb 17 16:03:12 crc kubenswrapper[4672]: E0217 16:03:12.781456 4672 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.883868 4672 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.885929 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 12:03:26.084322115 +0000 UTC Feb 17 16:03:12 crc kubenswrapper[4672]: W0217 16:03:12.945953 4672 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: 
Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Feb 17 16:03:12 crc kubenswrapper[4672]: E0217 16:03:12.946093 4672 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.947599 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c0504c61c00a218779fde30ec57bf7286e8005bc3dd3783e1045a66f21237c74"} Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.948884 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"acac080fc1e22a6f440952678017936d164051305e12031154d01a06a4d30248"} Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.949936 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"40b30edce25d5b20d9a0b61e1044d4c55123ebf00802ff0914745c51fac61ffd"} Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.951243 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"a186a8db74ead03e75029a421708b3b4c8775b432d7cc48a3e93cc2e9cd6d72d"} Feb 17 16:03:12 crc kubenswrapper[4672]: I0217 16:03:12.952176 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cffe91466c00d8a217ce2809e00f0e433c91f15591424e49aa299c6556cb1c9e"} Feb 17 16:03:13 crc kubenswrapper[4672]: W0217 16:03:13.044353 4672 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Feb 17 16:03:13 crc kubenswrapper[4672]: E0217 16:03:13.044432 4672 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Feb 17 16:03:13 crc kubenswrapper[4672]: E0217 16:03:13.304108 4672 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="1.6s" Feb 17 16:03:13 crc kubenswrapper[4672]: W0217 16:03:13.463031 4672 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Feb 17 16:03:13 crc kubenswrapper[4672]: E0217 16:03:13.463172 4672 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Feb 17 16:03:13 
crc kubenswrapper[4672]: I0217 16:03:13.528328 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.529864 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.529918 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.529928 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.529951 4672 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 16:03:13 crc kubenswrapper[4672]: E0217 16:03:13.530350 4672 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.46:6443: connect: connection refused" node="crc" Feb 17 16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.876896 4672 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 17 16:03:13 crc kubenswrapper[4672]: E0217 16:03:13.879106 4672 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Feb 17 16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.884500 4672 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Feb 17 
16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.886621 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 19:38:29.669678824 +0000 UTC Feb 17 16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.956953 4672 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="ff6bae4bfe272b613c05076933d2ffcc4369c52d96e17ee03e2f415c145c6f58" exitCode=0 Feb 17 16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.957098 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.957132 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"ff6bae4bfe272b613c05076933d2ffcc4369c52d96e17ee03e2f415c145c6f58"} Feb 17 16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.958395 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.958470 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.958491 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.959741 4672 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a" exitCode=0 Feb 17 16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.959848 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a"} Feb 17 16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.959897 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.961202 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.961243 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.961262 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.965921 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627"} Feb 17 16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.965994 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672"} Feb 17 16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.966017 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528"} Feb 17 16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.966040 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4"} Feb 17 16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.965936 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.967208 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.967259 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.967277 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.968370 4672 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="807c21eba860dd45d3dcd3a39ced8648f94e884925efe110065621238ad2e6f6" exitCode=0 Feb 17 16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.968551 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.968853 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"807c21eba860dd45d3dcd3a39ced8648f94e884925efe110065621238ad2e6f6"} Feb 17 16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.969738 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.969793 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:13 
crc kubenswrapper[4672]: I0217 16:03:13.969805 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.971828 4672 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea" exitCode=0 Feb 17 16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.971886 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea"} Feb 17 16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.971986 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.973356 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.973443 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.973473 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.981784 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.983021 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.983075 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:13 crc kubenswrapper[4672]: I0217 16:03:13.983095 4672 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:14 crc kubenswrapper[4672]: W0217 16:03:14.675301 4672 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Feb 17 16:03:14 crc kubenswrapper[4672]: E0217 16:03:14.675499 4672 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError" Feb 17 16:03:14 crc kubenswrapper[4672]: I0217 16:03:14.884318 4672 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused Feb 17 16:03:14 crc kubenswrapper[4672]: I0217 16:03:14.887429 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 02:41:10.198133449 +0000 UTC Feb 17 16:03:14 crc kubenswrapper[4672]: E0217 16:03:14.905173 4672 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="3.2s" Feb 17 16:03:14 crc kubenswrapper[4672]: I0217 16:03:14.983843 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"de7b3941c4c057228fded474417203e3aeb95fcc1df8094bde7b35fd223eec22"} Feb 17 16:03:14 crc kubenswrapper[4672]: I0217 16:03:14.983902 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:14 crc kubenswrapper[4672]: I0217 16:03:14.983911 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b9d911c777bbcce655fc6993bdd85da5df16a4402e54b628b839c796f7c784d5"} Feb 17 16:03:14 crc kubenswrapper[4672]: I0217 16:03:14.984036 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"514d00f8857c64df263abe974d69503c1ac4ea7d4c78f57e5826d58208bb79f9"} Feb 17 16:03:14 crc kubenswrapper[4672]: I0217 16:03:14.985020 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:14 crc kubenswrapper[4672]: I0217 16:03:14.985052 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:14 crc kubenswrapper[4672]: I0217 16:03:14.985061 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:14 crc kubenswrapper[4672]: I0217 16:03:14.989223 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018"} Feb 17 16:03:14 crc kubenswrapper[4672]: I0217 16:03:14.989257 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917"} Feb 17 16:03:14 crc kubenswrapper[4672]: I0217 16:03:14.989274 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec"} Feb 17 16:03:14 crc kubenswrapper[4672]: I0217 16:03:14.989287 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb"} Feb 17 16:03:14 crc kubenswrapper[4672]: I0217 16:03:14.992314 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"d18ddfa41dd4d4d96d358a9443339bd93c045a41dade757c2a9602284057c347"} Feb 17 16:03:14 crc kubenswrapper[4672]: I0217 16:03:14.992370 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:14 crc kubenswrapper[4672]: I0217 16:03:14.993257 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:14 crc kubenswrapper[4672]: I0217 16:03:14.993302 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:14 crc kubenswrapper[4672]: I0217 16:03:14.993318 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:14 crc kubenswrapper[4672]: I0217 16:03:14.995414 4672 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" 
containerID="66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439" exitCode=0 Feb 17 16:03:14 crc kubenswrapper[4672]: I0217 16:03:14.995552 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439"} Feb 17 16:03:14 crc kubenswrapper[4672]: I0217 16:03:14.995570 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:14 crc kubenswrapper[4672]: I0217 16:03:14.995630 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:14 crc kubenswrapper[4672]: I0217 16:03:14.996723 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:14 crc kubenswrapper[4672]: I0217 16:03:14.996763 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:14 crc kubenswrapper[4672]: I0217 16:03:14.996776 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:14 crc kubenswrapper[4672]: I0217 16:03:14.996814 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:14 crc kubenswrapper[4672]: I0217 16:03:14.996847 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:14 crc kubenswrapper[4672]: I0217 16:03:14.996887 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:15 crc kubenswrapper[4672]: E0217 16:03:15.029128 4672 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.46:6443: connect: 
connection refused" event="&Event{ObjectMeta:{crc.1895142853f6c3d8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 16:03:11.880086488 +0000 UTC m=+0.634175230,LastTimestamp:2026-02-17 16:03:11.880086488 +0000 UTC m=+0.634175230,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 17 16:03:15 crc kubenswrapper[4672]: I0217 16:03:15.130757 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 16:03:15 crc kubenswrapper[4672]: I0217 16:03:15.132178 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:03:15 crc kubenswrapper[4672]: I0217 16:03:15.132227 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:03:15 crc kubenswrapper[4672]: I0217 16:03:15.132239 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:03:15 crc kubenswrapper[4672]: I0217 16:03:15.132266 4672 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 17 16:03:15 crc kubenswrapper[4672]: E0217 16:03:15.132600 4672 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.46:6443: connect: connection refused" node="crc"
Feb 17 16:03:15 crc kubenswrapper[4672]: W0217 16:03:15.132702 4672 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused
Feb 17 16:03:15 crc kubenswrapper[4672]: E0217 16:03:15.132786 4672 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError"
Feb 17 16:03:15 crc kubenswrapper[4672]: I0217 16:03:15.180854 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 16:03:15 crc kubenswrapper[4672]: W0217 16:03:15.274336 4672 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.46:6443: connect: connection refused
Feb 17 16:03:15 crc kubenswrapper[4672]: E0217 16:03:15.274443 4672 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.46:6443: connect: connection refused" logger="UnhandledError"
Feb 17 16:03:15 crc kubenswrapper[4672]: I0217 16:03:15.888060 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 13:40:41.657880818 +0000 UTC
Feb 17 16:03:16 crc kubenswrapper[4672]: I0217 16:03:16.008872 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 16:03:16 crc kubenswrapper[4672]: I0217 16:03:16.011891 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8"}
Feb 17 16:03:16 crc kubenswrapper[4672]: I0217 16:03:16.012085 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 16:03:16 crc kubenswrapper[4672]: I0217 16:03:16.013555 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:03:16 crc kubenswrapper[4672]: I0217 16:03:16.013630 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:03:16 crc kubenswrapper[4672]: I0217 16:03:16.013653 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:03:16 crc kubenswrapper[4672]: I0217 16:03:16.014882 4672 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d" exitCode=0
Feb 17 16:03:16 crc kubenswrapper[4672]: I0217 16:03:16.015046 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 16:03:16 crc kubenswrapper[4672]: I0217 16:03:16.015050 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d"}
Feb 17 16:03:16 crc kubenswrapper[4672]: I0217 16:03:16.015109 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 16:03:16 crc kubenswrapper[4672]: I0217 16:03:16.015222 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 16:03:16 crc kubenswrapper[4672]: I0217 16:03:16.016691 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:03:16 crc kubenswrapper[4672]: I0217 16:03:16.016713 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:03:16 crc kubenswrapper[4672]: I0217 16:03:16.016733 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:03:16 crc kubenswrapper[4672]: I0217 16:03:16.016771 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:03:16 crc kubenswrapper[4672]: I0217 16:03:16.016774 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:03:16 crc kubenswrapper[4672]: I0217 16:03:16.016801 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:03:16 crc kubenswrapper[4672]: I0217 16:03:16.016875 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:03:16 crc kubenswrapper[4672]: I0217 16:03:16.016918 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:03:16 crc kubenswrapper[4672]: I0217 16:03:16.016939 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:03:16 crc kubenswrapper[4672]: I0217 16:03:16.017089 4672 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 17 16:03:16 crc kubenswrapper[4672]: I0217 16:03:16.017154 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 16:03:16 crc kubenswrapper[4672]: I0217 16:03:16.018384 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:03:16 crc kubenswrapper[4672]: I0217 16:03:16.018430 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:03:16 crc kubenswrapper[4672]: I0217 16:03:16.018489 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:03:16 crc kubenswrapper[4672]: I0217 16:03:16.024162 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 16:03:16 crc kubenswrapper[4672]: I0217 16:03:16.888690 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 11:23:29.460339548 +0000 UTC
Feb 17 16:03:17 crc kubenswrapper[4672]: I0217 16:03:17.021535 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 16:03:17 crc kubenswrapper[4672]: I0217 16:03:17.021581 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd1617c6"}
Feb 17 16:03:17 crc kubenswrapper[4672]: I0217 16:03:17.021662 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6"}
Feb 17 16:03:17 crc kubenswrapper[4672]: I0217 16:03:17.021688 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2"}
Feb 17 16:03:17 crc kubenswrapper[4672]: I0217 16:03:17.021713 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 16:03:17 crc kubenswrapper[4672]: I0217 16:03:17.021825 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 16:03:17 crc kubenswrapper[4672]: I0217 16:03:17.021875 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 16:03:17 crc kubenswrapper[4672]: I0217 16:03:17.022683 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:03:17 crc kubenswrapper[4672]: I0217 16:03:17.022733 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:03:17 crc kubenswrapper[4672]: I0217 16:03:17.022749 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:03:17 crc kubenswrapper[4672]: I0217 16:03:17.023223 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:03:17 crc kubenswrapper[4672]: I0217 16:03:17.023258 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:03:17 crc kubenswrapper[4672]: I0217 16:03:17.023271 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:03:17 crc kubenswrapper[4672]: I0217 16:03:17.775762 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 16:03:17 crc kubenswrapper[4672]: I0217 16:03:17.889324 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 07:40:54.688768089 +0000 UTC
Feb 17 16:03:17 crc kubenswrapper[4672]: I0217 16:03:17.925871 4672 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 17 16:03:18 crc kubenswrapper[4672]: I0217 16:03:18.031280 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e5fe7175715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef"}
Feb 17 16:03:18 crc kubenswrapper[4672]: I0217 16:03:18.031339 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 16:03:18 crc kubenswrapper[4672]: I0217 16:03:18.031404 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 16:03:18 crc kubenswrapper[4672]: I0217 16:03:18.031402 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 16:03:18 crc kubenswrapper[4672]: I0217 16:03:18.031342 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740"}
Feb 17 16:03:18 crc kubenswrapper[4672]: I0217 16:03:18.033217 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:03:18 crc kubenswrapper[4672]: I0217 16:03:18.033245 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:03:18 crc kubenswrapper[4672]: I0217 16:03:18.033273 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:03:18 crc kubenswrapper[4672]: I0217 16:03:18.033285 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:03:18 crc kubenswrapper[4672]: I0217 16:03:18.033334 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:03:18 crc kubenswrapper[4672]: I0217 16:03:18.033291 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:03:18 crc kubenswrapper[4672]: I0217 16:03:18.034591 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:03:18 crc kubenswrapper[4672]: I0217 16:03:18.034651 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:03:18 crc kubenswrapper[4672]: I0217 16:03:18.034674 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:03:18 crc kubenswrapper[4672]: I0217 16:03:18.333048 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 16:03:18 crc kubenswrapper[4672]: I0217 16:03:18.335728 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:03:18 crc kubenswrapper[4672]: I0217 16:03:18.335784 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:03:18 crc kubenswrapper[4672]: I0217 16:03:18.335797 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:03:18 crc kubenswrapper[4672]: I0217 16:03:18.335833 4672 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 17 16:03:18 crc kubenswrapper[4672]: I0217 16:03:18.712450 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 16:03:18 crc kubenswrapper[4672]: I0217 16:03:18.889920 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 23:17:52.093644609 +0000 UTC
Feb 17 16:03:19 crc kubenswrapper[4672]: I0217 16:03:19.033685 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 16:03:19 crc kubenswrapper[4672]: I0217 16:03:19.033771 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 16:03:19 crc kubenswrapper[4672]: I0217 16:03:19.033789 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 16:03:19 crc kubenswrapper[4672]: I0217 16:03:19.035244 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:03:19 crc kubenswrapper[4672]: I0217 16:03:19.035425 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:03:19 crc kubenswrapper[4672]: I0217 16:03:19.035639 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:03:19 crc kubenswrapper[4672]: I0217 16:03:19.035778 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:03:19 crc kubenswrapper[4672]: I0217 16:03:19.035987 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:03:19 crc kubenswrapper[4672]: I0217 16:03:19.035914 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:03:19 crc kubenswrapper[4672]: I0217 16:03:19.036207 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:03:19 crc kubenswrapper[4672]: I0217 16:03:19.035896 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:03:19 crc kubenswrapper[4672]: I0217 16:03:19.036438 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:03:19 crc kubenswrapper[4672]: I0217 16:03:19.218268 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Feb 17 16:03:19 crc kubenswrapper[4672]: I0217 16:03:19.220435 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 17 16:03:19 crc kubenswrapper[4672]: I0217 16:03:19.220780 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 16:03:19 crc kubenswrapper[4672]: I0217 16:03:19.222269 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:03:19 crc kubenswrapper[4672]: I0217 16:03:19.222385 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:03:19 crc kubenswrapper[4672]: I0217 16:03:19.222478 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:03:19 crc kubenswrapper[4672]: I0217 16:03:19.890812 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 03:43:01.882663038 +0000 UTC
Feb 17 16:03:20 crc kubenswrapper[4672]: I0217 16:03:20.037151 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 16:03:20 crc kubenswrapper[4672]: I0217 16:03:20.039031 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:03:20 crc kubenswrapper[4672]: I0217 16:03:20.039142 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:03:20 crc kubenswrapper[4672]: I0217 16:03:20.039165 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:03:20 crc kubenswrapper[4672]: I0217 16:03:20.248905 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 16:03:20 crc kubenswrapper[4672]: I0217 16:03:20.249192 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 16:03:20 crc kubenswrapper[4672]: I0217 16:03:20.250728 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:03:20 crc kubenswrapper[4672]: I0217 16:03:20.250792 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:03:20 crc kubenswrapper[4672]: I0217 16:03:20.250808 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:03:20 crc kubenswrapper[4672]: I0217 16:03:20.891216 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 15:14:00.35804252 +0000 UTC
Feb 17 16:03:21 crc kubenswrapper[4672]: I0217 16:03:21.713301 4672 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 17 16:03:21 crc kubenswrapper[4672]: I0217 16:03:21.713449 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 17 16:03:21 crc kubenswrapper[4672]: I0217 16:03:21.891474 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 22:16:35.560568562 +0000 UTC
Feb 17 16:03:22 crc kubenswrapper[4672]: E0217 16:03:22.028727 4672 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 17 16:03:22 crc kubenswrapper[4672]: I0217 16:03:22.892482 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 21:45:35.853330428 +0000 UTC
Feb 17 16:03:23 crc kubenswrapper[4672]: I0217 16:03:23.490190 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 16:03:23 crc kubenswrapper[4672]: I0217 16:03:23.490358 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 16:03:23 crc kubenswrapper[4672]: I0217 16:03:23.491980 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:03:23 crc kubenswrapper[4672]: I0217 16:03:23.492029 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:03:23 crc kubenswrapper[4672]: I0217 16:03:23.492049 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:03:23 crc kubenswrapper[4672]: I0217 16:03:23.893583 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 19:56:42.266570022 +0000 UTC
Feb 17 16:03:24 crc kubenswrapper[4672]: I0217 16:03:24.184094 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Feb 17 16:03:24 crc kubenswrapper[4672]: I0217 16:03:24.184388 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 16:03:24 crc kubenswrapper[4672]: I0217 16:03:24.186129 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:03:24 crc kubenswrapper[4672]: I0217 16:03:24.186190 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:03:24 crc kubenswrapper[4672]: I0217 16:03:24.186207 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:03:24 crc kubenswrapper[4672]: I0217 16:03:24.894501 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 21:11:40.568034749 +0000 UTC
Feb 17 16:03:25 crc kubenswrapper[4672]: I0217 16:03:25.885495 4672 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Feb 17 16:03:25 crc kubenswrapper[4672]: I0217 16:03:25.894679 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 01:40:09.703480461 +0000 UTC
Feb 17 16:03:26 crc kubenswrapper[4672]: W0217 16:03:26.590831 4672 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Feb 17 16:03:26 crc kubenswrapper[4672]: I0217 16:03:26.591006 4672 trace.go:236] Trace[672185950]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 16:03:16.588) (total time: 10002ms):
Feb 17 16:03:26 crc kubenswrapper[4672]: Trace[672185950]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (16:03:26.590)
Feb 17 16:03:26 crc kubenswrapper[4672]: Trace[672185950]: [10.002178052s] [10.002178052s] END
Feb 17 16:03:26 crc kubenswrapper[4672]: E0217 16:03:26.591046 4672 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Feb 17 16:03:26 crc kubenswrapper[4672]: I0217 16:03:26.620636 4672 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 17 16:03:26 crc kubenswrapper[4672]: I0217 16:03:26.620727 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 17 16:03:26 crc kubenswrapper[4672]: I0217 16:03:26.633006 4672 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 17 16:03:26 crc kubenswrapper[4672]: I0217 16:03:26.633115 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 17 16:03:26 crc kubenswrapper[4672]: I0217 16:03:26.895232 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 01:45:09.883950704 +0000 UTC
Feb 17 16:03:27 crc kubenswrapper[4672]: I0217 16:03:27.896347 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 12:03:12.594969013 +0000 UTC
Feb 17 16:03:28 crc kubenswrapper[4672]: I0217 16:03:28.897022 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 22:31:16.046011749 +0000 UTC
Feb 17 16:03:29 crc kubenswrapper[4672]: I0217 16:03:29.897452 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 11:08:57.384879274 +0000 UTC
Feb 17 16:03:30 crc kubenswrapper[4672]: I0217 16:03:30.256933 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 16:03:30 crc kubenswrapper[4672]: I0217 16:03:30.257143 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 16:03:30 crc kubenswrapper[4672]: I0217 16:03:30.259207 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:03:30 crc kubenswrapper[4672]: I0217 16:03:30.259251 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:03:30 crc kubenswrapper[4672]: I0217 16:03:30.259277 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:03:30 crc kubenswrapper[4672]: I0217 16:03:30.263981 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 16:03:30 crc kubenswrapper[4672]: I0217 16:03:30.898411 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 06:02:52.980901531 +0000 UTC
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.072878 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.073829 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.073880 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.073893 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:03:31 crc kubenswrapper[4672]: E0217 16:03:31.615987 4672 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.617895 4672 trace.go:236] Trace[741867571]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 16:03:20.194) (total time: 11423ms):
Feb 17 16:03:31 crc kubenswrapper[4672]: Trace[741867571]: ---"Objects listed" error: 11423ms (16:03:31.617)
Feb 17 16:03:31 crc kubenswrapper[4672]: Trace[741867571]: [11.423590606s] [11.423590606s] END
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.617937 4672 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.618885 4672 trace.go:236] Trace[1082896134]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 16:03:19.277) (total time: 12341ms):
Feb 17 16:03:31 crc kubenswrapper[4672]: Trace[1082896134]: ---"Objects listed" error: 12341ms (16:03:31.618)
Feb 17 16:03:31 crc kubenswrapper[4672]: Trace[1082896134]: [12.341631133s] [12.341631133s] END
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.618926 4672 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 17 16:03:31 crc kubenswrapper[4672]: E0217 16:03:31.624006 4672 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.625407 4672 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.625929 4672 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.627965 4672 trace.go:236] Trace[1504918402]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 16:03:19.387) (total time: 12240ms):
Feb 17 16:03:31 crc kubenswrapper[4672]: Trace[1504918402]: ---"Objects listed" error: 12239ms (16:03:31.627)
Feb 17 16:03:31 crc kubenswrapper[4672]: Trace[1504918402]: [12.240185351s] [12.240185351s] END
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.628843 4672 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.690272 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.697015 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.698862 4672 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:43292->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.699018 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:43292->192.168.126.11:17697: read: connection reset by peer"
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.701063 4672 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:43298->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.701376 4672 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:43298->192.168.126.11:17697: read: connection reset by peer"
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.702802 4672 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.702867 4672 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.879730 4672 apiserver.go:52] "Watching apiserver"
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.882449 4672 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.882909 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"]
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.883292 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.883468 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 16:03:31 crc kubenswrapper[4672]: E0217 16:03:31.883745 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.883821 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 16:03:31 crc kubenswrapper[4672]: E0217 16:03:31.883986 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.884246 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.884326 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.884338 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 17 16:03:31 crc kubenswrapper[4672]: E0217 16:03:31.884411 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.885970 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.886266 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.886847 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.887027 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.887360 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.888029 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.888305 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.889265 4672 desired_state_of_world_populator.go:154] "Finished populating initial
desired state of world" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.889674 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.890966 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.899380 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 13:19:17.251966078 +0000 UTC Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.920029 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.928182 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.928296 4672 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.928348 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.928394 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.928485 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.928563 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.928895 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" 
(OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.928914 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.928631 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.929053 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.929091 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.929125 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: 
\"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.929149 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.929163 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.929204 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.929247 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.929287 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod 
\"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.929329 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.929448 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.929493 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.929249 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.929451 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). 
InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.929492 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.929624 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.929655 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.929662 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.929699 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.929736 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.929776 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.929811 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.929846 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.929858 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.929884 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.929918 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.930771 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.930781 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.930836 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.931211 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.931545 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.931578 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.931652 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.931713 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.931773 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.931826 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 17 16:03:31 crc 
kubenswrapper[4672]: I0217 16:03:31.931840 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.931852 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.931885 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.931917 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.931941 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.931973 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.932001 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.932004 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.932030 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.932054 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.932080 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.932112 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.932158 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.932187 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.932285 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.932288 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.932318 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.932347 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.932379 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.932413 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.932437 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.932464 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.932473 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.932494 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.932820 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.932967 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.933010 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.933041 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.933070 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.933092 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.933122 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.933151 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.933175 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.933171 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.933204 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.933327 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.933439 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.933491 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.933547 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.933552 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.933627 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.933726 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.933814 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.933874 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.933904 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.933904 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: E0217 16:03:31.934085 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:03:32.433964928 +0000 UTC m=+21.188053710 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.934255 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.934317 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.934389 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.934626 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.934711 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.934948 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.935006 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.934830 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.935941 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.937102 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.937094 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.937263 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.937284 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.937392 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.937661 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.937985 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.938259 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.938337 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.938421 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.939413 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.939376 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"
cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-
operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.939524 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.939564 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.939595 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.939624 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.939646 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.939647 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.939669 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.939800 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.939845 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.939846 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.939871 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.939910 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.939941 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.939964 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.940004 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 
16:03:31.940046 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.940081 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.940109 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.940150 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.940166 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.940192 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.940208 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.940223 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.940237 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.940606 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.940672 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.940925 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.941096 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.941138 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.941160 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.941180 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.941392 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.941419 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.941440 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.941462 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.941483 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.941503 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.941536 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.941557 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.941578 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.941326 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.941572 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.941600 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.941738 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.941784 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.941822 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.941843 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.941853 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.941954 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.941978 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.941998 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.942355 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.942421 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.942890 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.944711 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.944950 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.945052 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.945375 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.945397 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.945417 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.945608 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.945644 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.945781 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.945828 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.943178 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.946491 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.946815 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.947033 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.946834 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.946842 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.946903 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.947166 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.947173 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.947233 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.947265 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.947290 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.947314 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.947334 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.947358 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.947386 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.947408 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.947431 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.947451 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.947472 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.947495 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.947573 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.947606 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.947636 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.947660 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.947682 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.947704 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.947728 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.947753 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.947780 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.947803 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.947828 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.947993 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.948020 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.948044 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.948074 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.948100 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.948152 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.948187 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.948215 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.948240 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.948264 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.948291 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.948315 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.948344 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.948372 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.948404 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.948439 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.948471 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.948568 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.948602 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.948627 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.948652 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.948674 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.948699 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Feb 17
16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.948724 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.948755 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.948780 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.948813 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.948847 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.948880 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: 
\"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.948905 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.948929 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.948958 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.948995 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.949017 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 17 
16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.949037 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.949056 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.949082 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.949104 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.949126 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.949147 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.949169 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.949189 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.949209 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.949237 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.949268 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.949296 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.949324 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.949358 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.949384 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.949412 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.949636 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). 
InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.949726 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.949758 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.949792 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.949823 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.949865 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.949894 4672 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.949923 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.949956 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.949984 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.950018 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.950046 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.950074 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.950102 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.950133 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.950200 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.950243 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.950279 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.950312 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.950429 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.950484 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.950547 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" 
(UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.950590 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.950628 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.950676 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.950717 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:31 crc 
kubenswrapper[4672]: I0217 16:03:31.950763 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.950798 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.950836 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.950910 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.950927 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.950942 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: 
\"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.950957 4672 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.950974 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.950988 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951006 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951024 4672 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951042 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951061 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: 
\"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951080 4672 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951096 4672 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951111 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951123 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951136 4672 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951148 4672 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951161 4672 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 17 
16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951174 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951189 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951202 4672 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951214 4672 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951226 4672 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951239 4672 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951252 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951265 4672 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951278 4672 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951291 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951303 4672 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951316 4672 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951329 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951342 4672 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951356 4672 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951369 4672 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951382 4672 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951394 4672 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951408 4672 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951420 4672 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951433 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951445 4672 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951457 4672 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951469 4672 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951482 4672 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951495 4672 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951585 4672 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951607 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951626 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951642 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951658 4672 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951673 4672 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951688 4672 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951702 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951714 4672 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951727 4672 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951740 4672 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951753 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951767 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951779 4672 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951794 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951807 4672 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951819 4672 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951831 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951844 4672 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951858 4672 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951874 4672 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951892 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951909 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951925 4672 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951944 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951963 4672 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.951980 4672 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.952001 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.952015 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.952027 4672 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.952041 4672 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.952053 4672 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.952065 4672 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:31 crc kubenswrapper[4672]: E0217 16:03:31.952349 4672 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 17 16:03:31 crc kubenswrapper[4672]: E0217 16:03:31.952418 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 16:03:32.452396708 +0000 UTC m=+21.206485450 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.953732 4672 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.954426 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes"
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.954576 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.954699 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.954826 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.955028 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.955327 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.955440 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.956224 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.956272 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.956329 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.956288 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.956869 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.957675 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: E0217 16:03:31.957806 4672 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.957860 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: E0217 16:03:31.957878 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 16:03:32.457863121 +0000 UTC m=+21.211951933 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.958167 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.958648 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.959298 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.959300 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.959352 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.959468 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.942109 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.959684 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.959753 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.942173 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.942268 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.960690 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.960979 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes"
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.961102 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.961180 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.961195 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.961235 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.961487 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.961495 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.961566 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.961990 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.962276 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.962415 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.962498 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.962539 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.962552 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.962789 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.962792 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.963452 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.967416 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.967820 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.967900 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.968164 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.968994 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.967558 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.975432 4672 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.977071 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). 
InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.977332 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.977644 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.978017 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.980569 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.980740 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.981112 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.981185 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.981221 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.981229 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.981244 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.981263 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.981304 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.981436 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.981469 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.981616 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.982290 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: E0217 16:03:31.982346 4672 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 16:03:31 crc kubenswrapper[4672]: E0217 16:03:31.982403 4672 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 16:03:31 crc kubenswrapper[4672]: E0217 16:03:31.982417 4672 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:31 crc kubenswrapper[4672]: E0217 16:03:31.982494 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 16:03:32.482473521 +0000 UTC m=+21.236562253 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.982499 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.982571 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.982633 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.982942 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.983057 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.983307 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.983435 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.983560 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.983650 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.983797 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: E0217 16:03:31.983839 4672 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.983999 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: E0217 16:03:31.984009 4672 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 16:03:31 crc kubenswrapper[4672]: E0217 16:03:31.984031 4672 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.985194 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: E0217 16:03:31.985311 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 16:03:32.484662028 +0000 UTC m=+21.238750990 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.985706 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.986213 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.986695 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.986922 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.986977 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.987069 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.987256 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.987449 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.987718 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.988340 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.988646 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.988665 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.988678 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.988752 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.988928 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.988965 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.989121 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.989139 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.989342 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.990819 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.991205 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.991262 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.991333 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.991401 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.992281 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.992593 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.993247 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.994198 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.994485 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.994573 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.994792 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.994800 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.995080 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.995315 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.995634 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.995701 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.995812 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.996218 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.996337 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.996657 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.996401 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.996476 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.996625 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.996645 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.996677 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.996456 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.996919 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.997681 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:03:31 crc kubenswrapper[4672]: I0217 16:03:31.999643 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:31.999966 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.000373 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.001910 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.002718 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.002885 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.004609 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.005343 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.006979 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" 
path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.007800 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.008368 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.009798 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.010781 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.011711 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.013717 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.013767 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.018659 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.021501 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.027596 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.035146 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.043784 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.052645 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.052765 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.052865 4672 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.052889 4672 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.052904 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.052917 4672 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.052932 4672 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.052946 4672 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.052960 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.052972 4672 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.052985 4672 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.052999 4672 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.053011 4672 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.053024 4672 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.053037 4672 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.053050 4672 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.053063 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.053076 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.053089 4672 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.053102 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.053124 4672 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.053142 4672 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.053156 4672 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.053169 4672 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.053181 4672 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.053194 4672 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.053207 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.053220 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.053233 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.053246 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.053258 4672 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.053270 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.053283 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.053295 4672 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.053316 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.053459 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.053788 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.053801 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.054655 4672 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.054679 4672 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.054693 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.054705 4672 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.054717 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.054730 4672 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.054741 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.054752 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.054764 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.054778 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.054790 4672 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.054802 4672 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.054816 4672 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.054835 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.054847 4672 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.054859 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.054872 4672 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.054884 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.054895 4672 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.054910 4672 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.054923 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.054938 4672 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.054949 4672 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.054963 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.054975 4672 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.054987 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.054999 4672 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055010 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055021 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055032 4672 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055044 4672 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055055 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055068 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055082 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055094 4672 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055107 4672 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055119 4672 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055131 4672 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055142 4672 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055152 4672 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055162 4672 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055173 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055183 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055195 4672 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055205 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055215 4672 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055226 4672 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055251 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055263 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055275 4672 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055286 4672 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055297 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055308 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055319 4672 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055330 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055342 4672 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055355 4672 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055367 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055379 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055391 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055452 4672 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055501 4672 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055538 4672 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055551 4672 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055563 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055576 4672 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055596 4672 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055611 4672 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055624 4672 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055635 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055647 4672 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055658 4672 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055670 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055681 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055690 4672 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055700 4672 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055710 4672 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055719 4672 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055727 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055736 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055744 4672 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055752 4672 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055761 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055771 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.055779 4672 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.063345 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.072198 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.078416 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.080745 4672 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8" exitCode=255 Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.080841 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8"} Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.083152 4672 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 17 16:03:32 crc kubenswrapper[4672]: E0217 16:03:32.086933 4672 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.091316 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\"
:[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/
crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.091793 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.091860 4672 scope.go:117] "RemoveContainer" containerID="6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8" Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.109366 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.154952 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.180397 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.191427 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227d
fca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.208372 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": 
net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.209732 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.220390 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:32 crc kubenswrapper[4672]: W0217 16:03:32.220665 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-ba3735a538d718d60bc4499a30b3f4f1d1e93375fb5dd378b425a163db8c8524 WatchSource:0}: Error finding container ba3735a538d718d60bc4499a30b3f4f1d1e93375fb5dd378b425a163db8c8524: Status 404 returned error can't find the container with id ba3735a538d718d60bc4499a30b3f4f1d1e93375fb5dd378b425a163db8c8524 Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.223899 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.231811 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.240948 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:32 crc kubenswrapper[4672]: W0217 16:03:32.243174 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-3b43da97cb697598c5deee63a71a4701840a7e8a71550a092eae1aaf694aa071 WatchSource:0}: Error finding container 3b43da97cb697598c5deee63a71a4701840a7e8a71550a092eae1aaf694aa071: Status 404 returned error can't find the container with id 
3b43da97cb697598c5deee63a71a4701840a7e8a71550a092eae1aaf694aa071 Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.246319 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.251499 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 16:03:32 crc kubenswrapper[4672]: W0217 16:03:32.265443 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-0c59985af73786b6c182784d60a558feed6cd9b4843c3df7dbd371e26a2e0721 WatchSource:0}: Error finding container 0c59985af73786b6c182784d60a558feed6cd9b4843c3df7dbd371e26a2e0721: Status 404 returned error can't find the container with id 0c59985af73786b6c182784d60a558feed6cd9b4843c3df7dbd371e26a2e0721 Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.462904 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:03:32 crc kubenswrapper[4672]: E0217 
16:03:32.463117 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:03:33.463094328 +0000 UTC m=+22.217183060 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.463210 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.463332 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:32 crc kubenswrapper[4672]: E0217 16:03:32.463475 4672 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 16:03:32 crc kubenswrapper[4672]: E0217 16:03:32.463570 4672 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 16:03:33.46356161 +0000 UTC m=+22.217650342 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 16:03:32 crc kubenswrapper[4672]: E0217 16:03:32.463851 4672 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 16:03:32 crc kubenswrapper[4672]: E0217 16:03:32.463957 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 16:03:33.46393995 +0000 UTC m=+22.218028682 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.564396 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.564636 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:03:32 crc kubenswrapper[4672]: E0217 16:03:32.564654 4672 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 16:03:32 crc kubenswrapper[4672]: E0217 16:03:32.564838 4672 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 16:03:32 crc kubenswrapper[4672]: E0217 16:03:32.564699 4672 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 16:03:32 crc kubenswrapper[4672]: E0217 16:03:32.564990 4672 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 16:03:32 crc kubenswrapper[4672]: E0217 16:03:32.565024 4672 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:32 crc kubenswrapper[4672]: E0217 16:03:32.564902 4672 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:32 crc kubenswrapper[4672]: E0217 16:03:32.565102 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 16:03:33.565080303 +0000 UTC m=+22.319169045 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:32 crc kubenswrapper[4672]: E0217 16:03:32.565283 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-17 16:03:33.565267978 +0000 UTC m=+22.319356710 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.899901 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 01:04:08.906562254 +0000 UTC Feb 17 16:03:32 crc kubenswrapper[4672]: I0217 16:03:32.944344 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:03:32 crc kubenswrapper[4672]: E0217 16:03:32.944469 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.086573 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe"} Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.086668 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141"} Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.086696 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3b43da97cb697598c5deee63a71a4701840a7e8a71550a092eae1aaf694aa071"} Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.088308 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1"} Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.088364 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"ba3735a538d718d60bc4499a30b3f4f1d1e93375fb5dd378b425a163db8c8524"} Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.090493 4672 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.092426 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3"} Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.092765 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.093650 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"0c59985af73786b6c182784d60a558feed6cd9b4843c3df7dbd371e26a2e0721"} Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.102613 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.115200 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.127646 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.143491 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.164726 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.185343 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.207668 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.226113 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.246645 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.268038 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.292133 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.305929 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.320926 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.335224 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.347205 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.360114 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.473049 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:03:33 crc 
kubenswrapper[4672]: I0217 16:03:33.473155 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:33 crc kubenswrapper[4672]: E0217 16:03:33.473230 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:03:35.473204063 +0000 UTC m=+24.227292795 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.473259 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:33 crc kubenswrapper[4672]: E0217 16:03:33.473361 4672 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 16:03:33 crc kubenswrapper[4672]: E0217 
16:03:33.473416 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 16:03:35.473406018 +0000 UTC m=+24.227494750 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 16:03:33 crc kubenswrapper[4672]: E0217 16:03:33.473440 4672 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 16:03:33 crc kubenswrapper[4672]: E0217 16:03:33.473638 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 16:03:35.473608103 +0000 UTC m=+24.227696875 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.574170 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.574494 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:03:33 crc kubenswrapper[4672]: E0217 16:03:33.574349 4672 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 16:03:33 crc kubenswrapper[4672]: E0217 16:03:33.574680 4672 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 16:03:33 crc kubenswrapper[4672]: E0217 16:03:33.574594 4672 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 16:03:33 crc kubenswrapper[4672]: E0217 16:03:33.574751 4672 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 16:03:33 crc kubenswrapper[4672]: E0217 16:03:33.574760 4672 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:33 crc kubenswrapper[4672]: E0217 16:03:33.574813 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 16:03:35.574797378 +0000 UTC m=+24.328886110 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:33 crc kubenswrapper[4672]: E0217 16:03:33.574740 4672 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:33 crc kubenswrapper[4672]: E0217 16:03:33.575009 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-17 16:03:35.574994603 +0000 UTC m=+24.329083335 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.900406 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 15:21:09.738111004 +0000 UTC Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.943997 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.944019 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:03:33 crc kubenswrapper[4672]: E0217 16:03:33.944579 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:03:33 crc kubenswrapper[4672]: E0217 16:03:33.944659 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.948822 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.949535 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.950263 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.950931 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.951969 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.953278 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.953897 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.954845 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.955494 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.956440 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.957063 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.958159 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.958718 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.959241 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.960204 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.960760 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.961741 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.962162 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.962766 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.963948 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.964449 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.965283 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.965951 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.966390 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.967440 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.968445 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.969337 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.969941 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.970847 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.971355 4672 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath 
from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.971521 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.973571 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.974081 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.974524 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.975994 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.977016 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.977834 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.978897 4672 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.979856 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 17 16:03:33 crc kubenswrapper[4672]: I0217 16:03:33.980881 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 17 16:03:34 crc kubenswrapper[4672]: I0217 16:03:34.212138 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 17 16:03:34 crc kubenswrapper[4672]: I0217 16:03:34.225584 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:34Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:34 crc kubenswrapper[4672]: I0217 16:03:34.227610 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 17 16:03:34 crc kubenswrapper[4672]: I0217 16:03:34.230757 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 17 16:03:34 crc kubenswrapper[4672]: I0217 16:03:34.249557 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:34Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:34 crc kubenswrapper[4672]: I0217 16:03:34.264286 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:34Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:34 crc kubenswrapper[4672]: I0217 16:03:34.278212 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:34Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:34 crc kubenswrapper[4672]: I0217 16:03:34.290070 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:34Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:34 crc kubenswrapper[4672]: I0217 16:03:34.304640 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:34Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:34 crc kubenswrapper[4672]: I0217 16:03:34.319872 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:34Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:34 crc kubenswrapper[4672]: I0217 16:03:34.333593 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:34Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:34 crc kubenswrapper[4672]: I0217 16:03:34.346622 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:34Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:34 crc kubenswrapper[4672]: I0217 16:03:34.358766 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:34Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:34 crc kubenswrapper[4672]: I0217 16:03:34.370452 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:34Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:34 crc kubenswrapper[4672]: I0217 16:03:34.387543 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd1617c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe7175715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:34Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:34 crc kubenswrapper[4672]: I0217 16:03:34.400799 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:34Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:34 crc kubenswrapper[4672]: I0217 16:03:34.412457 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:34Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:34 crc kubenswrapper[4672]: I0217 16:03:34.425827 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:34Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:34 crc kubenswrapper[4672]: I0217 16:03:34.438554 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:34Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:34 crc kubenswrapper[4672]: I0217 16:03:34.451143 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:34Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:34 crc kubenswrapper[4672]: I0217 16:03:34.901575 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 21:20:00.075364577 +0000 UTC Feb 17 16:03:34 crc kubenswrapper[4672]: I0217 16:03:34.944213 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:03:34 crc kubenswrapper[4672]: E0217 16:03:34.944729 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:03:35 crc kubenswrapper[4672]: I0217 16:03:35.101756 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d"} Feb 17 16:03:35 crc kubenswrapper[4672]: I0217 16:03:35.127234 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\
\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:35Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:35 crc kubenswrapper[4672]: I0217 16:03:35.151059 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:35Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:35 crc kubenswrapper[4672]: I0217 16:03:35.171471 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:35Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:35 crc kubenswrapper[4672]: I0217 16:03:35.197196 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:35Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:35 crc kubenswrapper[4672]: I0217 16:03:35.219944 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:03:35Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:35 crc kubenswrapper[4672]: I0217 16:03:35.255461 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd1617c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe7175715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:35Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:35 crc kubenswrapper[4672]: I0217 16:03:35.280816 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:35Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:35 crc kubenswrapper[4672]: I0217 16:03:35.299576 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:35Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:35 crc kubenswrapper[4672]: I0217 16:03:35.328332 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:35Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:35 crc kubenswrapper[4672]: I0217 16:03:35.493357 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:03:35 crc kubenswrapper[4672]: I0217 16:03:35.493443 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:35 crc kubenswrapper[4672]: I0217 16:03:35.493568 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:35 crc kubenswrapper[4672]: E0217 16:03:35.493707 4672 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 16:03:35 crc kubenswrapper[4672]: E0217 16:03:35.493785 4672 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 16:03:35 crc kubenswrapper[4672]: E0217 16:03:35.493796 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 16:03:39.49377454 +0000 UTC m=+28.247863302 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 16:03:35 crc kubenswrapper[4672]: E0217 16:03:35.493990 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 16:03:39.493954125 +0000 UTC m=+28.248042907 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 16:03:35 crc kubenswrapper[4672]: E0217 16:03:35.494147 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:03:39.49412422 +0000 UTC m=+28.248213072 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:03:35 crc kubenswrapper[4672]: I0217 16:03:35.595099 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:03:35 crc kubenswrapper[4672]: I0217 16:03:35.595208 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:03:35 crc kubenswrapper[4672]: E0217 16:03:35.595396 4672 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 16:03:35 crc kubenswrapper[4672]: E0217 16:03:35.595432 4672 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 16:03:35 crc kubenswrapper[4672]: E0217 16:03:35.595457 4672 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:35 crc kubenswrapper[4672]: E0217 16:03:35.595396 4672 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 16:03:35 crc kubenswrapper[4672]: E0217 16:03:35.595574 4672 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 16:03:35 crc kubenswrapper[4672]: E0217 16:03:35.595601 4672 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:35 crc kubenswrapper[4672]: E0217 16:03:35.595579 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 16:03:39.595548901 +0000 UTC m=+28.349637673 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:35 crc kubenswrapper[4672]: E0217 16:03:35.595736 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 16:03:39.595680094 +0000 UTC m=+28.349768866 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:35 crc kubenswrapper[4672]: I0217 16:03:35.903194 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 10:03:42.00567022 +0000 UTC Feb 17 16:03:35 crc kubenswrapper[4672]: I0217 16:03:35.944074 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:03:35 crc kubenswrapper[4672]: I0217 16:03:35.944218 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:35 crc kubenswrapper[4672]: E0217 16:03:35.944418 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:03:35 crc kubenswrapper[4672]: E0217 16:03:35.944543 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:03:36 crc kubenswrapper[4672]: I0217 16:03:36.817153 4672 csr.go:261] certificate signing request csr-l9vws is approved, waiting to be issued Feb 17 16:03:36 crc kubenswrapper[4672]: I0217 16:03:36.838623 4672 csr.go:257] certificate signing request csr-l9vws is issued Feb 17 16:03:36 crc kubenswrapper[4672]: I0217 16:03:36.871855 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-2g6fq"] Feb 17 16:03:36 crc kubenswrapper[4672]: I0217 16:03:36.872145 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-vst6k"] Feb 17 16:03:36 crc kubenswrapper[4672]: I0217 16:03:36.872323 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-2g6fq" Feb 17 16:03:36 crc kubenswrapper[4672]: I0217 16:03:36.872408 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-vst6k" Feb 17 16:03:36 crc kubenswrapper[4672]: I0217 16:03:36.876734 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 17 16:03:36 crc kubenswrapper[4672]: I0217 16:03:36.878988 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 17 16:03:36 crc kubenswrapper[4672]: I0217 16:03:36.879232 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 17 16:03:36 crc kubenswrapper[4672]: I0217 16:03:36.879414 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 17 16:03:36 crc kubenswrapper[4672]: I0217 16:03:36.879633 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 17 16:03:36 crc kubenswrapper[4672]: I0217 16:03:36.879786 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 17 16:03:36 crc kubenswrapper[4672]: I0217 16:03:36.880871 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 17 16:03:36 crc kubenswrapper[4672]: I0217 16:03:36.904331 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 06:14:25.534024321 +0000 UTC Feb 17 16:03:36 crc kubenswrapper[4672]: I0217 16:03:36.914421 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:36Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:36 crc kubenswrapper[4672]: I0217 16:03:36.933578 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:36Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:36 crc kubenswrapper[4672]: I0217 16:03:36.944226 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:03:36 crc kubenswrapper[4672]: E0217 16:03:36.944498 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:03:36 crc kubenswrapper[4672]: I0217 16:03:36.951871 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:36Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:36 crc kubenswrapper[4672]: I0217 16:03:36.969613 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:03:36Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:36 crc kubenswrapper[4672]: I0217 16:03:36.981647 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2g6fq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9hsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2g6fq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:36Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:36 crc kubenswrapper[4672]: I0217 16:03:36.999861 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd1617c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe7175715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:36Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.006734 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9hsv\" (UniqueName: \"kubernetes.io/projected/ffeb52c8-e4ea-4211-8265-c0e72f364fcb-kube-api-access-k9hsv\") pod \"node-resolver-2g6fq\" (UID: \"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\") " pod="openshift-dns/node-resolver-2g6fq" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.006763 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bjxx\" (UniqueName: \"kubernetes.io/projected/4a619f2f-0992-4440-ac8c-bc513eaf2cfa-kube-api-access-5bjxx\") pod \"node-ca-vst6k\" (UID: \"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\") " pod="openshift-image-registry/node-ca-vst6k" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.006794 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ffeb52c8-e4ea-4211-8265-c0e72f364fcb-hosts-file\") pod \"node-resolver-2g6fq\" (UID: \"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\") " pod="openshift-dns/node-resolver-2g6fq" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.006809 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serviceca\" (UniqueName: \"kubernetes.io/configmap/4a619f2f-0992-4440-ac8c-bc513eaf2cfa-serviceca\") pod \"node-ca-vst6k\" (UID: \"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\") " pod="openshift-image-registry/node-ca-vst6k" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.006829 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4a619f2f-0992-4440-ac8c-bc513eaf2cfa-host\") pod \"node-ca-vst6k\" (UID: \"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\") " pod="openshift-image-registry/node-ca-vst6k" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.014419 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.024493 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vst6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjxx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vst6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.034689 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.051854 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.074503 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.092946 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.105675 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.107541 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bjxx\" (UniqueName: \"kubernetes.io/projected/4a619f2f-0992-4440-ac8c-bc513eaf2cfa-kube-api-access-5bjxx\") pod \"node-ca-vst6k\" (UID: \"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\") " pod="openshift-image-registry/node-ca-vst6k" Feb 17 16:03:37 crc kubenswrapper[4672]: 
I0217 16:03:37.107635 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9hsv\" (UniqueName: \"kubernetes.io/projected/ffeb52c8-e4ea-4211-8265-c0e72f364fcb-kube-api-access-k9hsv\") pod \"node-resolver-2g6fq\" (UID: \"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\") " pod="openshift-dns/node-resolver-2g6fq" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.107745 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ffeb52c8-e4ea-4211-8265-c0e72f364fcb-hosts-file\") pod \"node-resolver-2g6fq\" (UID: \"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\") " pod="openshift-dns/node-resolver-2g6fq" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.107791 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4a619f2f-0992-4440-ac8c-bc513eaf2cfa-serviceca\") pod \"node-ca-vst6k\" (UID: \"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\") " pod="openshift-image-registry/node-ca-vst6k" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.107824 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4a619f2f-0992-4440-ac8c-bc513eaf2cfa-host\") pod \"node-ca-vst6k\" (UID: \"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\") " pod="openshift-image-registry/node-ca-vst6k" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.107882 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ffeb52c8-e4ea-4211-8265-c0e72f364fcb-hosts-file\") pod \"node-resolver-2g6fq\" (UID: \"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\") " pod="openshift-dns/node-resolver-2g6fq" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.107927 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/4a619f2f-0992-4440-ac8c-bc513eaf2cfa-host\") pod \"node-ca-vst6k\" (UID: \"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\") " pod="openshift-image-registry/node-ca-vst6k" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.110184 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4a619f2f-0992-4440-ac8c-bc513eaf2cfa-serviceca\") pod \"node-ca-vst6k\" (UID: \"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\") " pod="openshift-image-registry/node-ca-vst6k" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.119557 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.122982 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9hsv\" (UniqueName: \"kubernetes.io/projected/ffeb52c8-e4ea-4211-8265-c0e72f364fcb-kube-api-access-k9hsv\") pod \"node-resolver-2g6fq\" (UID: \"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\") " pod="openshift-dns/node-resolver-2g6fq" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.132371 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bjxx\" (UniqueName: \"kubernetes.io/projected/4a619f2f-0992-4440-ac8c-bc513eaf2cfa-kube-api-access-5bjxx\") pod \"node-ca-vst6k\" (UID: \"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\") " pod="openshift-image-registry/node-ca-vst6k" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.133489 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2g6fq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9hsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2g6fq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.155451 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd1617c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe7175715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.177061 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.186482 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-2g6fq" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.188501 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.194170 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-vst6k" Feb 17 16:03:37 crc kubenswrapper[4672]: W0217 16:03:37.197416 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffeb52c8_e4ea_4211_8265_c0e72f364fcb.slice/crio-52eec7679542e5dddb8778877f4ee43e20875500d4e1e56fb9a54cecd1406d7a WatchSource:0}: Error finding container 52eec7679542e5dddb8778877f4ee43e20875500d4e1e56fb9a54cecd1406d7a: Status 404 returned error can't find the container with id 52eec7679542e5dddb8778877f4ee43e20875500d4e1e56fb9a54cecd1406d7a Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.202338 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vst6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjxx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vst6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.221771 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.234720 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.251734 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.631913 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-n84l8"] Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.632622 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-d6dhs"] Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.632812 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-n84l8" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.632861 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-5jjr2"] Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.632952 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.633447 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-5jjr2" Feb 17 16:03:37 crc kubenswrapper[4672]: W0217 16:03:37.635619 4672 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": failed to list *v1.Secret: secrets "machine-config-daemon-dockercfg-r5tcq" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Feb 17 16:03:37 crc kubenswrapper[4672]: E0217 16:03:37.635662 4672 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-r5tcq\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-config-daemon-dockercfg-r5tcq\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.635778 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 17 16:03:37 crc kubenswrapper[4672]: W0217 16:03:37.635966 4672 reflector.go:561] object-"openshift-machine-config-operator"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User 
"system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Feb 17 16:03:37 crc kubenswrapper[4672]: E0217 16:03:37.635991 4672 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.636164 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 17 16:03:37 crc kubenswrapper[4672]: W0217 16:03:37.636353 4672 reflector.go:561] object-"openshift-multus"/"multus-daemon-config": failed to list *v1.ConfigMap: configmaps "multus-daemon-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Feb 17 16:03:37 crc kubenswrapper[4672]: E0217 16:03:37.636379 4672 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-daemon-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"multus-daemon-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 16:03:37 crc kubenswrapper[4672]: W0217 16:03:37.636491 4672 reflector.go:561] object-"openshift-machine-config-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API 
group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Feb 17 16:03:37 crc kubenswrapper[4672]: E0217 16:03:37.636535 4672 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.637003 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 17 16:03:37 crc kubenswrapper[4672]: W0217 16:03:37.637218 4672 reflector.go:561] object-"openshift-multus"/"default-dockercfg-2q5b6": failed to list *v1.Secret: secrets "default-dockercfg-2q5b6" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Feb 17 16:03:37 crc kubenswrapper[4672]: E0217 16:03:37.637243 4672 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-dockercfg-2q5b6\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"default-dockercfg-2q5b6\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.637340 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 17 16:03:37 crc kubenswrapper[4672]: W0217 16:03:37.637585 4672 reflector.go:561] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": failed to 
list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Feb 17 16:03:37 crc kubenswrapper[4672]: E0217 16:03:37.637607 4672 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 16:03:37 crc kubenswrapper[4672]: W0217 16:03:37.637730 4672 reflector.go:561] object-"openshift-machine-config-operator"/"proxy-tls": failed to list *v1.Secret: secrets "proxy-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Feb 17 16:03:37 crc kubenswrapper[4672]: E0217 16:03:37.637754 4672 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"proxy-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.637753 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.659388 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd1617c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe7175715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.673016 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.683106 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.693536 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vst6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjxx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vst6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.707802 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.719115 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.738274 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.754383 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.771711 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.792426 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2g6fq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9hsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2g6fq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.811179 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.814564 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-host-run-k8s-cni-cncf-io\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.814600 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ec1ec84d-96ba-4a95-a24b-c9142495d70d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-n84l8\" (UID: \"ec1ec84d-96ba-4a95-a24b-c9142495d70d\") " pod="openshift-multus/multus-additional-cni-plugins-n84l8" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.814616 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-host-run-netns\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.814632 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-host-var-lib-cni-multus\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.814648 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-multus-cni-dir\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.814662 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-os-release\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.814733 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-host-var-lib-cni-bin\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.814804 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-system-cni-dir\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.814850 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql9k2\" (UniqueName: \"kubernetes.io/projected/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-kube-api-access-ql9k2\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.814920 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/ec1ec84d-96ba-4a95-a24b-c9142495d70d-os-release\") pod \"multus-additional-cni-plugins-n84l8\" (UID: \"ec1ec84d-96ba-4a95-a24b-c9142495d70d\") " pod="openshift-multus/multus-additional-cni-plugins-n84l8" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.814946 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-hostroot\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.814977 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-etc-kubernetes\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.815011 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fa9cd2c6-74a5-4567-a141-be56c668e566-proxy-tls\") pod \"machine-config-daemon-d6dhs\" (UID: \"fa9cd2c6-74a5-4567-a141-be56c668e566\") " pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.815034 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-multus-conf-dir\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.815054 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-host-run-multus-certs\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.815131 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-host-var-lib-kubelet\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.815159 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fa9cd2c6-74a5-4567-a141-be56c668e566-mcd-auth-proxy-config\") pod \"machine-config-daemon-d6dhs\" (UID: \"fa9cd2c6-74a5-4567-a141-be56c668e566\") " pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.815185 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-multus-socket-dir-parent\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.815252 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ec1ec84d-96ba-4a95-a24b-c9142495d70d-system-cni-dir\") pod \"multus-additional-cni-plugins-n84l8\" (UID: \"ec1ec84d-96ba-4a95-a24b-c9142495d70d\") " pod="openshift-multus/multus-additional-cni-plugins-n84l8" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.815277 4672 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ec1ec84d-96ba-4a95-a24b-c9142495d70d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-n84l8\" (UID: \"ec1ec84d-96ba-4a95-a24b-c9142495d70d\") " pod="openshift-multus/multus-additional-cni-plugins-n84l8" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.815294 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ftr2\" (UniqueName: \"kubernetes.io/projected/ec1ec84d-96ba-4a95-a24b-c9142495d70d-kube-api-access-7ftr2\") pod \"multus-additional-cni-plugins-n84l8\" (UID: \"ec1ec84d-96ba-4a95-a24b-c9142495d70d\") " pod="openshift-multus/multus-additional-cni-plugins-n84l8" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.815315 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-cnibin\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.815330 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-multus-daemon-config\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.815347 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fa9cd2c6-74a5-4567-a141-be56c668e566-rootfs\") pod \"machine-config-daemon-d6dhs\" (UID: \"fa9cd2c6-74a5-4567-a141-be56c668e566\") " pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" Feb 17 16:03:37 crc 
kubenswrapper[4672]: I0217 16:03:37.815363 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl6qq\" (UniqueName: \"kubernetes.io/projected/fa9cd2c6-74a5-4567-a141-be56c668e566-kube-api-access-kl6qq\") pod \"machine-config-daemon-d6dhs\" (UID: \"fa9cd2c6-74a5-4567-a141-be56c668e566\") " pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.815429 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ec1ec84d-96ba-4a95-a24b-c9142495d70d-cni-binary-copy\") pod \"multus-additional-cni-plugins-n84l8\" (UID: \"ec1ec84d-96ba-4a95-a24b-c9142495d70d\") " pod="openshift-multus/multus-additional-cni-plugins-n84l8" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.815474 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ec1ec84d-96ba-4a95-a24b-c9142495d70d-cnibin\") pod \"multus-additional-cni-plugins-n84l8\" (UID: \"ec1ec84d-96ba-4a95-a24b-c9142495d70d\") " pod="openshift-multus/multus-additional-cni-plugins-n84l8" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.815519 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-cni-binary-copy\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.837352 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1ec84d-96ba-4a95-a24b-c9142495d70d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n84l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.840466 4672 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-17 15:58:36 +0000 UTC, rotation deadline is 2026-12-16 01:30:44.560223924 +0000 UTC
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.840553 4672 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7233h27m6.719676809s for next certificate rotation
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.884853 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.905313 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 22:37:51.747740732 +0000 UTC
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.910581 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1ec84d-96ba-4a95-a24b-c9142495d70d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n84l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.916117 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-host-run-k8s-cni-cncf-io\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.916157 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ec1ec84d-96ba-4a95-a24b-c9142495d70d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-n84l8\" (UID: \"ec1ec84d-96ba-4a95-a24b-c9142495d70d\") " pod="openshift-multus/multus-additional-cni-plugins-n84l8"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.916177 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-host-run-netns\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.916198 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-host-var-lib-cni-multus\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.916225 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-multus-cni-dir\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.916242 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-os-release\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.916256 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-host-var-lib-cni-bin\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.916282 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-system-cni-dir\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.916282 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-host-run-k8s-cni-cncf-io\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.916346 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-host-run-netns\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.916381 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-host-var-lib-cni-multus\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.916430 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-multus-cni-dir\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.916598 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-os-release\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.916299 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql9k2\" (UniqueName: \"kubernetes.io/projected/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-kube-api-access-ql9k2\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.916623 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-host-var-lib-cni-bin\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.916670 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ec1ec84d-96ba-4a95-a24b-c9142495d70d-os-release\") pod \"multus-additional-cni-plugins-n84l8\" (UID: \"ec1ec84d-96ba-4a95-a24b-c9142495d70d\") " pod="openshift-multus/multus-additional-cni-plugins-n84l8"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.916700 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-hostroot\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.916718 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-system-cni-dir\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.916728 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-etc-kubernetes\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.916755 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fa9cd2c6-74a5-4567-a141-be56c668e566-proxy-tls\") pod \"machine-config-daemon-d6dhs\" (UID: \"fa9cd2c6-74a5-4567-a141-be56c668e566\") " pod="openshift-machine-config-operator/machine-config-daemon-d6dhs"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.916778 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-hostroot\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.916779 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-multus-conf-dir\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.916808 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-host-run-multus-certs\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.916811 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-multus-conf-dir\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.916831 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-host-var-lib-kubelet\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.916850 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fa9cd2c6-74a5-4567-a141-be56c668e566-mcd-auth-proxy-config\") pod \"machine-config-daemon-d6dhs\" (UID: \"fa9cd2c6-74a5-4567-a141-be56c668e566\") " pod="openshift-machine-config-operator/machine-config-daemon-d6dhs"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.916855 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-etc-kubernetes\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.916868 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-multus-socket-dir-parent\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.916894 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ec1ec84d-96ba-4a95-a24b-c9142495d70d-system-cni-dir\") pod \"multus-additional-cni-plugins-n84l8\" (UID: \"ec1ec84d-96ba-4a95-a24b-c9142495d70d\") " pod="openshift-multus/multus-additional-cni-plugins-n84l8"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.916913 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ec1ec84d-96ba-4a95-a24b-c9142495d70d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-n84l8\" (UID: \"ec1ec84d-96ba-4a95-a24b-c9142495d70d\") " pod="openshift-multus/multus-additional-cni-plugins-n84l8"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.916929 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ftr2\" (UniqueName: \"kubernetes.io/projected/ec1ec84d-96ba-4a95-a24b-c9142495d70d-kube-api-access-7ftr2\") pod \"multus-additional-cni-plugins-n84l8\" (UID: \"ec1ec84d-96ba-4a95-a24b-c9142495d70d\") " pod="openshift-multus/multus-additional-cni-plugins-n84l8"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.916939 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ec1ec84d-96ba-4a95-a24b-c9142495d70d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-n84l8\" (UID: \"ec1ec84d-96ba-4a95-a24b-c9142495d70d\") " pod="openshift-multus/multus-additional-cni-plugins-n84l8"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.916970 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ec1ec84d-96ba-4a95-a24b-c9142495d70d-system-cni-dir\") pod \"multus-additional-cni-plugins-n84l8\" (UID: \"ec1ec84d-96ba-4a95-a24b-c9142495d70d\") " pod="openshift-multus/multus-additional-cni-plugins-n84l8"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.916945 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-cnibin\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.916983 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-multus-socket-dir-parent\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.916998 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-multus-daemon-config\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.917015 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-host-run-multus-certs\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.917017 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fa9cd2c6-74a5-4567-a141-be56c668e566-rootfs\") pod \"machine-config-daemon-d6dhs\" (UID: \"fa9cd2c6-74a5-4567-a141-be56c668e566\") " pod="openshift-machine-config-operator/machine-config-daemon-d6dhs"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.917034 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fa9cd2c6-74a5-4567-a141-be56c668e566-rootfs\") pod \"machine-config-daemon-d6dhs\" (UID: \"fa9cd2c6-74a5-4567-a141-be56c668e566\") " pod="openshift-machine-config-operator/machine-config-daemon-d6dhs"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.917033 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-host-var-lib-kubelet\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.917055 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl6qq\" (UniqueName: \"kubernetes.io/projected/fa9cd2c6-74a5-4567-a141-be56c668e566-kube-api-access-kl6qq\") pod \"machine-config-daemon-d6dhs\" (UID: \"fa9cd2c6-74a5-4567-a141-be56c668e566\") " pod="openshift-machine-config-operator/machine-config-daemon-d6dhs"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.917099 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ec1ec84d-96ba-4a95-a24b-c9142495d70d-cni-binary-copy\") pod \"multus-additional-cni-plugins-n84l8\" (UID: \"ec1ec84d-96ba-4a95-a24b-c9142495d70d\") " pod="openshift-multus/multus-additional-cni-plugins-n84l8"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.916757 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ec1ec84d-96ba-4a95-a24b-c9142495d70d-os-release\") pod \"multus-additional-cni-plugins-n84l8\" (UID: \"ec1ec84d-96ba-4a95-a24b-c9142495d70d\") " pod="openshift-multus/multus-additional-cni-plugins-n84l8"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.917134 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ec1ec84d-96ba-4a95-a24b-c9142495d70d-cnibin\") pod \"multus-additional-cni-plugins-n84l8\" (UID: \"ec1ec84d-96ba-4a95-a24b-c9142495d70d\") " pod="openshift-multus/multus-additional-cni-plugins-n84l8"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.916985 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-cnibin\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.917162 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-cni-binary-copy\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.917403 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ec1ec84d-96ba-4a95-a24b-c9142495d70d-cnibin\") pod \"multus-additional-cni-plugins-n84l8\" (UID: \"ec1ec84d-96ba-4a95-a24b-c9142495d70d\") " pod="openshift-multus/multus-additional-cni-plugins-n84l8"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.917621 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ec1ec84d-96ba-4a95-a24b-c9142495d70d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-n84l8\" (UID: \"ec1ec84d-96ba-4a95-a24b-c9142495d70d\") " pod="openshift-multus/multus-additional-cni-plugins-n84l8"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.917808 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ec1ec84d-96ba-4a95-a24b-c9142495d70d-cni-binary-copy\") pod \"multus-additional-cni-plugins-n84l8\" (UID: \"ec1ec84d-96ba-4a95-a24b-c9142495d70d\") " pod="openshift-multus/multus-additional-cni-plugins-n84l8"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.917885 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-cni-binary-copy\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.934871 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.939175 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql9k2\" (UniqueName: \"kubernetes.io/projected/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-kube-api-access-ql9k2\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.944010 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 16:03:37 crc kubenswrapper[4672]: E0217 16:03:37.944130 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.944018 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 16:03:37 crc kubenswrapper[4672]: E0217 16:03:37.944445 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.950541 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vst6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjxx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vst6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.955577 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ftr2\" (UniqueName: \"kubernetes.io/projected/ec1ec84d-96ba-4a95-a24b-c9142495d70d-kube-api-access-7ftr2\") pod \"multus-additional-cni-plugins-n84l8\" (UID: \"ec1ec84d-96ba-4a95-a24b-c9142495d70d\") " pod="openshift-multus/multus-additional-cni-plugins-n84l8" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.973356 4672 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd16
17c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe7175715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b
088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.984905 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:37 crc kubenswrapper[4672]: I0217 16:03:37.995355 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5jjr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql9k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jjr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:37Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.008343 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4f9wc"] Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.009106 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.010585 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.010995 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.011033 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.011046 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.011161 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.011270 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.011277 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.014920 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.024765 4672 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.025943 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.025972 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.025981 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.026038 4672 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.026785 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.031099 4672 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.031384 4672 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.032268 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.032288 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.032295 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.032306 4672 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.032315 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:38Z","lastTransitionTime":"2026-02-17T16:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.034695 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cd2c6-74a5-4567-a141-be56c668e566\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d6dhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.043304 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2g6fq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9hsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2g6fq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: E0217 16:03:38.048430 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"793c4034-4ed2-49c9-abb4-00e3faa205d0\\\",\\\"systemUUID\\\":\\\"561271bd-298c-447a-8ba6-beca2786bcfb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.052744 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.052766 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.052774 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.052786 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.052797 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:38Z","lastTransitionTime":"2026-02-17T16:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.058527 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: E0217 16:03:38.064427 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"793c4034-4ed2-49c9-abb4-00e3faa205d0\\\",\\\"systemUUID\\\":\\\"561271bd-298c-447a-8ba6-beca2786bcfb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.067033 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.067056 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.067065 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.067076 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.067086 4672 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:38Z","lastTransitionTime":"2026-02-17T16:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.069232 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: E0217 16:03:38.077888 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"793c4034-4ed2-49c9-abb4-00e3faa205d0\\\",\\\"systemUUID\\\":\\\"561271bd-298c-447a-8ba6-beca2786bcfb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.080612 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.080635 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.080643 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.080654 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.080662 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:38Z","lastTransitionTime":"2026-02-17T16:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.081814 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: E0217 16:03:38.094488 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"793c4034-4ed2-49c9-abb4-00e3faa205d0\\\",\\\"systemUUID\\\":\\\"561271bd-298c-447a-8ba6-beca2786bcfb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.097535 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.097575 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.097589 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.097603 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.097615 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:38Z","lastTransitionTime":"2026-02-17T16:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.101288 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd1617c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe7175715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.111169 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-2g6fq" event={"ID":"ffeb52c8-e4ea-4211-8265-c0e72f364fcb","Type":"ContainerStarted","Data":"e5ede7ba7694732d9f2cedbd2457c3ab638e067106bc5a3c6415f1dd70c86a72"} Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.111209 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-2g6fq" event={"ID":"ffeb52c8-e4ea-4211-8265-c0e72f364fcb","Type":"ContainerStarted","Data":"52eec7679542e5dddb8778877f4ee43e20875500d4e1e56fb9a54cecd1406d7a"} Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.112495 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-vst6k" event={"ID":"4a619f2f-0992-4440-ac8c-bc513eaf2cfa","Type":"ContainerStarted","Data":"9d4bb48bf3275028f344bc73ea59e23721f24ba646e485b99181dce129096003"} Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.112564 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-vst6k" 
event={"ID":"4a619f2f-0992-4440-ac8c-bc513eaf2cfa","Type":"ContainerStarted","Data":"05ead568ebcbcf03076464d158e1ab26a322e750f59fa554bf64a72fef9d7e8b"} Feb 17 16:03:38 crc kubenswrapper[4672]: E0217 16:03:38.114053 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"793c4034-4ed2-49c9-abb4-00e3faa205d0\\\",\\\"systemUUID\\\":\\\"561271bd-298c-447a-8ba6-beca2786bcfb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: E0217 16:03:38.114245 4672 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.117846 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-run-openvswitch\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.117996 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-var-lib-openvswitch\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.118087 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/98a910a1-b5f0-4f34-9d76-6474c753e8e7-ovnkube-script-lib\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.118194 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-slash\") pod 
\"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.118282 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/98a910a1-b5f0-4f34-9d76-6474c753e8e7-ovnkube-config\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.118358 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-run-systemd\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.118422 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-systemd-units\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.118497 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-cni-bin\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.118196 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.118675 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.118694 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.118703 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.118716 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.118724 4672 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:38Z","lastTransitionTime":"2026-02-17T16:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.118642 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-log-socket\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.118811 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t59bf\" (UniqueName: \"kubernetes.io/projected/98a910a1-b5f0-4f34-9d76-6474c753e8e7-kube-api-access-t59bf\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.118841 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-kubelet\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.118858 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-run-netns\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" 
Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.118876 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-run-ovn-kubernetes\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.118898 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-run-ovn\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.118913 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-node-log\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.118973 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/98a910a1-b5f0-4f34-9d76-6474c753e8e7-env-overrides\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.119138 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-cni-netd\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.119194 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/98a910a1-b5f0-4f34-9d76-6474c753e8e7-ovn-node-metrics-cert\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.119231 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-etc-openvswitch\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.119247 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.134234 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.141533 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vst6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjxx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vst6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.156568 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a910a1-b5f0-4f34-9d76-6474c753e8e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4f9wc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.168016 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.177673 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.188955 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cd2c6-74a5-4567-a141-be56c668e566\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d6dhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.199931 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5jjr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql9k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jjr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.212468 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.219676 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-log-socket\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.219716 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t59bf\" (UniqueName: 
\"kubernetes.io/projected/98a910a1-b5f0-4f34-9d76-6474c753e8e7-kube-api-access-t59bf\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.219731 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-log-socket\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.219740 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-kubelet\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.219759 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-run-netns\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.219778 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-run-ovn-kubernetes\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.219799 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-run-ovn\") pod 
\"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.219816 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-node-log\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.219833 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/98a910a1-b5f0-4f34-9d76-6474c753e8e7-env-overrides\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.219883 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-cni-netd\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.219889 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-run-ovn-kubernetes\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.219903 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/98a910a1-b5f0-4f34-9d76-6474c753e8e7-ovn-node-metrics-cert\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.219916 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-run-netns\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.219933 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-node-log\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.219894 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-kubelet\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.219957 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.219984 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-etc-openvswitch\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 
crc kubenswrapper[4672]: I0217 16:03:38.220023 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-run-openvswitch\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.220044 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-slash\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.219965 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-run-ovn\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.220064 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-var-lib-openvswitch\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.220100 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-var-lib-openvswitch\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.220113 4672 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/98a910a1-b5f0-4f34-9d76-6474c753e8e7-ovnkube-script-lib\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.220119 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-run-openvswitch\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.220135 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-slash\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.220021 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-cni-netd\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.220182 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-etc-openvswitch\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.220187 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/98a910a1-b5f0-4f34-9d76-6474c753e8e7-ovnkube-config\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.220216 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.220251 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-run-systemd\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.220268 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-systemd-units\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.220285 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-cni-bin\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.220388 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-systemd-units\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.220413 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-run-systemd\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.220490 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-cni-bin\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.220724 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/98a910a1-b5f0-4f34-9d76-6474c753e8e7-env-overrides\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.220797 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/98a910a1-b5f0-4f34-9d76-6474c753e8e7-ovnkube-script-lib\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.220968 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/98a910a1-b5f0-4f34-9d76-6474c753e8e7-ovnkube-config\") pod \"ovnkube-node-4f9wc\" (UID: 
\"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.221103 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.221121 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.221132 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.221146 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.221157 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:38Z","lastTransitionTime":"2026-02-17T16:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.222745 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/98a910a1-b5f0-4f34-9d76-6474c753e8e7-ovn-node-metrics-cert\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.223829 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.233831 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.239279 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t59bf\" (UniqueName: \"kubernetes.io/projected/98a910a1-b5f0-4f34-9d76-6474c753e8e7-kube-api-access-t59bf\") pod \"ovnkube-node-4f9wc\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: 
I0217 16:03:38.249055 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2g6fq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9hsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2g6fq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.252209 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-n84l8" Feb 17 16:03:38 crc kubenswrapper[4672]: W0217 16:03:38.264278 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec1ec84d_96ba_4a95_a24b_c9142495d70d.slice/crio-77643d2204030b545a42474c061054b7e4600a06940da507a82662db7263e33b WatchSource:0}: Error finding container 77643d2204030b545a42474c061054b7e4600a06940da507a82662db7263e33b: Status 404 returned error can't find the container with id 77643d2204030b545a42474c061054b7e4600a06940da507a82662db7263e33b Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.265043 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.280003 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1ec84d-96ba-4a95-a24b-c9142495d70d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n84l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.295871 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.309495 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1ec84d-96ba-4a95-a24b-c9142495d70d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n84l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.319884 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.322912 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.322945 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.322958 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.322974 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.322985 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:38Z","lastTransitionTime":"2026-02-17T16:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.327459 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd1617c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe7175715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.344474 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.362818 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.373983 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vst6k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d4bb48bf3275028f344bc73ea59e23721f24ba646e485b99181dce129096003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjxx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vst6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.397044 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a910a1-b5f0-4f34-9d76-6474c753e8e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn
kube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\"
:\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:38Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4f9wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.410165 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o
://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.426490 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.426536 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.426545 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.426559 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.426568 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:38Z","lastTransitionTime":"2026-02-17T16:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.441283 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 
16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.482384 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cd2c6-74a5-4567-a141-be56c668e566\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d6dhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.526798 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5jjr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql9k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jjr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.530459 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.530492 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.530504 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.530543 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.530558 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:38Z","lastTransitionTime":"2026-02-17T16:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.565212 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.609470 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.633406 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.633440 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.633449 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:38 crc 
kubenswrapper[4672]: I0217 16:03:38.633464 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.633475 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:38Z","lastTransitionTime":"2026-02-17T16:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.644937 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.654982 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.674459 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 
16:03:38.683253 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fa9cd2c6-74a5-4567-a141-be56c668e566-proxy-tls\") pod \"machine-config-daemon-d6dhs\" (UID: \"fa9cd2c6-74a5-4567-a141-be56c668e566\") " pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.714394 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.718179 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe-multus-daemon-config\") pod \"multus-5jjr2\" (UID: \"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\") " pod="openshift-multus/multus-5jjr2" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.734181 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.735694 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.735725 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.735736 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.735751 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.735762 4672 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:38Z","lastTransitionTime":"2026-02-17T16:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.767033 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2g6fq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ede7ba7694732d9f2cedbd2457c3ab638e067106bc5a3c6415f1dd70c86a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9hsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2g6fq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:38Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.774802 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.794891 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.801695 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl6qq\" (UniqueName: \"kubernetes.io/projected/fa9cd2c6-74a5-4567-a141-be56c668e566-kube-api-access-kl6qq\") pod \"machine-config-daemon-d6dhs\" (UID: \"fa9cd2c6-74a5-4567-a141-be56c668e566\") " pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.815382 4672 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.819172 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fa9cd2c6-74a5-4567-a141-be56c668e566-mcd-auth-proxy-config\") pod \"machine-config-daemon-d6dhs\" (UID: \"fa9cd2c6-74a5-4567-a141-be56c668e566\") " pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.838205 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.838290 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.838317 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.838349 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.838408 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:38Z","lastTransitionTime":"2026-02-17T16:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.845681 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.857734 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-5jjr2" Feb 17 16:03:38 crc kubenswrapper[4672]: W0217 16:03:38.863980 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa9cd2c6_74a5_4567_a141_be56c668e566.slice/crio-e05065a3b1b0cf96d1b34f55a39ebec5a682aaf3707292531306d0dab01ffba1 WatchSource:0}: Error finding container e05065a3b1b0cf96d1b34f55a39ebec5a682aaf3707292531306d0dab01ffba1: Status 404 returned error can't find the container with id e05065a3b1b0cf96d1b34f55a39ebec5a682aaf3707292531306d0dab01ffba1 Feb 17 16:03:38 crc kubenswrapper[4672]: W0217 16:03:38.876993 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedaf690d_34d9_4b32_8a3e_8f5cd3df2bfe.slice/crio-bfd23348a7e44cb3f402cbd60f601e971c866a657dc0377ca5300f3feb6c2162 WatchSource:0}: Error finding container bfd23348a7e44cb3f402cbd60f601e971c866a657dc0377ca5300f3feb6c2162: Status 404 returned error can't find the container with id bfd23348a7e44cb3f402cbd60f601e971c866a657dc0377ca5300f3feb6c2162 Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.906386 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 03:25:08.998406371 +0000 UTC Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.941018 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.941054 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.941063 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.941077 4672 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.941089 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:38Z","lastTransitionTime":"2026-02-17T16:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:38 crc kubenswrapper[4672]: I0217 16:03:38.944467 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:03:38 crc kubenswrapper[4672]: E0217 16:03:38.944669 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.044426 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.044473 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.044486 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.044504 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.044536 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:39Z","lastTransitionTime":"2026-02-17T16:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.118859 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerStarted","Data":"796310e24dd456ebe7e3886fd47d09ecf942ee5939fc71da9839c3d89b4a45e1"} Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.118904 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerStarted","Data":"e05065a3b1b0cf96d1b34f55a39ebec5a682aaf3707292531306d0dab01ffba1"} Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.120570 4672 generic.go:334] "Generic (PLEG): container finished" podID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerID="3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d" exitCode=0 Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.120610 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" event={"ID":"98a910a1-b5f0-4f34-9d76-6474c753e8e7","Type":"ContainerDied","Data":"3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d"} Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.120643 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" event={"ID":"98a910a1-b5f0-4f34-9d76-6474c753e8e7","Type":"ContainerStarted","Data":"41b2fda982128d8c218ff73b6e891ee27d3fd8ccd248cbe0532cdc1e1b626af4"} Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.123185 4672 generic.go:334] "Generic (PLEG): container finished" podID="ec1ec84d-96ba-4a95-a24b-c9142495d70d" containerID="5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290" exitCode=0 Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.123266 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-n84l8" event={"ID":"ec1ec84d-96ba-4a95-a24b-c9142495d70d","Type":"ContainerDied","Data":"5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290"} Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.123287 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" event={"ID":"ec1ec84d-96ba-4a95-a24b-c9142495d70d","Type":"ContainerStarted","Data":"77643d2204030b545a42474c061054b7e4600a06940da507a82662db7263e33b"} Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.125943 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5jjr2" event={"ID":"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe","Type":"ContainerStarted","Data":"0c5985f47fa75e948d85d4404b8a2df3ab6b1f73d7b074553dbf4e3894cad73c"} Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.126000 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5jjr2" event={"ID":"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe","Type":"ContainerStarted","Data":"bfd23348a7e44cb3f402cbd60f601e971c866a657dc0377ca5300f3feb6c2162"} Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.140165 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2g6fq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ede7ba7694732d9f2cedbd2457c3ab638e067106bc5a3c6415f1dd70c86a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9hsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2g6fq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.148666 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.148701 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.148709 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.148722 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.148732 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:39Z","lastTransitionTime":"2026-02-17T16:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.160294 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.175056 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.187794 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.202714 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.217940 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1ec84d-96ba-4a95-a24b-c9142495d70d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n84l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.233663 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.243330 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vst6k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d4bb48bf3275028f344bc73ea59e23721f24ba646e485b99181dce129096003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjxx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vst6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.251079 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.251111 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.251119 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.251134 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.251144 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:39Z","lastTransitionTime":"2026-02-17T16:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.267662 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a910a1-b5f0-4f34-9d76-6474c753e8e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4f9wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.295469 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd1617c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe7175715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.310193 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.323770 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5jjr2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql9k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jjr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.347521 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.353549 4672 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.353577 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.353586 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.353598 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.353610 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:39Z","lastTransitionTime":"2026-02-17T16:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.383996 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.421656 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cd2c6-74a5-4567-a141-be56c668e566\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d6dhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.456042 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.456075 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.456084 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.456097 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.456107 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:39Z","lastTransitionTime":"2026-02-17T16:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.478480 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.501662 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.531252 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:03:39 crc kubenswrapper[4672]: E0217 16:03:39.531393 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:03:47.531371466 +0000 UTC m=+36.285460198 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.531536 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.531629 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:39 crc kubenswrapper[4672]: E0217 16:03:39.531680 4672 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 16:03:39 crc kubenswrapper[4672]: E0217 16:03:39.531829 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 16:03:47.531817977 +0000 UTC m=+36.285906699 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 16:03:39 crc kubenswrapper[4672]: E0217 16:03:39.531841 4672 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 16:03:39 crc kubenswrapper[4672]: E0217 16:03:39.531997 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 16:03:47.531988322 +0000 UTC m=+36.286077044 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.546766 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vst6k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d4bb48bf3275028f344bc73ea59e23721f24ba646e485b99181dce129096003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjxx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vst6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.557957 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.557994 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.558006 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.558023 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.558035 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:39Z","lastTransitionTime":"2026-02-17T16:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.592630 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a910a1-b5f0-4f34-9d76-6474c753e8e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4f9wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.632480 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.632554 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:03:39 crc kubenswrapper[4672]: E0217 16:03:39.632695 4672 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 16:03:39 crc kubenswrapper[4672]: E0217 16:03:39.632715 4672 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 16:03:39 crc kubenswrapper[4672]: E0217 16:03:39.632728 4672 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:39 crc kubenswrapper[4672]: E0217 16:03:39.632775 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 16:03:47.632760156 +0000 UTC m=+36.386848888 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:39 crc kubenswrapper[4672]: E0217 16:03:39.632840 4672 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 16:03:39 crc kubenswrapper[4672]: E0217 16:03:39.632853 4672 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 16:03:39 crc kubenswrapper[4672]: E0217 16:03:39.632862 4672 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:39 crc kubenswrapper[4672]: E0217 16:03:39.632891 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 16:03:47.632883399 +0000 UTC m=+36.386972131 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.640498 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd1617c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe7175715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.659446 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.659477 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.659486 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.659500 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.659540 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:39Z","lastTransitionTime":"2026-02-17T16:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.663495 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cd2c6-74a5-4567-a141-be56c668e566\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d6dhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.703086 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5jjr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5985f47fa75e948d85d4404b8a2df3ab6b1f73d7b074553dbf4e3894cad73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"na
me\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql9k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jjr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.747882 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.762417 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.762464 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.762476 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.762493 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.762522 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:39Z","lastTransitionTime":"2026-02-17T16:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.783844 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.827306 4672 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.864898 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.864936 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.864945 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.864957 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.864967 4672 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:39Z","lastTransitionTime":"2026-02-17T16:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.873735 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2g6fq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ede7ba7694732d9f2cedbd2457c3ab638e067106bc5a3c6415f1dd70c86a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9hsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2g6fq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.906864 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.906960 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 11:56:22.233353972 +0000 UTC Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.943391 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.944440 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.944496 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:03:39 crc kubenswrapper[4672]: E0217 16:03:39.944554 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:03:39 crc kubenswrapper[4672]: E0217 16:03:39.944624 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.966889 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.966934 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.966946 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.966962 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.966975 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:39Z","lastTransitionTime":"2026-02-17T16:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:39 crc kubenswrapper[4672]: I0217 16:03:39.984339 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1ec84d-96ba-4a95-a24b-c9142495d70d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n84l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.024353 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.069403 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.069432 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.069440 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.069452 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.069463 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:40Z","lastTransitionTime":"2026-02-17T16:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.137006 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerStarted","Data":"f2e80bcc09d3a2f37ff69baa34fba8f223e11ce83224b820ba1cb4b6cc8df6bb"} Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.147854 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" event={"ID":"98a910a1-b5f0-4f34-9d76-6474c753e8e7","Type":"ContainerStarted","Data":"0fbde5168a81766f8e318ce4ebfc055bce7e199abc47db20e3b1767e3fb49c16"} Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.147903 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" event={"ID":"98a910a1-b5f0-4f34-9d76-6474c753e8e7","Type":"ContainerStarted","Data":"d969b7db6e8da6d14b08bf6e462b846aeaa463703d040d8dee87e847f4fca314"} Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.147918 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" event={"ID":"98a910a1-b5f0-4f34-9d76-6474c753e8e7","Type":"ContainerStarted","Data":"e0495a1c586c33fb22e3cff8faaf427f9183f30459e1c4e23d840487fa21c7db"} Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.147930 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" event={"ID":"98a910a1-b5f0-4f34-9d76-6474c753e8e7","Type":"ContainerStarted","Data":"eb856f7806f65441a26295986d6ee3b1dee692087510547ea5680d7600a5981a"} Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.147942 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" event={"ID":"98a910a1-b5f0-4f34-9d76-6474c753e8e7","Type":"ContainerStarted","Data":"2a42ffc66b52e8db408035eb1e3fd03670217a0a1cabe42a972d0dfeb2308997"} Feb 17 16:03:40 crc 
kubenswrapper[4672]: I0217 16:03:40.147953 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" event={"ID":"98a910a1-b5f0-4f34-9d76-6474c753e8e7","Type":"ContainerStarted","Data":"42df411df161c300edce4e00a51babea135433c68a188f56d438df2665f7a6b7"} Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.151245 4672 generic.go:334] "Generic (PLEG): container finished" podID="ec1ec84d-96ba-4a95-a24b-c9142495d70d" containerID="631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868" exitCode=0 Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.151340 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" event={"ID":"ec1ec84d-96ba-4a95-a24b-c9142495d70d","Type":"ContainerDied","Data":"631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868"} Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.152678 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.166375 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1ec84d-96ba-4a95-a24b-c9142495d70d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n84l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.173048 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.173085 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.173095 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.173129 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.173143 4672 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:40Z","lastTransitionTime":"2026-02-17T16:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.180141 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vst6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d4bb48bf3275028f344bc73ea59e23721f24ba646e485b99181dce129096003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjxx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vst6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.202090 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a910a1-b5f0-4f34-9d76-6474c753e8e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4f9wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.233101 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd1617c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe7175715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.265817 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.275802 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.275829 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.275837 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.275851 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.275859 4672 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:40Z","lastTransitionTime":"2026-02-17T16:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.303303 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.346558 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fd
ee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.377715 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 
16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.377748 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.377760 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.377775 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.377785 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:40Z","lastTransitionTime":"2026-02-17T16:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.387087 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.427549 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cd2c6-74a5-4567-a141-be56c668e566\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e80bcc09d3a2f37ff69baa34fba8f223e11ce83224b820ba1cb4b6cc8df6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://796310e24dd456ebe7e3886fd47d09ecf942ee59
39fc71da9839c3d89b4a45e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d6dhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.468883 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5jjr2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5985f47fa75e948d85d4404b8a2df3ab6b1f73d7b074553dbf4e3894cad73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql9k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jjr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.481318 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:40 crc 
kubenswrapper[4672]: I0217 16:03:40.481466 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.481491 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.481572 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.481600 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:40Z","lastTransitionTime":"2026-02-17T16:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.504339 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.552225 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.583580 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.585770 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.585814 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.585829 4672 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.585845 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.585858 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:40Z","lastTransitionTime":"2026-02-17T16:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.624570 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2g6fq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ede7ba7694732d9f2cedbd2457c3ab638e067
106bc5a3c6415f1dd70c86a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9hsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2g6fq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.666154 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.689505 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.689570 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.689583 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.689599 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.689612 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:40Z","lastTransitionTime":"2026-02-17T16:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.711957 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1ec84d-96ba-4a95-a24b-c9142495d70d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n84l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.751769 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd1617c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe7175715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.790503 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.792222 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.792278 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.792297 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.792322 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.792342 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:40Z","lastTransitionTime":"2026-02-17T16:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.828729 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.862571 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vst6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d4bb48bf3275028f344bc73ea59e23721f24ba646e485b99181dce129096003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjxx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vst6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.894912 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.895100 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.895196 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.895285 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.895391 4672 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:40Z","lastTransitionTime":"2026-02-17T16:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.907643 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 08:52:52.361430202 +0000 UTC Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.915436 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a910a1-b5f0-4f34-9d76-6474c753e8e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2a
f0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4f9wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.944745 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:03:40 crc kubenswrapper[4672]: E0217 16:03:40.944864 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.948066 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://
65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.982690 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cd2c6-74a5-4567-a141-be56c668e566\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e80bcc09d3a2f37ff69baa34fba8f223e11ce83224b820ba1cb4b6cc8df6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://796310e24dd456ebe7e3886fd47d09ecf942ee59
39fc71da9839c3d89b4a45e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d6dhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:40Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.997353 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.997574 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.997707 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:40 crc 
kubenswrapper[4672]: I0217 16:03:40.997823 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:40 crc kubenswrapper[4672]: I0217 16:03:40.997929 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:40Z","lastTransitionTime":"2026-02-17T16:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.027673 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5jjr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5985f47fa75e948d85d4404b8a2df3ab6b1f73d7b074553dbf4e3894cad73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql9k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jjr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:41Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.064656 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageI
D\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:41Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.100192 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 
16:03:41.100235 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.100249 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.100291 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.100305 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:41Z","lastTransitionTime":"2026-02-17T16:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.110082 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:41Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.153974 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:41Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.158172 4672 generic.go:334] "Generic (PLEG): container finished" podID="ec1ec84d-96ba-4a95-a24b-c9142495d70d" containerID="849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5" exitCode=0 Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.158218 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" event={"ID":"ec1ec84d-96ba-4a95-a24b-c9142495d70d","Type":"ContainerDied","Data":"849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5"} Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.187462 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2g6fq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ede7ba7694732d9f2cedbd2457c3ab638e067106bc5a3c6415f1dd70c86a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9hsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2g6fq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:41Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.205384 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.205429 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.205446 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.205466 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.205481 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:41Z","lastTransitionTime":"2026-02-17T16:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.228538 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:41Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.264402 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5jjr2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5985f47fa75e948d85d4404b8a2df3ab6b1f73d7b074553dbf4e3894cad73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql9k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jjr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:41Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.303283 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:41Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.308040 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.308123 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.308135 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.308152 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.308163 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:41Z","lastTransitionTime":"2026-02-17T16:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.346943 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:41Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.390307 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cd2c6-74a5-4567-a141-be56c668e566\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e80bcc09d3a2f37ff69baa34fba8f223e11ce83224b820ba1cb4b6cc8df6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://796310e24dd456ebe7e3886fd47d09ecf942ee59
39fc71da9839c3d89b4a45e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d6dhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:41Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.411132 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.411212 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.411226 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:41 crc 
kubenswrapper[4672]: I0217 16:03:41.411252 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.411265 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:41Z","lastTransitionTime":"2026-02-17T16:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.425871 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2g6fq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ede7ba7694732d9f2cedbd2457c3ab638e067106bc5a3c6415f1dd70c86a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9hsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2g6fq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:41Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.465873 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:41Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.505604 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:41Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.515183 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.515222 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.515232 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:41 crc 
kubenswrapper[4672]: I0217 16:03:41.515251 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.515264 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:41Z","lastTransitionTime":"2026-02-17T16:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.551475 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:41Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.592059 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:41Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.619207 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.619260 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.619277 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.619299 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.619317 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:41Z","lastTransitionTime":"2026-02-17T16:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.630102 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1ec84d-96ba-4a95-a24b-c9142495d70d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n84l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:41Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 
16:03:41.664229 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:41Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.705863 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vst6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d4bb48bf3275028f344bc73ea59e23721f24ba646e485b99181dce129096003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03
:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjxx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vst6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:41Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.721529 4672 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.722272 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.722327 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.722352 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.722382 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.722403 4672 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:41Z","lastTransitionTime":"2026-02-17T16:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.771636 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a910a1-b5f0-4f34-9d76-6474c753e8e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4f9wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:41Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.802239 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd1617c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe7175715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:41Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.825455 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.825526 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.825540 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.825559 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.825572 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:41Z","lastTransitionTime":"2026-02-17T16:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.830221 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:41Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.908641 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 04:20:20.95005442 +0000 UTC Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.928694 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.928750 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.928768 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.928792 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.928809 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:41Z","lastTransitionTime":"2026-02-17T16:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.945674 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:41 crc kubenswrapper[4672]: E0217 16:03:41.945821 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.946262 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:03:41 crc kubenswrapper[4672]: E0217 16:03:41.946355 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.967208 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cert
s\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\
\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:41Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:41 crc kubenswrapper[4672]: I0217 16:03:41.993405 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:41Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.011802 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cd2c6-74a5-4567-a141-be56c668e566\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e80bcc09d3a2f37ff69baa34fba8f223e11ce83224b820ba1cb4b6cc8df6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://796310e24dd456ebe7e3886fd47d09ecf942ee59
39fc71da9839c3d89b4a45e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d6dhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.083736 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.083890 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.083904 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:42 crc 
kubenswrapper[4672]: I0217 16:03:42.083921 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.083933 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:42Z","lastTransitionTime":"2026-02-17T16:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.086090 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5jjr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5985f47fa75e948d85d4404b8a2df3ab6b1f73d7b074553dbf4e3894cad73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql9k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jjr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.101601 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.117442 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.131774 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.143139 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2g6fq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ede7ba7694732d9f2cedbd2457c3ab638e067106bc5a3c6415f1dd70c86a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9hsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2g6fq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.164272 4672 generic.go:334] "Generic (PLEG): container finished" podID="ec1ec84d-96ba-4a95-a24b-c9142495d70d" containerID="c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551" exitCode=0 Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.164356 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" event={"ID":"ec1ec84d-96ba-4a95-a24b-c9142495d70d","Type":"ContainerDied","Data":"c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551"} Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.168819 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" event={"ID":"98a910a1-b5f0-4f34-9d76-6474c753e8e7","Type":"ContainerStarted","Data":"24931b90f0faa42a5320df38225b1fc1c4ba21ddb6b43c1ab84047c9178dfea4"} Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.184761 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.193230 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.193263 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.193273 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.193287 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.193297 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:42Z","lastTransitionTime":"2026-02-17T16:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.224465 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1ec84d-96ba-4a95-a24b-c9142495d70d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n84l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 
16:03:42.274109 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd1617c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe7175715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.296170 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.296204 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.296214 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.296229 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.296242 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:42Z","lastTransitionTime":"2026-02-17T16:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.304411 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.347812 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.383006 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vst6k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d4bb48bf3275028f344bc73ea59e23721f24ba646e485b99181dce129096003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjxx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vst6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.399037 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.399104 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.399146 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.399173 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.399191 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:42Z","lastTransitionTime":"2026-02-17T16:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.432618 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a910a1-b5f0-4f34-9d76-6474c753e8e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4f9wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.477388 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd1617c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe7175715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.501954 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.501998 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.502007 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.502021 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.502030 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:42Z","lastTransitionTime":"2026-02-17T16:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.508443 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.551280 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.583701 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vst6k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d4bb48bf3275028f344bc73ea59e23721f24ba646e485b99181dce129096003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjxx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vst6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.604776 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.604835 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.604859 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.604888 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.604905 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:42Z","lastTransitionTime":"2026-02-17T16:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.627436 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a910a1-b5f0-4f34-9d76-6474c753e8e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4f9wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.663180 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.702701 4672 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.707692 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.707738 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.707756 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.707778 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.707794 4672 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:42Z","lastTransitionTime":"2026-02-17T16:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.742187 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cd2c6-74a5-4567-a141-be56c668e566\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e80bcc09d3a2f37ff69baa34fba8f223e11ce83224b820ba1cb4b6cc8df6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://796310e24dd456ebe7e3886fd47d09ecf942ee5939fc71da9839c3d89b4a45e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d6dhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-17T16:03:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.783859 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5jjr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5985f47fa75e948d85d4404b8a2df3ab6b1f73d7b074553dbf4e3894cad73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/h
ost/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql9k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jjr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-17T16:03:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.810834 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.810886 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.810902 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.810924 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.810940 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:42Z","lastTransitionTime":"2026-02-17T16:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.828468 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.864126 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.905373 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.909500 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 02:13:52.530976734 +0000 UTC Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.913154 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.913198 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.913209 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.913227 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 
16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.913239 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:42Z","lastTransitionTime":"2026-02-17T16:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.943010 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2g6fq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ede7ba7694732d9f2cedbd2457c3ab638e067106bc5a3c6415f1dd70c86a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f41
6f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9hsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2g6fq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.947861 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:03:42 crc kubenswrapper[4672]: E0217 16:03:42.948084 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:03:42 crc kubenswrapper[4672]: I0217 16:03:42.986400 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.016430 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.016474 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.016483 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.016500 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.016536 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:43Z","lastTransitionTime":"2026-02-17T16:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.025889 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1ec84d-96ba-4a95-a24b-c9142495d70d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-n84l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.120160 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.120236 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.120260 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.120292 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.120316 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:43Z","lastTransitionTime":"2026-02-17T16:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.178284 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" event={"ID":"ec1ec84d-96ba-4a95-a24b-c9142495d70d","Type":"ContainerDied","Data":"cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a"} Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.178365 4672 generic.go:334] "Generic (PLEG): container finished" podID="ec1ec84d-96ba-4a95-a24b-c9142495d70d" containerID="cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a" exitCode=0 Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.202296 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5jjr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5985f47fa75e948d85d4404b8a2df3ab6b1f73d7b074553dbf4e3894cad73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba
93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql9k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\"
:\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jjr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.223431 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.223501 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.223576 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.223609 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.223633 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:43Z","lastTransitionTime":"2026-02-17T16:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.226357 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb
7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.249786 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.268789 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cd2c6-74a5-4567-a141-be56c668e566\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e80bcc09d3a2f37ff69baa34fba8f223e11ce83224b820ba1cb4b6cc8df6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://796310e24dd456ebe7e3886fd47d09ecf942ee59
39fc71da9839c3d89b4a45e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d6dhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.281936 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2g6fq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ede7ba7694732d9f2cedbd2457c3ab638e067106bc5a3c6415f1dd70c86a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9hsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2g6fq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.296301 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.311014 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.325756 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.325825 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.325843 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:43 crc 
kubenswrapper[4672]: I0217 16:03:43.325872 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.325890 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:43Z","lastTransitionTime":"2026-02-17T16:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.352647 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.388051 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.439970 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1ec84d-96ba-4a95-a24b-c9142495d70d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n84l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.445694 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.445743 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.445755 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.445776 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.445791 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:43Z","lastTransitionTime":"2026-02-17T16:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.477349 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.505066 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vst6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d4bb48bf3275028f344bc73ea59e23721f24ba646e485b99181dce129096003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjxx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vst6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.546324 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a910a1-b5f0-4f34-9d76-6474c753e8e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4f9wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.548018 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.548062 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.548075 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.548092 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.548159 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:43Z","lastTransitionTime":"2026-02-17T16:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.592987 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd1617c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe7175715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.623636 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:43Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.651261 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.651341 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.651363 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.651395 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.651417 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:43Z","lastTransitionTime":"2026-02-17T16:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.755020 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.755113 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.755138 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.755164 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.755184 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:43Z","lastTransitionTime":"2026-02-17T16:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.857740 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.857803 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.857821 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.857844 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.857863 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:43Z","lastTransitionTime":"2026-02-17T16:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.910459 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 21:13:47.727434586 +0000 UTC Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.944768 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:03:43 crc kubenswrapper[4672]: E0217 16:03:43.944897 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.945337 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:43 crc kubenswrapper[4672]: E0217 16:03:43.945421 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.961068 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.961121 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.961138 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.961162 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:43 crc kubenswrapper[4672]: I0217 16:03:43.961179 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:43Z","lastTransitionTime":"2026-02-17T16:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.064222 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.064321 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.064347 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.064378 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.064405 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:44Z","lastTransitionTime":"2026-02-17T16:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.167967 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.168010 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.168022 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.168040 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.168051 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:44Z","lastTransitionTime":"2026-02-17T16:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.186473 4672 generic.go:334] "Generic (PLEG): container finished" podID="ec1ec84d-96ba-4a95-a24b-c9142495d70d" containerID="44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3" exitCode=0 Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.186536 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" event={"ID":"ec1ec84d-96ba-4a95-a24b-c9142495d70d","Type":"ContainerDied","Data":"44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3"} Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.213796 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd1617c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe7175715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-d
ir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.230208 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.244591 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.257785 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vst6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d4bb48bf3275028f344bc73ea59e23721f24ba646e485b99181dce129096003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjxx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vst6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.271084 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.271157 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.271186 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.271220 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.271244 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:44Z","lastTransitionTime":"2026-02-17T16:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.283877 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a910a1-b5f0-4f34-9d76-6474c753e8e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4f9wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.301363 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.319047 4672 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.345066 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cd2c6-74a5-4567-a141-be56c668e566\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e80bcc09d3a2f37ff69baa34fba8f223e11ce83224b820ba1cb4b6cc8df6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://796310e24dd456ebe7e3886fd47d09ecf942ee59
39fc71da9839c3d89b4a45e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d6dhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.367810 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5jjr2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5985f47fa75e948d85d4404b8a2df3ab6b1f73d7b074553dbf4e3894cad73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql9k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jjr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.374729 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:44 crc 
kubenswrapper[4672]: I0217 16:03:44.374788 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.374808 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.374832 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.374849 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:44Z","lastTransitionTime":"2026-02-17T16:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.389358 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.409193 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.427659 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.444568 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2g6fq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ede7ba7694732d9f2cedbd2457c3ab638e067106bc5a3c6415f1dd70c86a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9hsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2g6fq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.460923 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.483272 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1ec84d-96ba-4a95-a24b-c9142495d70d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd
7ea6cc902d4648a6019b2b867d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n84l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.485270 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.485630 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.485865 4672 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.486002 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.486323 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:44Z","lastTransitionTime":"2026-02-17T16:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.588984 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.589029 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.589042 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.589061 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.589075 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:44Z","lastTransitionTime":"2026-02-17T16:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.691862 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.691937 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.691952 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.691974 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.691986 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:44Z","lastTransitionTime":"2026-02-17T16:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.773793 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.786186 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vst6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d4bb48bf3275028f344bc73ea59e23721f24ba646e485b99181dce129096003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjxx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vst6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.803798 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.803859 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.803879 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.803901 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.803917 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:44Z","lastTransitionTime":"2026-02-17T16:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.811929 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a910a1-b5f0-4f34-9d76-6474c753e8e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4f9wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.835586 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd1617c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe7175715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.854689 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921
fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.868857 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.881240 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.897204 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.906977 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.907044 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.907055 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.907073 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.907084 4672 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:44Z","lastTransitionTime":"2026-02-17T16:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.911660 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 05:59:41.332132104 +0000 UTC Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.912571 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cd2c6-74a5-4567-a141-be56c668e566\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e80bcc09d3a2f37ff69baa34fba8f223e11ce83224b820ba1cb4b6cc8df6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d6643
8c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://796310e24dd456ebe7e3886fd47d09ecf942ee5939fc71da9839c3d89b4a45e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d6dhs\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.926043 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5jjr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5985f47fa75e948d85d4404b8a2df3ab6b1f73d7b074553dbf4e3894cad73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"starte
dAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql9k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jjr2\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.943060 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.944322 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:03:44 crc kubenswrapper[4672]: E0217 16:03:44.944570 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.960449 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.977232 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:44 crc kubenswrapper[4672]: I0217 16:03:44.993146 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2g6fq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ede7ba7694732d9f2cedbd2457c3ab638e067106bc5a3c6415f1dd70c86a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9hsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2g6fq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:44Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.010046 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.010094 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.010106 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.010126 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.010140 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:45Z","lastTransitionTime":"2026-02-17T16:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.013174 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:45Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.039094 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1ec84d-96ba-4a95-a24b-c9142495d70d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n84l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:45Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.113372 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.113464 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.113496 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.113642 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.113679 4672 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:45Z","lastTransitionTime":"2026-02-17T16:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.196066 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" event={"ID":"ec1ec84d-96ba-4a95-a24b-c9142495d70d","Type":"ContainerStarted","Data":"86645990eea64dfe6b5933473b48df128ceaa3f4fe9da4f8307442da1b6ad808"} Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.203537 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" event={"ID":"98a910a1-b5f0-4f34-9d76-6474c753e8e7","Type":"ContainerStarted","Data":"6fc5400a2ee51af7b4d6668478faaddd1d5f33a379bb0da3784adee047d6c4a6"} Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.204322 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.204403 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.217496 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.217602 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.217626 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:45 crc 
kubenswrapper[4672]: I0217 16:03:45.217657 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.217686 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:45Z","lastTransitionTime":"2026-02-17T16:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.227803 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:45Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.293276 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:45Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.297655 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.298462 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.310067 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cd2c6-74a5-4567-a141-be56c668e566\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e80bcc09d3a2f37ff69baa34fba8f223e11ce83224b820ba1cb4b6cc8df6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://796310e24dd456ebe7e3886fd47d09ecf942ee59
39fc71da9839c3d89b4a45e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d6dhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:45Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.319619 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.319672 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.319685 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:45 crc 
kubenswrapper[4672]: I0217 16:03:45.319709 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.319723 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:45Z","lastTransitionTime":"2026-02-17T16:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.326354 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5jjr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5985f47fa75e948d85d4404b8a2df3ab6b1f73d7b074553dbf4e3894cad73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql9k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jjr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:45Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.344410 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:45Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.356439 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:45Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.369348 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:45Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.378384 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2g6fq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ede7ba7694732d9f2cedbd2457c3ab638e067106bc5a3c6415f1dd70c86a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9hsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2g6fq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:45Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.392801 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:45Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.405324 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1ec84d-96ba-4a95-a24b-c9142495d70d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86645990eea64dfe6b5933473b48df128ceaa3f4fe9da4f8307442da1b6ad808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7175
9b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n84l8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:45Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.421920 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.421967 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.421979 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.421995 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.422006 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:45Z","lastTransitionTime":"2026-02-17T16:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.426775 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd1617c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe7175715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:45Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.444106 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921
fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:45Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.456871 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:03:45Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.468402 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vst6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d4bb48bf3275028f344bc73ea59e23721f24ba646e485b99181dce129096003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjxx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vst6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:45Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.490638 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a910a1-b5f0-4f34-9d76-6474c753e8e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4f9wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:45Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.513828 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd1617c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe7175715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:45Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.524061 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.524127 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.524151 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.524179 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.524260 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:45Z","lastTransitionTime":"2026-02-17T16:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.526076 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:45Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.543587 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:03:45Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.582493 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vst6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d4bb48bf3275028f344bc73ea59e23721f24ba646e485b99181dce129096003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjxx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vst6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:45Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.626261 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.626323 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.626337 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.626353 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.626365 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:45Z","lastTransitionTime":"2026-02-17T16:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.630785 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a910a1-b5f0-4f34-9d76-6474c753e8e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb856f7806f65441a26295986d6ee3b1dee692087510547ea5680d7600a5981a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0495a1c586c33fb22e3cff8faaf427f9183f30459e1c4e23d840487fa21c7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbde5168a81766f8e318ce4ebfc055bce7e199abc47db20e3b1767e3fb49c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d969b7db6e8da6d14b08bf6e462b846aeaa463703d040d8dee87e847f4fca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a42ffc66b52e8db408035eb1e3fd03670217a0a1cabe42a972d0dfeb2308997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42df411df161c300edce4e00a51babea135433c68a188f56d438df2665f7a6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc5400a2ee51af7b4d6668478faaddd1d5f33a379bb0da3784adee047d6c4a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24931b90f0faa42a5320df38225b1fc1c4ba21ddb6b43c1ab84047c9178dfea4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4f9wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:45Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.667920 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026
b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:45Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.705502 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:45Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.728687 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.728726 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.728739 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.728757 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.728773 4672 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:45Z","lastTransitionTime":"2026-02-17T16:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.744758 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cd2c6-74a5-4567-a141-be56c668e566\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e80bcc09d3a2f37ff69baa34fba8f223e11ce83224b820ba1cb4b6cc8df6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://796310e24dd456ebe7e3886fd47d09ecf942ee5939fc71da9839c3d89b4a45e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d6dhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-17T16:03:45Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.789347 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5jjr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5985f47fa75e948d85d4404b8a2df3ab6b1f73d7b074553dbf4e3894cad73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/h
ost/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql9k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jjr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-17T16:03:45Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.825959 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:45Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.831135 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.831288 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.831390 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.831494 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.831641 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:45Z","lastTransitionTime":"2026-02-17T16:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.871639 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:45Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.905479 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:45Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.912809 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 23:46:22.181824881 +0000 UTC Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.935737 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 
16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.935800 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.935817 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.935839 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.935857 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:45Z","lastTransitionTime":"2026-02-17T16:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.943907 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2g6fq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ede7ba7694732d9f2cedbd2457c3ab638e067106bc5a3c6415f1dd70c86a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9hsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2g6fq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:45Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.944055 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.944077 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:45 crc kubenswrapper[4672]: E0217 16:03:45.944157 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:03:45 crc kubenswrapper[4672]: E0217 16:03:45.944617 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:03:45 crc kubenswrapper[4672]: I0217 16:03:45.987376 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:45Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.028601 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1ec84d-96ba-4a95-a24b-c9142495d70d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86645990eea64dfe6b5933473b48df128ceaa3f4fe9da4f8307442da1b6ad808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba9
3a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"exitCode\\\":0,\\\"fin
ishedAt\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2
758773a248c1c678e640a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n84l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:46Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.038965 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.039027 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.039044 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.039070 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.039087 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:46Z","lastTransitionTime":"2026-02-17T16:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.142346 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.142399 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.142412 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.142437 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.142451 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:46Z","lastTransitionTime":"2026-02-17T16:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.207414 4672 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.245835 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.245901 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.245919 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.245944 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.245962 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:46Z","lastTransitionTime":"2026-02-17T16:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.348477 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.348570 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.348588 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.348612 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.348628 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:46Z","lastTransitionTime":"2026-02-17T16:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.451313 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.451369 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.451388 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.451412 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.451436 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:46Z","lastTransitionTime":"2026-02-17T16:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.554578 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.554933 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.555068 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.555210 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.555342 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:46Z","lastTransitionTime":"2026-02-17T16:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.705320 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.705381 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.705395 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.705416 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.705432 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:46Z","lastTransitionTime":"2026-02-17T16:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.808705 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.808763 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.808772 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.808787 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.808798 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:46Z","lastTransitionTime":"2026-02-17T16:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.912408 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.912456 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.912470 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.912486 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.912503 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:46Z","lastTransitionTime":"2026-02-17T16:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.914018 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 20:46:44.862255272 +0000 UTC Feb 17 16:03:46 crc kubenswrapper[4672]: I0217 16:03:46.944854 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:03:46 crc kubenswrapper[4672]: E0217 16:03:46.944993 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.015119 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.015158 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.015174 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.015193 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.015208 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:47Z","lastTransitionTime":"2026-02-17T16:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.118742 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.119027 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.119037 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.119051 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.119062 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:47Z","lastTransitionTime":"2026-02-17T16:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.210427 4672 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.222908 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.222964 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.222978 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.222995 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.223008 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:47Z","lastTransitionTime":"2026-02-17T16:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.325374 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.325425 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.325440 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.325461 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.325479 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:47Z","lastTransitionTime":"2026-02-17T16:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.428261 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.428323 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.428340 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.428363 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.428379 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:47Z","lastTransitionTime":"2026-02-17T16:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.530800 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.530847 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.530857 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.530874 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.530886 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:47Z","lastTransitionTime":"2026-02-17T16:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.534157 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.534258 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:47 crc kubenswrapper[4672]: E0217 16:03:47.534278 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:04:03.534253842 +0000 UTC m=+52.288342584 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.534313 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:47 crc kubenswrapper[4672]: E0217 16:03:47.534401 4672 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 16:03:47 crc kubenswrapper[4672]: E0217 16:03:47.534455 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 16:04:03.534440037 +0000 UTC m=+52.288528779 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 16:03:47 crc kubenswrapper[4672]: E0217 16:03:47.534463 4672 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 16:03:47 crc kubenswrapper[4672]: E0217 16:03:47.534549 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 16:04:03.534506439 +0000 UTC m=+52.288595201 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.633431 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.633479 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.633490 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.633526 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 
17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.633539 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:47Z","lastTransitionTime":"2026-02-17T16:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.634863 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.634975 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:03:47 crc kubenswrapper[4672]: E0217 16:03:47.635024 4672 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 16:03:47 crc kubenswrapper[4672]: E0217 16:03:47.635042 4672 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 16:03:47 crc kubenswrapper[4672]: E0217 16:03:47.635052 4672 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:47 crc kubenswrapper[4672]: E0217 16:03:47.635098 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 16:04:03.635083368 +0000 UTC m=+52.389172100 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:47 crc kubenswrapper[4672]: E0217 16:03:47.635161 4672 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 16:03:47 crc kubenswrapper[4672]: E0217 16:03:47.635186 4672 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 16:03:47 crc kubenswrapper[4672]: E0217 16:03:47.635203 4672 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:47 crc kubenswrapper[4672]: E0217 16:03:47.635271 4672 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 16:04:03.635251492 +0000 UTC m=+52.389340254 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.735241 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.735315 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.735335 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.735360 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.735376 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:47Z","lastTransitionTime":"2026-02-17T16:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.838147 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.838180 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.838189 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.838202 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.838212 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:47Z","lastTransitionTime":"2026-02-17T16:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.914824 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 09:57:44.001955115 +0000 UTC Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.939971 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.939993 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.940000 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.940011 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.940018 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:47Z","lastTransitionTime":"2026-02-17T16:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.944492 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:47 crc kubenswrapper[4672]: E0217 16:03:47.944665 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:03:47 crc kubenswrapper[4672]: I0217 16:03:47.945910 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:03:47 crc kubenswrapper[4672]: E0217 16:03:47.946009 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.042752 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.043169 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.043296 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.043390 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.043494 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:48Z","lastTransitionTime":"2026-02-17T16:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.147320 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.147396 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.147421 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.147455 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.147478 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:48Z","lastTransitionTime":"2026-02-17T16:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.217389 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4f9wc_98a910a1-b5f0-4f34-9d76-6474c753e8e7/ovnkube-controller/0.log" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.221788 4672 generic.go:334] "Generic (PLEG): container finished" podID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerID="6fc5400a2ee51af7b4d6668478faaddd1d5f33a379bb0da3784adee047d6c4a6" exitCode=1 Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.221851 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" event={"ID":"98a910a1-b5f0-4f34-9d76-6474c753e8e7","Type":"ContainerDied","Data":"6fc5400a2ee51af7b4d6668478faaddd1d5f33a379bb0da3784adee047d6c4a6"} Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.223212 4672 scope.go:117] "RemoveContainer" containerID="6fc5400a2ee51af7b4d6668478faaddd1d5f33a379bb0da3784adee047d6c4a6" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.243339 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:48Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.250185 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.250221 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.250232 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.250247 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.250259 4672 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:48Z","lastTransitionTime":"2026-02-17T16:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.261589 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cd2c6-74a5-4567-a141-be56c668e566\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e80bcc09d3a2f37ff69baa34fba8f223e11ce83224b820ba1cb4b6cc8df6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://796310e24dd456ebe7e3886fd47d09ecf942ee5939fc71da9839c3d89b4a45e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d6dhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-17T16:03:48Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.279336 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5jjr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5985f47fa75e948d85d4404b8a2df3ab6b1f73d7b074553dbf4e3894cad73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/h
ost/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql9k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jjr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-17T16:03:48Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.295590 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-
o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-oper
ator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:48Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.314355 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:48Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.327194 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:48Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.340827 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2g6fq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ede7ba7694732d9f2cedbd2457c3ab638e067106bc5a3c6415f1dd70c86a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9hsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2g6fq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:48Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.354037 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.354088 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.354104 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.354127 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.354146 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:48Z","lastTransitionTime":"2026-02-17T16:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.362199 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:48Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.380370 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:48Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.405036 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.405097 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.405115 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.405138 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.405163 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:48Z","lastTransitionTime":"2026-02-17T16:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.399439 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1ec84d-96ba-4a95-a24b-c9142495d70d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86645990eea64dfe6b5933473b48df128ceaa3f4fe9da4f8307442da1b6ad808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n84l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:48Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:48 crc kubenswrapper[4672]: E0217 16:03:48.429453 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"793c4034-4ed2-49c9-abb4-00e3faa205d0\\\",\\\"systemUUID\\\":\\\"561271bd-298c-447a-8ba6-beca2786bcfb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:48Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.435050 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.435110 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.435127 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.435155 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.435174 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:48Z","lastTransitionTime":"2026-02-17T16:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.445675 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd1617c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe7175715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:48Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:48 crc kubenswrapper[4672]: E0217 16:03:48.454012 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:48Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"793c4034-4ed2-49c9-abb4-00e3faa205d0\\\",\\\"systemUUID\\\":\\\"561271bd-298c-447a-8ba6-beca2786bcfb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:48Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.460012 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.460056 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.460071 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.460090 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.460103 4672 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:48Z","lastTransitionTime":"2026-02-17T16:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.470066 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5
52c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:48Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:48 crc kubenswrapper[4672]: E0217 16:03:48.474388 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"793c4034-4ed2-49c9-abb4-00e3faa205d0\\\",\\\"systemUUID\\\":\\\"561271bd-298c-447a-8ba6-beca2786bcfb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:48Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.478951 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.478996 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.479018 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.479047 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.479069 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:48Z","lastTransitionTime":"2026-02-17T16:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.483960 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:48Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:48 crc kubenswrapper[4672]: E0217 16:03:48.493883 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:48Z\\\",\\\"message\\\":\\\"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redh
at/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99
d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815
\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\"
:448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"793c4034-4ed2-49c9-abb4-00e3faa205d0\\\",\\\"systemUUID\\\":\\\"561271bd-298c-447a-8ba6-beca2786bcfb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:48Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.501703 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vst6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d4bb48bf3275028f344bc73ea59e23721f24ba646e485b99181dce129096003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9
c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjxx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vst6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:48Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.505201 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.505272 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.505283 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.505306 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.505319 4672 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:48Z","lastTransitionTime":"2026-02-17T16:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.519094 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a910a1-b5f0-4f34-9d76-6474c753e8e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb856f7806f65441a26295986d6ee3b1dee692087510547ea5680d7600a5981a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0495a1c586c33fb22e3cff8faaf427f9183f30459e1c4e23d840487fa21c7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbde5168a81766f8e318ce4ebfc055bce7e199abc47db20e3b1767e3fb49c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d969b7db6e8da6d14b08bf6e462b846aeaa463703d040d8dee87e847f4fca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a42ffc66b52e8db408035eb1e3fd03670217a0a1cabe42a972d0dfeb2308997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42df411df161c300edce4e00a51babea135433c68a188f56d438df2665f7a6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc5400a2ee51af7b4d6668478faaddd1d5f33a379bb0da3784adee047d6c4a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fc5400a2ee51af7b4d6668478faaddd1d5f33a379bb0da3784adee047d6c4a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:03:47Z\\\",\\\"message\\\":\\\" 5990 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:47.242623 5990 
reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:47.242683 5990 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 16:03:47.242749 5990 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:03:47.242820 5990 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 16:03:47.242879 5990 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 16:03:47.242917 5990 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0217 16:03:47.242964 5990 factory.go:656] Stopping watch factory\\\\nI0217 16:03:47.243012 5990 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 16:03:47.243055 5990 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 16:03:47.243081 5990 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 16:03:47.243098 5990 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24931b90f0faa42a5320df38225b1fc1c4ba21ddb6b43c1ab84047c9178dfea4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4f9wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:48Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:48 crc kubenswrapper[4672]: E0217 16:03:48.519730 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"793c4034-4ed2-49c9-abb4-00e3faa205d0\\\",\\\"systemUUID\\\":\\\"561271bd-298c-447a-8ba6-beca2786bcfb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:48Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:48 crc kubenswrapper[4672]: E0217 16:03:48.519903 4672 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.521450 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.521536 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.521554 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.521576 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.521591 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:48Z","lastTransitionTime":"2026-02-17T16:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.624220 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.624271 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.624282 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.624302 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.624317 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:48Z","lastTransitionTime":"2026-02-17T16:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.726403 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.726433 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.726441 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.726454 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.726462 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:48Z","lastTransitionTime":"2026-02-17T16:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.828653 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.828704 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.828721 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.828744 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.828761 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:48Z","lastTransitionTime":"2026-02-17T16:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.914949 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 20:47:37.216123322 +0000 UTC Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.931167 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.931232 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.931250 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.931276 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.931294 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:48Z","lastTransitionTime":"2026-02-17T16:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:48 crc kubenswrapper[4672]: I0217 16:03:48.944683 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:03:48 crc kubenswrapper[4672]: E0217 16:03:48.944846 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.034405 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.034470 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.034489 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.034550 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.034594 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:49Z","lastTransitionTime":"2026-02-17T16:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.137246 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.137293 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.137308 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.137327 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.137341 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:49Z","lastTransitionTime":"2026-02-17T16:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.231659 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4f9wc_98a910a1-b5f0-4f34-9d76-6474c753e8e7/ovnkube-controller/1.log" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.232665 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4f9wc_98a910a1-b5f0-4f34-9d76-6474c753e8e7/ovnkube-controller/0.log" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.236400 4672 generic.go:334] "Generic (PLEG): container finished" podID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerID="d283cb6739fe72163d4567bab0b63cc72697a661ee8ae5dbe706f0e378e02aeb" exitCode=1 Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.236456 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" event={"ID":"98a910a1-b5f0-4f34-9d76-6474c753e8e7","Type":"ContainerDied","Data":"d283cb6739fe72163d4567bab0b63cc72697a661ee8ae5dbe706f0e378e02aeb"} Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.236552 4672 scope.go:117] "RemoveContainer" containerID="6fc5400a2ee51af7b4d6668478faaddd1d5f33a379bb0da3784adee047d6c4a6" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.238201 4672 scope.go:117] "RemoveContainer" containerID="d283cb6739fe72163d4567bab0b63cc72697a661ee8ae5dbe706f0e378e02aeb" Feb 17 16:03:49 crc kubenswrapper[4672]: E0217 16:03:49.238659 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-4f9wc_openshift-ovn-kubernetes(98a910a1-b5f0-4f34-9d76-6474c753e8e7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.241638 4672 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.241732 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.241756 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.241785 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.241808 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:49Z","lastTransitionTime":"2026-02-17T16:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.263397 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb
7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:49Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.280492 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:49Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.307605 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cd2c6-74a5-4567-a141-be56c668e566\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e80bcc09d3a2f37ff69baa34fba8f223e11ce83224b820ba1cb4b6cc8df6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://796310e24dd456ebe7e3886fd47d09ecf942ee59
39fc71da9839c3d89b4a45e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d6dhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:49Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.338675 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5jjr2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5985f47fa75e948d85d4404b8a2df3ab6b1f73d7b074553dbf4e3894cad73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql9k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jjr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:49Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.346489 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:49 crc 
kubenswrapper[4672]: I0217 16:03:49.346555 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.346632 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.346692 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.346706 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:49Z","lastTransitionTime":"2026-02-17T16:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.363922 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:49Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.376437 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:49Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.387436 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:49Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.397000 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2g6fq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ede7ba7694732d9f2cedbd2457c3ab638e067106bc5a3c6415f1dd70c86a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9hsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2g6fq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:49Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.410728 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:49Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.425862 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1ec84d-96ba-4a95-a24b-c9142495d70d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86645990eea64dfe6b5933473b48df128ceaa3f4fe9da4f8307442da1b6ad808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7175
9b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n84l8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:49Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.443409 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd1617c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe7175715d27c26
35e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:49Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.448957 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.449027 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.449041 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.449063 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.449076 4672 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:49Z","lastTransitionTime":"2026-02-17T16:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.459025 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5
52c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:49Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.474711 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:03:49Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.487667 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vst6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d4bb48bf3275028f344bc73ea59e23721f24ba646e485b99181dce129096003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjxx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vst6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:49Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.507136 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a910a1-b5f0-4f34-9d76-6474c753e8e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb856f7806f65441a26295986d6ee3b1dee692087510547ea5680d7600a5981a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0495a1c586c33fb22e3cff8faaf427f9183f30459e1c4e23d840487fa21c7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbde5168a81766f8e318ce4ebfc055bce7e199abc47db20e3b1767e3fb49c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d969b7db6e8da6d14b08bf6e462b846aeaa463703d040d8dee87e847f4fca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a42ffc66b52e8db408035eb1e3fd03670217a0a1cabe42a972d0dfeb2308997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42df411df161c300edce4e00a51babea135433c68a188f56d438df2665f7a6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d283cb6739fe72163d4567bab0b63cc72697a661ee8ae5dbe706f0e378e02aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fc5400a2ee51af7b4d6668478faaddd1d5f33a379bb0da3784adee047d6c4a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:03:47Z\\\",\\\"message\\\":\\\" 5990 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:47.242623 5990 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:47.242683 5990 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 16:03:47.242749 5990 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:03:47.242820 5990 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 16:03:47.242879 5990 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 16:03:47.242917 5990 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0217 16:03:47.242964 5990 factory.go:656] Stopping watch factory\\\\nI0217 16:03:47.243012 5990 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 16:03:47.243055 5990 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 16:03:47.243081 5990 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 16:03:47.243098 5990 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d283cb6739fe72163d4567bab0b63cc72697a661ee8ae5dbe706f0e378e02aeb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:03:49Z\\\",\\\"message\\\":\\\"(0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:49.162356 6141 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:03:49.162401 6141 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:03:49.162367 6141 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:49.162621 6141 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:49.163165 6141 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:49.163285 6141 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:49.163769 6141 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 16:03:49.163806 6141 factory.go:656] Stopping watch factory\\\\nI0217 16:03:49.163822 6141 ovnkube.go:599] Stopped ovnkube\\\\nI0217 
16:03:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24931b90f0faa42a5320df38225b1fc1c4ba21ddb6b43c1ab84047c9178dfea4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac2
1e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4f9wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:49Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.552232 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.552281 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.552291 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.552350 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.552371 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:49Z","lastTransitionTime":"2026-02-17T16:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.655422 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.655477 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.655492 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.655536 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.655554 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:49Z","lastTransitionTime":"2026-02-17T16:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.758822 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.758884 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.758902 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.758930 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.758950 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:49Z","lastTransitionTime":"2026-02-17T16:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.863051 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.863128 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.863153 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.863182 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.863205 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:49Z","lastTransitionTime":"2026-02-17T16:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.915946 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 02:06:55.91407218 +0000 UTC Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.944470 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.944589 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:03:49 crc kubenswrapper[4672]: E0217 16:03:49.944731 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:03:49 crc kubenswrapper[4672]: E0217 16:03:49.944922 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.966745 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.966799 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.966813 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.966832 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:49 crc kubenswrapper[4672]: I0217 16:03:49.966848 4672 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:49Z","lastTransitionTime":"2026-02-17T16:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.059260 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qfvh7"] Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.060050 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qfvh7" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.066666 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.066855 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.069380 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.069435 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.069453 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.069484 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.069503 4672 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:50Z","lastTransitionTime":"2026-02-17T16:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.086034 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:50Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.110420 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:50Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.128802 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cd2c6-74a5-4567-a141-be56c668e566\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e80bcc09d3a2f37ff69baa34fba8f223e11ce83224b820ba1cb4b6cc8df6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://796310e24dd456ebe7e3886fd47d09ecf942ee59
39fc71da9839c3d89b4a45e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d6dhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:50Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.148377 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5jjr2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5985f47fa75e948d85d4404b8a2df3ab6b1f73d7b074553dbf4e3894cad73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql9k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jjr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:50Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.158434 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/4e418bd1-d1c0-4f75-8fb2-6c74780f648c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-qfvh7\" (UID: \"4e418bd1-d1c0-4f75-8fb2-6c74780f648c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qfvh7" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.158636 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4e418bd1-d1c0-4f75-8fb2-6c74780f648c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-qfvh7\" (UID: \"4e418bd1-d1c0-4f75-8fb2-6c74780f648c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qfvh7" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.158730 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwmgw\" (UniqueName: \"kubernetes.io/projected/4e418bd1-d1c0-4f75-8fb2-6c74780f648c-kube-api-access-kwmgw\") pod \"ovnkube-control-plane-749d76644c-qfvh7\" (UID: \"4e418bd1-d1c0-4f75-8fb2-6c74780f648c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qfvh7" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.158803 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4e418bd1-d1c0-4f75-8fb2-6c74780f648c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-qfvh7\" (UID: \"4e418bd1-d1c0-4f75-8fb2-6c74780f648c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qfvh7" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.168077 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:50Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.173357 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.173418 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.173438 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.173466 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.173486 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:50Z","lastTransitionTime":"2026-02-17T16:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.186948 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:50Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.208486 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:50Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.226275 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2g6fq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ede7ba7694732d9f2cedbd2457c3ab638e067106bc5a3c6415f1dd70c86a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9hsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2g6fq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:50Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.241607 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4f9wc_98a910a1-b5f0-4f34-9d76-6474c753e8e7/ovnkube-controller/1.log" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.243656 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:50Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.260169 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwmgw\" (UniqueName: \"kubernetes.io/projected/4e418bd1-d1c0-4f75-8fb2-6c74780f648c-kube-api-access-kwmgw\") pod \"ovnkube-control-plane-749d76644c-qfvh7\" (UID: \"4e418bd1-d1c0-4f75-8fb2-6c74780f648c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qfvh7" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.260253 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4e418bd1-d1c0-4f75-8fb2-6c74780f648c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-qfvh7\" (UID: \"4e418bd1-d1c0-4f75-8fb2-6c74780f648c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qfvh7" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.260313 4672 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4e418bd1-d1c0-4f75-8fb2-6c74780f648c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-qfvh7\" (UID: \"4e418bd1-d1c0-4f75-8fb2-6c74780f648c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qfvh7" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.260360 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4e418bd1-d1c0-4f75-8fb2-6c74780f648c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-qfvh7\" (UID: \"4e418bd1-d1c0-4f75-8fb2-6c74780f648c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qfvh7" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.261478 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4e418bd1-d1c0-4f75-8fb2-6c74780f648c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-qfvh7\" (UID: \"4e418bd1-d1c0-4f75-8fb2-6c74780f648c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qfvh7" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.262239 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4e418bd1-d1c0-4f75-8fb2-6c74780f648c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-qfvh7\" (UID: \"4e418bd1-d1c0-4f75-8fb2-6c74780f648c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qfvh7" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.266008 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4e418bd1-d1c0-4f75-8fb2-6c74780f648c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-qfvh7\" (UID: \"4e418bd1-d1c0-4f75-8fb2-6c74780f648c\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qfvh7" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.267359 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1ec84d-96ba-4a95-a24b-c9142495d70d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86645990eea64dfe6b5933473b48df128ceaa3f4fe9da4f8307442da1b6ad808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"st
ate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}
]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n84l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:50Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.276558 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.276635 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.276648 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.276666 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.276696 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:50Z","lastTransitionTime":"2026-02-17T16:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.281267 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwmgw\" (UniqueName: \"kubernetes.io/projected/4e418bd1-d1c0-4f75-8fb2-6c74780f648c-kube-api-access-kwmgw\") pod \"ovnkube-control-plane-749d76644c-qfvh7\" (UID: \"4e418bd1-d1c0-4f75-8fb2-6c74780f648c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qfvh7" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.293051 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd1617c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/
etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe7175715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\
\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb6
8e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:50Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.306620 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921
fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:50Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.318270 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:03:50Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.328968 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vst6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d4bb48bf3275028f344bc73ea59e23721f24ba646e485b99181dce129096003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjxx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vst6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:50Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.346216 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a910a1-b5f0-4f34-9d76-6474c753e8e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb856f7806f65441a26295986d6ee3b1dee692087510547ea5680d7600a5981a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0495a1c586c33fb22e3cff8faaf427f9183f30459e1c4e23d840487fa21c7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbde5168a81766f8e318ce4ebfc055bce7e199abc47db20e3b1767e3fb49c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d969b7db6e8da6d14b08bf6e462b846aeaa463703d040d8dee87e847f4fca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a42ffc66b52e8db408035eb1e3fd03670217a0a1cabe42a972d0dfeb2308997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42df411df161c300edce4e00a51babea135433c68a188f56d438df2665f7a6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d283cb6739fe72163d4567bab0b63cc72697a661ee8ae5dbe706f0e378e02aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fc5400a2ee51af7b4d6668478faaddd1d5f33a379bb0da3784adee047d6c4a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:03:47Z\\\",\\\"message\\\":\\\" 5990 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:47.242623 5990 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:47.242683 5990 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 16:03:47.242749 5990 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:03:47.242820 5990 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 16:03:47.242879 5990 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 16:03:47.242917 5990 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0217 16:03:47.242964 5990 factory.go:656] Stopping watch factory\\\\nI0217 16:03:47.243012 5990 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 16:03:47.243055 5990 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 16:03:47.243081 5990 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 16:03:47.243098 5990 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d283cb6739fe72163d4567bab0b63cc72697a661ee8ae5dbe706f0e378e02aeb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:03:49Z\\\",\\\"message\\\":\\\"(0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:49.162356 6141 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:03:49.162401 6141 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:03:49.162367 6141 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:49.162621 6141 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:49.163165 6141 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:49.163285 6141 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:49.163769 6141 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 16:03:49.163806 6141 factory.go:656] Stopping watch factory\\\\nI0217 16:03:49.163822 6141 ovnkube.go:599] Stopped ovnkube\\\\nI0217 
16:03:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24931b90f0faa42a5320df38225b1fc1c4ba21ddb6b43c1ab84047c9178dfea4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac2
1e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4f9wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:50Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.357034 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qfvh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e418bd1-d1c0-4f75-8fb2-6c74780f648c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwmgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwmgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qfvh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:50Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.379319 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.379368 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.379382 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.379401 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.379416 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:50Z","lastTransitionTime":"2026-02-17T16:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.388697 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qfvh7" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.486246 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.486282 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.486292 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.486326 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.486337 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:50Z","lastTransitionTime":"2026-02-17T16:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.588695 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.588771 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.588784 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.588800 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.588812 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:50Z","lastTransitionTime":"2026-02-17T16:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.691940 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.692059 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.692091 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.692140 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.692165 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:50Z","lastTransitionTime":"2026-02-17T16:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.795591 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.795662 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.795681 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.795705 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.795724 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:50Z","lastTransitionTime":"2026-02-17T16:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.899191 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.899257 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.899279 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.899308 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.899328 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:50Z","lastTransitionTime":"2026-02-17T16:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.916995 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 04:29:32.860245517 +0000 UTC Feb 17 16:03:50 crc kubenswrapper[4672]: I0217 16:03:50.944434 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:03:50 crc kubenswrapper[4672]: E0217 16:03:50.944658 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.001834 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.001896 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.001914 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.001973 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.001991 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:51Z","lastTransitionTime":"2026-02-17T16:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.105198 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.105257 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.105279 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.105309 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.105331 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:51Z","lastTransitionTime":"2026-02-17T16:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.208016 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.208065 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.208078 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.208098 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.208110 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:51Z","lastTransitionTime":"2026-02-17T16:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.251595 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qfvh7" event={"ID":"4e418bd1-d1c0-4f75-8fb2-6c74780f648c","Type":"ContainerStarted","Data":"77542e439619fe148e71b29dda8c7c1957550c206d27bd12aa640f991b7ab96c"} Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.251673 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qfvh7" event={"ID":"4e418bd1-d1c0-4f75-8fb2-6c74780f648c","Type":"ContainerStarted","Data":"4897d237d880d7e444b27a13aba3e1e2d3a7ab13092c77bc1978c08f9ce3e2a4"} Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.251695 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qfvh7" event={"ID":"4e418bd1-d1c0-4f75-8fb2-6c74780f648c","Type":"ContainerStarted","Data":"d5acb98c043b36f9b7ec1a54b7546a9f7deecaebb989fee484eca4063c07d50c"} Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.272969 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1ec84d-96ba-4a95-a24b-c9142495d70d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86645990eea64dfe6b5933473b48df128ceaa3f4fe9da4f8307442da1b6ad808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7175
9b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n84l8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:51Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.288384 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\
\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:51Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.304551 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\
\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] 
Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:51Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.310202 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.310242 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.310255 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.310269 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.310278 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:51Z","lastTransitionTime":"2026-02-17T16:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.316786 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:03:51Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.327574 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vst6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d4bb48bf3275028f344bc73ea59e23721f24ba646e485b99181dce129096003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjxx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vst6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:51Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.355366 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a910a1-b5f0-4f34-9d76-6474c753e8e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb856f7806f65441a26295986d6ee3b1dee692087510547ea5680d7600a5981a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0495a1c586c33fb22e3cff8faaf427f9183f30459e1c4e23d840487fa21c7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbde5168a81766f8e318ce4ebfc055bce7e199abc47db20e3b1767e3fb49c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d969b7db6e8da6d14b08bf6e462b846aeaa463703d040d8dee87e847f4fca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a42ffc66b52e8db408035eb1e3fd03670217a0a1cabe42a972d0dfeb2308997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42df411df161c300edce4e00a51babea135433c68a188f56d438df2665f7a6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d283cb6739fe72163d4567bab0b63cc72697a661ee8ae5dbe706f0e378e02aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fc5400a2ee51af7b4d6668478faaddd1d5f33a379bb0da3784adee047d6c4a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:03:47Z\\\",\\\"message\\\":\\\" 5990 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:47.242623 5990 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:47.242683 5990 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 16:03:47.242749 5990 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:03:47.242820 5990 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 16:03:47.242879 5990 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 16:03:47.242917 5990 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0217 16:03:47.242964 5990 factory.go:656] Stopping watch factory\\\\nI0217 16:03:47.243012 5990 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 16:03:47.243055 5990 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 16:03:47.243081 5990 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 16:03:47.243098 5990 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d283cb6739fe72163d4567bab0b63cc72697a661ee8ae5dbe706f0e378e02aeb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:03:49Z\\\",\\\"message\\\":\\\"(0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:49.162356 6141 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:03:49.162401 6141 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:03:49.162367 6141 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:49.162621 6141 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:49.163165 6141 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:49.163285 6141 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:49.163769 6141 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 16:03:49.163806 6141 factory.go:656] Stopping watch factory\\\\nI0217 16:03:49.163822 6141 ovnkube.go:599] Stopped ovnkube\\\\nI0217 
16:03:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24931b90f0faa42a5320df38225b1fc1c4ba21ddb6b43c1ab84047c9178dfea4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac2
1e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4f9wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:51Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.371230 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qfvh7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e418bd1-d1c0-4f75-8fb2-6c74780f648c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4897d237d880d7e444b27a13aba3e1e2d3a7ab13092c77bc1978c08f9ce3e2a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwmgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77542e439619fe148e71b29dda8c7c1957550
c206d27bd12aa640f991b7ab96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwmgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qfvh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:51Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.404662 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd1617c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe7175715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:51Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.412461 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.412533 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.412548 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.412572 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.412591 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:51Z","lastTransitionTime":"2026-02-17T16:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.421540 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cd2c6-74a5-4567-a141-be56c668e566\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e80bcc09d3a2f37ff69baa34fba8f223e11ce83224b820ba1cb4b6cc8df6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://796310e24dd456ebe7e3886fd47d09ecf942ee5939fc71da9839c3d89b4a45e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d6dhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:51Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.440829 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5jjr2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5985f47fa75e948d85d4404b8a2df3ab6b1f73d7b074553dbf4e3894cad73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql9k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jjr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:51Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.458172 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:51Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.479109 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:51Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.498391 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:51Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.512695 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2g6fq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ede7ba7694732d9f2cedbd2457c3ab638e067106bc5a3c6415f1dd70c86a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9hsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2g6fq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:51Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.515408 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.515449 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.515460 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.515478 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.515493 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:51Z","lastTransitionTime":"2026-02-17T16:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.527291 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:51Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.546487 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:51Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.575803 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-hqdz9"] Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.576684 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:03:51 crc kubenswrapper[4672]: E0217 16:03:51.576785 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.592656 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fde
e88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resource
s\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:51Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.605777 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:51Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.618833 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.618883 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.618895 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.618912 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.618922 4672 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:51Z","lastTransitionTime":"2026-02-17T16:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.619964 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cd2c6-74a5-4567-a141-be56c668e566\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e80bcc09d3a2f37ff69baa34fba8f223e11ce83224b820ba1cb4b6cc8df6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://796310e24dd456ebe7e3886fd47d09ecf942ee5939fc71da9839c3d89b4a45e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d6dhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-17T16:03:51Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.634914 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5jjr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5985f47fa75e948d85d4404b8a2df3ab6b1f73d7b074553dbf4e3894cad73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/h
ost/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql9k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jjr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-17T16:03:51Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.648141 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:51Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.663460 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:51Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.676463 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pbx6\" (UniqueName: \"kubernetes.io/projected/712be02c-2ccc-4989-aecb-653745bacb0d-kube-api-access-8pbx6\") pod \"network-metrics-daemon-hqdz9\" (UID: \"712be02c-2ccc-4989-aecb-653745bacb0d\") " pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.676559 4672 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/712be02c-2ccc-4989-aecb-653745bacb0d-metrics-certs\") pod \"network-metrics-daemon-hqdz9\" (UID: \"712be02c-2ccc-4989-aecb-653745bacb0d\") " pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.676630 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:51Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.688590 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2g6fq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ede7ba7694732d9f2cedbd2457c3ab638e067106bc5a3c6415f1dd70c86a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9hsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2g6fq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:51Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.701957 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:51Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.721063 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.721095 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.721105 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.721119 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.721130 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:51Z","lastTransitionTime":"2026-02-17T16:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.722264 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1ec84d-96ba-4a95-a24b-c9142495d70d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86645990eea64dfe6b5933473b48df128ceaa3f4fe9da4f8307442da1b6ad808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n84l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:51Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.740120 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a910a1-b5f0-4f34-9d76-6474c753e8e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb856f7806f65441a26295986d6ee3b1dee692087510547ea5680d7600a5981a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0495a1c586c33fb22e3cff8faaf427f9183f30459e1c4e23d840487fa21c7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbde5168a81766f8e318ce4ebfc055bce7e199abc47db20e3b1767e3fb49c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d969b7db6e8da6d14b08bf6e462b846aeaa463703d040d8dee87e847f4fca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a42ffc66b52e8db408035eb1e3fd03670217a0a1cabe42a972d0dfeb2308997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42df411df161c300edce4e00a51babea135433c68a188f56d438df2665f7a6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d283cb6739fe72163d4567bab0b63cc72697a661ee8ae5dbe706f0e378e02aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fc5400a2ee51af7b4d6668478faaddd1d5f33a379bb0da3784adee047d6c4a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:03:47Z\\\",\\\"message\\\":\\\" 5990 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:47.242623 5990 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:47.242683 5990 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 16:03:47.242749 5990 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:03:47.242820 5990 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 16:03:47.242879 5990 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 16:03:47.242917 5990 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0217 16:03:47.242964 5990 factory.go:656] Stopping watch factory\\\\nI0217 16:03:47.243012 5990 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 16:03:47.243055 5990 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 16:03:47.243081 5990 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 16:03:47.243098 5990 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d283cb6739fe72163d4567bab0b63cc72697a661ee8ae5dbe706f0e378e02aeb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:03:49Z\\\",\\\"message\\\":\\\"(0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:49.162356 6141 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:03:49.162401 6141 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:03:49.162367 6141 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:49.162621 6141 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:49.163165 6141 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:49.163285 6141 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:49.163769 6141 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 16:03:49.163806 6141 factory.go:656] Stopping watch factory\\\\nI0217 16:03:49.163822 6141 ovnkube.go:599] Stopped ovnkube\\\\nI0217 16:03:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\
\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24931b90f0faa42a5320df38225b1fc1c4ba21ddb6b43c1ab84047c9178dfea4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"moun
tPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4f9wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:51Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.753203 4672 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qfvh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e418bd1-d1c0-4f75-8fb2-6c74780f648c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4897d237d880d7e444b27a13aba3e1e2d3a7ab13092c77bc1978c08f9ce3e2a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-kwmgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77542e439619fe148e71b29dda8c7c1957550c206d27bd12aa640f991b7ab96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwmgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qfvh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:51Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.766008 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hqdz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"712be02c-2ccc-4989-aecb-653745bacb0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pbx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pbx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hqdz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:51Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:51 crc 
kubenswrapper[4672]: I0217 16:03:51.777639 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pbx6\" (UniqueName: \"kubernetes.io/projected/712be02c-2ccc-4989-aecb-653745bacb0d-kube-api-access-8pbx6\") pod \"network-metrics-daemon-hqdz9\" (UID: \"712be02c-2ccc-4989-aecb-653745bacb0d\") " pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.777684 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/712be02c-2ccc-4989-aecb-653745bacb0d-metrics-certs\") pod \"network-metrics-daemon-hqdz9\" (UID: \"712be02c-2ccc-4989-aecb-653745bacb0d\") " pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:03:51 crc kubenswrapper[4672]: E0217 16:03:51.777853 4672 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 16:03:51 crc kubenswrapper[4672]: E0217 16:03:51.777920 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/712be02c-2ccc-4989-aecb-653745bacb0d-metrics-certs podName:712be02c-2ccc-4989-aecb-653745bacb0d nodeName:}" failed. No retries permitted until 2026-02-17 16:03:52.277901255 +0000 UTC m=+41.031989987 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/712be02c-2ccc-4989-aecb-653745bacb0d-metrics-certs") pod "network-metrics-daemon-hqdz9" (UID: "712be02c-2ccc-4989-aecb-653745bacb0d") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.791792 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernete
s/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd1617c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe71
75715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6
a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\
\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:51Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.797445 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pbx6\" (UniqueName: \"kubernetes.io/projected/712be02c-2ccc-4989-aecb-653745bacb0d-kube-api-access-8pbx6\") pod \"network-metrics-daemon-hqdz9\" (UID: \"712be02c-2ccc-4989-aecb-653745bacb0d\") " pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.805791 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921
fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:51Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.820971 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:03:51Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.824178 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.824311 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.824401 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.824483 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.824597 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:51Z","lastTransitionTime":"2026-02-17T16:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.836324 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vst6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d4bb48bf3275028f344bc73ea59e23721f24ba646e485b99181dce129096003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjxx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vst6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:51Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.917788 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 06:54:17.155024704 +0000 UTC Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.926909 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.926976 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.926995 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.927021 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.927043 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:51Z","lastTransitionTime":"2026-02-17T16:03:51Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.944283 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.944298 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:03:51 crc kubenswrapper[4672]: E0217 16:03:51.944485 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:03:51 crc kubenswrapper[4672]: E0217 16:03:51.944733 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.965100 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5jjr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5985f47fa75e948d85d4404b8a2df3ab6b1f73d7b074553dbf4e3894cad73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release
\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql9k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jjr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:51Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:51 crc kubenswrapper[4672]: I0217 16:03:51.989724 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:51Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.010019 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:52Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.025476 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cd2c6-74a5-4567-a141-be56c668e566\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e80bcc09d3a2f37ff69baa34fba8f223e11ce83224b820ba1cb4b6cc8df6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://796310e24dd456ebe7e3886fd47d09ecf942ee59
39fc71da9839c3d89b4a45e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d6dhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:52Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.030138 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.030179 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.030191 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:52 crc 
kubenswrapper[4672]: I0217 16:03:52.030209 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.030221 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:52Z","lastTransitionTime":"2026-02-17T16:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.036873 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2g6fq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ede7ba7694732d9f2cedbd2457c3ab638e067106bc5a3c6415f1dd70c86a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9hsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2g6fq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:52Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.052950 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:52Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.069198 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:52Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.084433 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:52Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.103632 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:52Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.120232 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1ec84d-96ba-4a95-a24b-c9142495d70d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86645990eea64dfe6b5933473b48df128ceaa3f4fe9da4f8307442da1b6ad808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n84l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:52Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.132165 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.132216 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.132234 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.132260 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.132279 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:52Z","lastTransitionTime":"2026-02-17T16:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.135989 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:52Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.147642 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vst6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d4bb48bf3275028f344bc73ea59e23721f24ba646e485b99181dce129096003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjxx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vst6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:52Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.175730 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a910a1-b5f0-4f34-9d76-6474c753e8e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb856f7806f65441a26295986d6ee3b1dee692087510547ea5680d7600a5981a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0495a1c586c33fb22e3cff8faaf427f9183f30459e1c4e23d840487fa21c7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbde5168a81766f8e318ce4ebfc055bce7e199abc47db20e3b1767e3fb49c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d969b7db6e8da6d14b08bf6e462b846aeaa463703d040d8dee87e847f4fca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a42ffc66b52e8db408035eb1e3fd03670217a0a1cabe42a972d0dfeb2308997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42df411df161c300edce4e00a51babea135433c68a188f56d438df2665f7a6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d283cb6739fe72163d4567bab0b63cc72697a661ee8ae5dbe706f0e378e02aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fc5400a2ee51af7b4d6668478faaddd1d5f33a379bb0da3784adee047d6c4a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:03:47Z\\\",\\\"message\\\":\\\" 5990 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:47.242623 5990 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:47.242683 5990 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 16:03:47.242749 5990 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:03:47.242820 5990 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 16:03:47.242879 5990 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 16:03:47.242917 5990 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0217 16:03:47.242964 5990 factory.go:656] Stopping watch factory\\\\nI0217 16:03:47.243012 5990 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 16:03:47.243055 5990 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 16:03:47.243081 5990 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 16:03:47.243098 5990 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d283cb6739fe72163d4567bab0b63cc72697a661ee8ae5dbe706f0e378e02aeb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:03:49Z\\\",\\\"message\\\":\\\"(0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:49.162356 6141 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:03:49.162401 6141 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:03:49.162367 6141 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:49.162621 6141 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:49.163165 6141 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:49.163285 6141 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:49.163769 6141 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 16:03:49.163806 6141 factory.go:656] Stopping watch factory\\\\nI0217 16:03:49.163822 6141 ovnkube.go:599] Stopped ovnkube\\\\nI0217 16:03:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\
\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24931b90f0faa42a5320df38225b1fc1c4ba21ddb6b43c1ab84047c9178dfea4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"moun
tPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4f9wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:52Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.189933 4672 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qfvh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e418bd1-d1c0-4f75-8fb2-6c74780f648c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4897d237d880d7e444b27a13aba3e1e2d3a7ab13092c77bc1978c08f9ce3e2a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-kwmgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77542e439619fe148e71b29dda8c7c1957550c206d27bd12aa640f991b7ab96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwmgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qfvh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:52Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.202580 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hqdz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"712be02c-2ccc-4989-aecb-653745bacb0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pbx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pbx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hqdz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:52Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:52 crc 
kubenswrapper[4672]: I0217 16:03:52.225640 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd1617c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe7175715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:52Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.234854 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.235081 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.235331 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.235411 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.235478 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:52Z","lastTransitionTime":"2026-02-17T16:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.241546 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:52Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.283016 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/712be02c-2ccc-4989-aecb-653745bacb0d-metrics-certs\") pod \"network-metrics-daemon-hqdz9\" (UID: \"712be02c-2ccc-4989-aecb-653745bacb0d\") " pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:03:52 crc kubenswrapper[4672]: E0217 16:03:52.283291 4672 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 16:03:52 crc kubenswrapper[4672]: E0217 16:03:52.283402 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/712be02c-2ccc-4989-aecb-653745bacb0d-metrics-certs podName:712be02c-2ccc-4989-aecb-653745bacb0d nodeName:}" failed. 
No retries permitted until 2026-02-17 16:03:53.283372219 +0000 UTC m=+42.037460981 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/712be02c-2ccc-4989-aecb-653745bacb0d-metrics-certs") pod "network-metrics-daemon-hqdz9" (UID: "712be02c-2ccc-4989-aecb-653745bacb0d") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.337887 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.337926 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.337937 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.337951 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.337962 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:52Z","lastTransitionTime":"2026-02-17T16:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.440987 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.441016 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.441026 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.441037 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.441048 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:52Z","lastTransitionTime":"2026-02-17T16:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.544486 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.544546 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.544620 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.544644 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.544661 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:52Z","lastTransitionTime":"2026-02-17T16:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.647862 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.647906 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.647915 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.647929 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.647941 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:52Z","lastTransitionTime":"2026-02-17T16:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.750564 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.750623 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.750644 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.750670 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.750690 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:52Z","lastTransitionTime":"2026-02-17T16:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.853758 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.853795 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.853803 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.853818 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.853829 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:52Z","lastTransitionTime":"2026-02-17T16:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.919042 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 09:06:37.061255293 +0000 UTC Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.944592 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:03:52 crc kubenswrapper[4672]: E0217 16:03:52.944828 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.956274 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.956320 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.956337 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.956360 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:52 crc kubenswrapper[4672]: I0217 16:03:52.956377 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:52Z","lastTransitionTime":"2026-02-17T16:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.064208 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.064269 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.064286 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.064311 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.064329 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:53Z","lastTransitionTime":"2026-02-17T16:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.167870 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.167938 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.167955 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.167980 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.167997 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:53Z","lastTransitionTime":"2026-02-17T16:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.270445 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.270580 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.270614 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.270644 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.270668 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:53Z","lastTransitionTime":"2026-02-17T16:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.294455 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/712be02c-2ccc-4989-aecb-653745bacb0d-metrics-certs\") pod \"network-metrics-daemon-hqdz9\" (UID: \"712be02c-2ccc-4989-aecb-653745bacb0d\") " pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:03:53 crc kubenswrapper[4672]: E0217 16:03:53.295181 4672 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 16:03:53 crc kubenswrapper[4672]: E0217 16:03:53.295292 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/712be02c-2ccc-4989-aecb-653745bacb0d-metrics-certs podName:712be02c-2ccc-4989-aecb-653745bacb0d nodeName:}" failed. No retries permitted until 2026-02-17 16:03:55.295267369 +0000 UTC m=+44.049356141 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/712be02c-2ccc-4989-aecb-653745bacb0d-metrics-certs") pod "network-metrics-daemon-hqdz9" (UID: "712be02c-2ccc-4989-aecb-653745bacb0d") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.374623 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.374691 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.374715 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.374750 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.374776 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:53Z","lastTransitionTime":"2026-02-17T16:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.477578 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.477656 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.477676 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.477700 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.477717 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:53Z","lastTransitionTime":"2026-02-17T16:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.580865 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.580931 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.580953 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.580981 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.581003 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:53Z","lastTransitionTime":"2026-02-17T16:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.684427 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.684599 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.684631 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.684659 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.684681 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:53Z","lastTransitionTime":"2026-02-17T16:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.786871 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.787132 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.787145 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.787159 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.787167 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:53Z","lastTransitionTime":"2026-02-17T16:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.890542 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.890608 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.890626 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.890653 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.890671 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:53Z","lastTransitionTime":"2026-02-17T16:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.920221 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 13:37:32.343152418 +0000 UTC Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.944673 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.944731 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.944694 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:53 crc kubenswrapper[4672]: E0217 16:03:53.944903 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d" Feb 17 16:03:53 crc kubenswrapper[4672]: E0217 16:03:53.944973 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:03:53 crc kubenswrapper[4672]: E0217 16:03:53.945053 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.993361 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.993388 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.993397 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.993410 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:53 crc kubenswrapper[4672]: I0217 16:03:53.993419 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:53Z","lastTransitionTime":"2026-02-17T16:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.096355 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.096412 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.096430 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.096452 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.096468 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:54Z","lastTransitionTime":"2026-02-17T16:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.199213 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.199285 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.199327 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.199347 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.199360 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:54Z","lastTransitionTime":"2026-02-17T16:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.302119 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.302181 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.302199 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.302253 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.302272 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:54Z","lastTransitionTime":"2026-02-17T16:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.404953 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.405008 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.405025 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.405047 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.405066 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:54Z","lastTransitionTime":"2026-02-17T16:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.507530 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.507580 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.507591 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.507612 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.507624 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:54Z","lastTransitionTime":"2026-02-17T16:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.610787 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.610851 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.610867 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.610891 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.610907 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:54Z","lastTransitionTime":"2026-02-17T16:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.713055 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.713098 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.713109 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.713130 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.713141 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:54Z","lastTransitionTime":"2026-02-17T16:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.816573 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.816627 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.816643 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.816661 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.816676 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:54Z","lastTransitionTime":"2026-02-17T16:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.918850 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.918893 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.918904 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.918918 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.918929 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:54Z","lastTransitionTime":"2026-02-17T16:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.921232 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 22:18:45.939225149 +0000 UTC Feb 17 16:03:54 crc kubenswrapper[4672]: I0217 16:03:54.944647 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:03:54 crc kubenswrapper[4672]: E0217 16:03:54.944790 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.021170 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.021209 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.021221 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.021242 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.021254 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:55Z","lastTransitionTime":"2026-02-17T16:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.124370 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.124424 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.124440 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.124459 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.124472 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:55Z","lastTransitionTime":"2026-02-17T16:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.227912 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.228190 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.228278 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.228373 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.228462 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:55Z","lastTransitionTime":"2026-02-17T16:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.313824 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/712be02c-2ccc-4989-aecb-653745bacb0d-metrics-certs\") pod \"network-metrics-daemon-hqdz9\" (UID: \"712be02c-2ccc-4989-aecb-653745bacb0d\") " pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:03:55 crc kubenswrapper[4672]: E0217 16:03:55.314006 4672 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 16:03:55 crc kubenswrapper[4672]: E0217 16:03:55.314072 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/712be02c-2ccc-4989-aecb-653745bacb0d-metrics-certs podName:712be02c-2ccc-4989-aecb-653745bacb0d nodeName:}" failed. No retries permitted until 2026-02-17 16:03:59.314053196 +0000 UTC m=+48.068141928 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/712be02c-2ccc-4989-aecb-653745bacb0d-metrics-certs") pod "network-metrics-daemon-hqdz9" (UID: "712be02c-2ccc-4989-aecb-653745bacb0d") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.331431 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.331481 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.331489 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.331502 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.331535 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:55Z","lastTransitionTime":"2026-02-17T16:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.434235 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.434276 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.434284 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.434298 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.434307 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:55Z","lastTransitionTime":"2026-02-17T16:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.537747 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.537795 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.537808 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.537826 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.537841 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:55Z","lastTransitionTime":"2026-02-17T16:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.640706 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.640741 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.640752 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.640769 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.640798 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:55Z","lastTransitionTime":"2026-02-17T16:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.744505 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.744594 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.744611 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.744634 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.744651 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:55Z","lastTransitionTime":"2026-02-17T16:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.848299 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.848350 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.848367 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.848389 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.848405 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:55Z","lastTransitionTime":"2026-02-17T16:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.922327 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 04:30:29.135866365 +0000 UTC Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.944732 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.945135 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:55 crc kubenswrapper[4672]: E0217 16:03:55.945369 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.945397 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:03:55 crc kubenswrapper[4672]: E0217 16:03:55.945789 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:03:55 crc kubenswrapper[4672]: E0217 16:03:55.946082 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.952097 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.952145 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.952161 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.952184 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:55 crc kubenswrapper[4672]: I0217 16:03:55.952202 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:55Z","lastTransitionTime":"2026-02-17T16:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.054203 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.054238 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.054247 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.054277 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.054288 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:56Z","lastTransitionTime":"2026-02-17T16:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.157675 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.157721 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.157736 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.157760 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.157777 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:56Z","lastTransitionTime":"2026-02-17T16:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.260635 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.260686 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.260698 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.260717 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.260729 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:56Z","lastTransitionTime":"2026-02-17T16:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.363203 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.363247 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.363260 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.363276 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.363288 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:56Z","lastTransitionTime":"2026-02-17T16:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.466409 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.466456 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.466464 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.466479 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.466489 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:56Z","lastTransitionTime":"2026-02-17T16:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.569540 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.569577 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.569586 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.569600 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.569609 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:56Z","lastTransitionTime":"2026-02-17T16:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.671887 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.672204 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.672350 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.672561 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.672717 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:56Z","lastTransitionTime":"2026-02-17T16:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.775500 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.775617 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.775636 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.775666 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.775688 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:56Z","lastTransitionTime":"2026-02-17T16:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.878864 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.878930 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.878955 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.878980 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.878998 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:56Z","lastTransitionTime":"2026-02-17T16:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.923223 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 12:22:13.93485516 +0000 UTC Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.944902 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:03:56 crc kubenswrapper[4672]: E0217 16:03:56.945044 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.982579 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.982693 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.982714 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.982743 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:56 crc kubenswrapper[4672]: I0217 16:03:56.982774 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:56Z","lastTransitionTime":"2026-02-17T16:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.085948 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.086307 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.086556 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.086788 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.087001 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:57Z","lastTransitionTime":"2026-02-17T16:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.190302 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.190355 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.190373 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.190396 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.190414 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:57Z","lastTransitionTime":"2026-02-17T16:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.292829 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.292891 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.292907 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.292933 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.292952 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:57Z","lastTransitionTime":"2026-02-17T16:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.396316 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.396756 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.396940 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.397153 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.397308 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:57Z","lastTransitionTime":"2026-02-17T16:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.500329 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.500400 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.500418 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.500443 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.500463 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:57Z","lastTransitionTime":"2026-02-17T16:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.603552 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.603584 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.603594 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.603610 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.603621 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:57Z","lastTransitionTime":"2026-02-17T16:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.706102 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.706161 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.706180 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.706205 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.706222 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:57Z","lastTransitionTime":"2026-02-17T16:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.808924 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.808968 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.808985 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.809001 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.809015 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:57Z","lastTransitionTime":"2026-02-17T16:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.911125 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.911186 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.911207 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.911235 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.911256 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:57Z","lastTransitionTime":"2026-02-17T16:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.923940 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 23:16:52.948935885 +0000 UTC Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.944654 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.944871 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:03:57 crc kubenswrapper[4672]: I0217 16:03:57.944993 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:03:57 crc kubenswrapper[4672]: E0217 16:03:57.945067 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d" Feb 17 16:03:57 crc kubenswrapper[4672]: E0217 16:03:57.945233 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:03:57 crc kubenswrapper[4672]: E0217 16:03:57.944871 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.014341 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.014403 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.014416 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.014433 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.014445 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:58Z","lastTransitionTime":"2026-02-17T16:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.117772 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.117826 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.117841 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.117861 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.117875 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:58Z","lastTransitionTime":"2026-02-17T16:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.220721 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.220810 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.220836 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.220869 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.220893 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:58Z","lastTransitionTime":"2026-02-17T16:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.323498 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.323548 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.323557 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.323571 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.323580 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:58Z","lastTransitionTime":"2026-02-17T16:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.426848 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.427239 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.427251 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.427268 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.427280 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:58Z","lastTransitionTime":"2026-02-17T16:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.530209 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.530251 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.530264 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.530281 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.530293 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:58Z","lastTransitionTime":"2026-02-17T16:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.633138 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.633230 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.633248 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.633270 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.633290 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:58Z","lastTransitionTime":"2026-02-17T16:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.736550 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.736604 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.736622 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.736646 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.736662 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:58Z","lastTransitionTime":"2026-02-17T16:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.752326 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.752369 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.752381 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.752398 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.752410 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:58Z","lastTransitionTime":"2026-02-17T16:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:58 crc kubenswrapper[4672]: E0217 16:03:58.769215 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"793c4034-4ed2-49c9-abb4-00e3faa205d0\\\",\\\"systemUUID\\\":\\\"561271bd-298c-447a-8ba6-beca2786bcfb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:58Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.773237 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.773295 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.773312 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.773335 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.773352 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:58Z","lastTransitionTime":"2026-02-17T16:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:58 crc kubenswrapper[4672]: E0217 16:03:58.789748 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"793c4034-4ed2-49c9-abb4-00e3faa205d0\\\",\\\"systemUUID\\\":\\\"561271bd-298c-447a-8ba6-beca2786bcfb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:58Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.795548 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.795658 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.795685 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.795715 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.795734 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:58Z","lastTransitionTime":"2026-02-17T16:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:58 crc kubenswrapper[4672]: E0217 16:03:58.813954 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"793c4034-4ed2-49c9-abb4-00e3faa205d0\\\",\\\"systemUUID\\\":\\\"561271bd-298c-447a-8ba6-beca2786bcfb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:58Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.819874 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.819934 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.819953 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.819973 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.819992 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:58Z","lastTransitionTime":"2026-02-17T16:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:58 crc kubenswrapper[4672]: E0217 16:03:58.841586 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"793c4034-4ed2-49c9-abb4-00e3faa205d0\\\",\\\"systemUUID\\\":\\\"561271bd-298c-447a-8ba6-beca2786bcfb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:58Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.846562 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.846642 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.846667 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.846696 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.846738 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:58Z","lastTransitionTime":"2026-02-17T16:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:58 crc kubenswrapper[4672]: E0217 16:03:58.861001 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:03:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"793c4034-4ed2-49c9-abb4-00e3faa205d0\\\",\\\"systemUUID\\\":\\\"561271bd-298c-447a-8ba6-beca2786bcfb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:58Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:58 crc kubenswrapper[4672]: E0217 16:03:58.861348 4672 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.865356 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.865487 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.865692 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.865805 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.865983 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:58Z","lastTransitionTime":"2026-02-17T16:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.924888 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 08:55:44.975124447 +0000 UTC Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.944125 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:03:58 crc kubenswrapper[4672]: E0217 16:03:58.944242 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.968774 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.968818 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.968830 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.968851 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:58 crc kubenswrapper[4672]: I0217 16:03:58.968866 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:58Z","lastTransitionTime":"2026-02-17T16:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.070776 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.071057 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.071131 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.071261 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.071344 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:59Z","lastTransitionTime":"2026-02-17T16:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.173832 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.174053 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.174123 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.174193 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.174271 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:59Z","lastTransitionTime":"2026-02-17T16:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.225956 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.232053 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.240468 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:59Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.255651 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1ec84d-96ba-4a95-a24b-c9142495d70d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86645990eea64dfe6b5933473b48df128ceaa3f4fe9da4f83
07442da1b6ad808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"
/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n84l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:59Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.277224 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.277283 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.277296 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.277316 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.277335 4672 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:59Z","lastTransitionTime":"2026-02-17T16:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.281710 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd1617c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe7175715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad3
4e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\
\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:59Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.310915 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921
fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:59Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.335878 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:03:59Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.352101 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vst6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d4bb48bf3275028f344bc73ea59e23721f24ba646e485b99181dce129096003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjxx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vst6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:59Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.358674 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/712be02c-2ccc-4989-aecb-653745bacb0d-metrics-certs\") pod \"network-metrics-daemon-hqdz9\" (UID: \"712be02c-2ccc-4989-aecb-653745bacb0d\") " pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:03:59 crc kubenswrapper[4672]: E0217 16:03:59.358851 4672 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 16:03:59 crc kubenswrapper[4672]: E0217 16:03:59.358928 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/712be02c-2ccc-4989-aecb-653745bacb0d-metrics-certs podName:712be02c-2ccc-4989-aecb-653745bacb0d nodeName:}" failed. No retries permitted until 2026-02-17 16:04:07.358910133 +0000 UTC m=+56.112998875 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/712be02c-2ccc-4989-aecb-653745bacb0d-metrics-certs") pod "network-metrics-daemon-hqdz9" (UID: "712be02c-2ccc-4989-aecb-653745bacb0d") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.370337 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a910a1-b5f0-4f34-9d76-6474c753e8e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb856f7806f65441a26295986d6ee3b1dee692087510547ea5680d7600a5981a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0495a1c586c33fb22e3cff8faaf427f9183f30459e1c4e23d840487fa21c7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbde5168a81766f8e318ce4ebfc055bce7e199abc47db20e3b1767e3fb49c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d969b7db6e8da6d14b08bf6e462b846aeaa463703d040d8dee87e847f4fca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a42ffc66b52e8db408035eb1e3fd03670217a0a1cabe42a972d0dfeb2308997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42df411df161c300edce4e00a51babea135433c68a188f56d438df2665f7a6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d283cb6739fe72163d4567bab0b63cc72697a661ee8ae5dbe706f0e378e02aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fc5400a2ee51af7b4d6668478faaddd1d5f33a379bb0da3784adee047d6c4a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:03:47Z\\\",\\\"message\\\":\\\" 5990 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:47.242623 5990 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:47.242683 5990 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 16:03:47.242749 5990 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:03:47.242820 5990 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 16:03:47.242879 5990 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 16:03:47.242917 5990 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0217 16:03:47.242964 5990 factory.go:656] Stopping watch factory\\\\nI0217 16:03:47.243012 5990 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 16:03:47.243055 5990 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 16:03:47.243081 5990 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 16:03:47.243098 5990 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d283cb6739fe72163d4567bab0b63cc72697a661ee8ae5dbe706f0e378e02aeb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:03:49Z\\\",\\\"message\\\":\\\"(0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:49.162356 6141 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:03:49.162401 6141 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:03:49.162367 6141 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:49.162621 6141 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:49.163165 6141 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:49.163285 6141 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:49.163769 6141 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 16:03:49.163806 6141 factory.go:656] Stopping watch factory\\\\nI0217 16:03:49.163822 6141 ovnkube.go:599] Stopped ovnkube\\\\nI0217 16:03:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\
\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24931b90f0faa42a5320df38225b1fc1c4ba21ddb6b43c1ab84047c9178dfea4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"moun
tPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4f9wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:59Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.380360 4672 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.380491 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.380584 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.380666 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.380733 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:59Z","lastTransitionTime":"2026-02-17T16:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.381009 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qfvh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e418bd1-d1c0-4f75-8fb2-6c74780f648c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4897d237d880d7e444b27a13aba3e1e2d3a7ab13092c77bc1978c08f9ce3e2a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwmgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77542e439619fe148e71b29dda8c7c1957550c206d27bd12aa640f991b7ab96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwmgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qfvh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:59Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.392289 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hqdz9" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"712be02c-2ccc-4989-aecb-653745bacb0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pbx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pbx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hqdz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:59Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:59 crc 
kubenswrapper[4672]: I0217 16:03:59.404193 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:59Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.418732 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:59Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.430957 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cd2c6-74a5-4567-a141-be56c668e566\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e80bcc09d3a2f37ff69baa34fba8f223e11ce83224b820ba1cb4b6cc8df6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://796310e24dd456ebe7e3886fd47d09ecf942ee59
39fc71da9839c3d89b4a45e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d6dhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:59Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.448043 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5jjr2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5985f47fa75e948d85d4404b8a2df3ab6b1f73d7b074553dbf4e3894cad73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql9k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jjr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:59Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.462706 4672 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:59Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.474760 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:59Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.483813 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.483862 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.483882 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:59 crc 
kubenswrapper[4672]: I0217 16:03:59.483903 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.483920 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:59Z","lastTransitionTime":"2026-02-17T16:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.488259 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:59Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.500159 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2g6fq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ede7ba7694732d9f2cedbd2457c3ab638e067106bc5a3c6415f1dd70c86a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9hsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2g6fq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:03:59Z is after 2025-08-24T17:21:41Z" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.586820 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.586881 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.586934 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.586956 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.586975 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:59Z","lastTransitionTime":"2026-02-17T16:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.689378 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.689466 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.689490 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.689572 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.689598 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:59Z","lastTransitionTime":"2026-02-17T16:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.792683 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.792883 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.792921 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.792953 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.792980 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:59Z","lastTransitionTime":"2026-02-17T16:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.895651 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.895717 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.895740 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.895761 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.895777 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:59Z","lastTransitionTime":"2026-02-17T16:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.926056 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 17:33:28.071952749 +0000 UTC Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.944649 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:03:59 crc kubenswrapper[4672]: E0217 16:03:59.945042 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.944789 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:03:59 crc kubenswrapper[4672]: E0217 16:03:59.945737 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.944650 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:03:59 crc kubenswrapper[4672]: E0217 16:03:59.946253 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.998237 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.998268 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.998276 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.998290 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:03:59 crc kubenswrapper[4672]: I0217 16:03:59.998303 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:03:59Z","lastTransitionTime":"2026-02-17T16:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.100854 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.101344 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.101642 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.101826 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.102020 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:00Z","lastTransitionTime":"2026-02-17T16:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.205955 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.206008 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.206031 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.206054 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.206071 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:00Z","lastTransitionTime":"2026-02-17T16:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.309304 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.309389 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.309406 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.309429 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.309446 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:00Z","lastTransitionTime":"2026-02-17T16:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.411709 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.412047 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.412254 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.412461 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.412692 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:00Z","lastTransitionTime":"2026-02-17T16:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.515736 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.516085 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.516266 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.516404 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.516544 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:00Z","lastTransitionTime":"2026-02-17T16:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.620394 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.620438 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.620450 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.620469 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.620482 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:00Z","lastTransitionTime":"2026-02-17T16:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.723655 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.723942 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.724066 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.724198 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.724366 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:00Z","lastTransitionTime":"2026-02-17T16:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.827994 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.828075 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.828092 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.828115 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.828131 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:00Z","lastTransitionTime":"2026-02-17T16:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.927119 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 09:53:45.939440537 +0000 UTC Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.932474 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.932575 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.932595 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.932621 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.932638 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:00Z","lastTransitionTime":"2026-02-17T16:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:00 crc kubenswrapper[4672]: I0217 16:04:00.943895 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:00 crc kubenswrapper[4672]: E0217 16:04:00.944079 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.036231 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.036302 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.036325 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.036354 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.036379 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:01Z","lastTransitionTime":"2026-02-17T16:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.139798 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.139887 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.139914 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.139948 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.139973 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:01Z","lastTransitionTime":"2026-02-17T16:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.242831 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.242903 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.242923 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.242947 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.242963 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:01Z","lastTransitionTime":"2026-02-17T16:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.351824 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.351891 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.351911 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.351939 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.351965 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:01Z","lastTransitionTime":"2026-02-17T16:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.455155 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.455471 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.455590 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.455682 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.455780 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:01Z","lastTransitionTime":"2026-02-17T16:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.558319 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.558725 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.558920 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.559098 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.559269 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:01Z","lastTransitionTime":"2026-02-17T16:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.662310 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.662362 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.662373 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.662403 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.662414 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:01Z","lastTransitionTime":"2026-02-17T16:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.765767 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.765917 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.765945 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.765976 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.766001 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:01Z","lastTransitionTime":"2026-02-17T16:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.869429 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.869826 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.869980 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.870179 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.870333 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:01Z","lastTransitionTime":"2026-02-17T16:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.927738 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 09:38:17.944459185 +0000 UTC Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.944406 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:01 crc kubenswrapper[4672]: E0217 16:04:01.944637 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.944678 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.944733 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:01 crc kubenswrapper[4672]: E0217 16:04:01.945257 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d" Feb 17 16:04:01 crc kubenswrapper[4672]: E0217 16:04:01.945359 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.945819 4672 scope.go:117] "RemoveContainer" containerID="d283cb6739fe72163d4567bab0b63cc72697a661ee8ae5dbe706f0e378e02aeb" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.966854 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:01Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.972871 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.972911 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.972922 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.972938 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.972949 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:01Z","lastTransitionTime":"2026-02-17T16:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:01 crc kubenswrapper[4672]: I0217 16:04:01.987014 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:01Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.006890 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.025947 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2g6fq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ede7ba7694732d9f2cedbd2457c3ab638e067106bc5a3c6415f1dd70c86a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9hsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2g6fq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.043914 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"786083a3-395c-4659-b58a-a5517a9aa843\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://514d00f8857c64df263abe974d69503c1ac4ea7d4c78f57e5826d58208bb79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9d911c777bbcce655fc6993bdd85da5df16a4402e54b628b839c796f7c784d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7b3941c4c057228fded474417203e3aeb95fcc1df8094bde7b35fd223eec22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\
\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://807c21eba860dd45d3dcd3a39ced8648f94e884925efe110065621238ad2e6f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://807c21eba860dd45d3dcd3a39ced8648f94e884925efe110065621238ad2e6f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.064622 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.075976 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.076023 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.076033 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.076053 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.076066 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:02Z","lastTransitionTime":"2026-02-17T16:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.091195 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1ec84d-96ba-4a95-a24b-c9142495d70d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86645990eea64dfe6b5933473b48df128ceaa3f4fe9da4f8307442da1b6ad808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n84l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.116719 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.124009 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a910a1-b5f0-4f34-9d76-6474c753e8e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb856f7806f65441a26295986d6ee3b1dee692087510547ea5680d7600a5981a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0495a1c586c33fb22e3cff8faaf427f9183f30459e1c4e23d840487fa21c7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbde5168a81766f8e318ce4ebfc055bce7e199abc47db20e3b1767e3fb49c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d969b7db6e8da6d14b08bf6e462b846aeaa463703d040d8dee87e847f4fca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a42ffc66b52e8db408035eb1e3fd03670217a0a1cabe42a972d0dfeb2308997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42df411df161c300edce4e00a51babea135433c68a188f56d438df2665f7a6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d283cb6739fe72163d4567bab0b63cc72697a661ee8ae5dbe706f0e378e02aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fc5400a2ee51af7b4d6668478faaddd1d5f33a379bb0da3784adee047d6c4a6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:03:47Z\\\",\\\"message\\\":\\\" 5990 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:47.242623 5990 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:47.242683 5990 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 16:03:47.242749 5990 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:03:47.242820 5990 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 16:03:47.242879 5990 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 16:03:47.242917 5990 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0217 16:03:47.242964 5990 factory.go:656] Stopping watch factory\\\\nI0217 16:03:47.243012 5990 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 16:03:47.243055 5990 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 16:03:47.243081 5990 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 16:03:47.243098 5990 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d283cb6739fe72163d4567bab0b63cc72697a661ee8ae5dbe706f0e378e02aeb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:03:49Z\\\",\\\"message\\\":\\\"(0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:49.162356 6141 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:03:49.162401 6141 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:03:49.162367 6141 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:49.162621 6141 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:49.163165 6141 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:49.163285 6141 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:49.163769 6141 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 16:03:49.163806 6141 factory.go:656] Stopping watch factory\\\\nI0217 16:03:49.163822 6141 ovnkube.go:599] Stopped ovnkube\\\\nI0217 16:03:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\
\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24931b90f0faa42a5320df38225b1fc1c4ba21ddb6b43c1ab84047c9178dfea4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"moun
tPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4f9wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.138984 4672 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qfvh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e418bd1-d1c0-4f75-8fb2-6c74780f648c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4897d237d880d7e444b27a13aba3e1e2d3a7ab13092c77bc1978c08f9ce3e2a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-kwmgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77542e439619fe148e71b29dda8c7c1957550c206d27bd12aa640f991b7ab96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwmgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qfvh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.156038 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hqdz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"712be02c-2ccc-4989-aecb-653745bacb0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pbx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pbx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hqdz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc 
kubenswrapper[4672]: I0217 16:04:02.178629 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.178708 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.178732 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.178766 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.178790 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:02Z","lastTransitionTime":"2026-02-17T16:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.183970 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd1617c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe7175715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.202263 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921
fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.217634 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.227742 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vst6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d4bb48bf3275028f344bc73ea59e23721f24ba646e485b99181dce129096003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjxx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vst6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.251269 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.270127 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.281369 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.281431 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.281450 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.281478 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.281498 4672 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:02Z","lastTransitionTime":"2026-02-17T16:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.282188 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cd2c6-74a5-4567-a141-be56c668e566\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e80bcc09d3a2f37ff69baa34fba8f223e11ce83224b820ba1cb4b6cc8df6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://796310e24dd456ebe7e3886fd47d09ecf942ee5939fc71da9839c3d89b4a45e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d6dhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.297167 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5jjr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5985f47fa75e948d85d4404b8a2df3ab6b1f73d7b074553dbf4e3894cad73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/h
ost/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql9k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jjr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.309733 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"786083a3-395c-4659-b58a-a5517a9aa843\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://514d00f8857c64df263abe974d69503c1ac4ea7d4c78f57e5826d58208bb79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9d911c777bbcce
655fc6993bdd85da5df16a4402e54b628b839c796f7c784d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7b3941c4c057228fded474417203e3aeb95fcc1df8094bde7b35fd223eec22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://807c21eba860dd45d3dcd3a39ced8648f94e884925efe110065621238ad2e6f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://807c21eba860dd45d3dcd3a39ced8648f94e884925efe110065621238ad2e6f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.323553 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.324716 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4f9wc_98a910a1-b5f0-4f34-9d76-6474c753e8e7/ovnkube-controller/1.log" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.328331 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" event={"ID":"98a910a1-b5f0-4f34-9d76-6474c753e8e7","Type":"ContainerStarted","Data":"9056a32a71e639dc5097e83b11c69037abde76ccd4e3305f13a6617fe15dc4f4"} Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.329024 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.345418 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1ec84d-96ba-4a95-a24b-c9142495d70d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86645990eea64dfe6b5933473b48df128ceaa3f4fe9da4f8307442da1b6ad808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7175
9b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n84l8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.363576 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.381826 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vst6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d4bb48bf3275028f344bc73ea59e23721f24ba646e485b99181dce129096003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjxx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vst6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.384389 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.384472 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.384492 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.384553 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 
16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.384575 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:02Z","lastTransitionTime":"2026-02-17T16:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.409010 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a910a1-b5f0-4f34-9d76-6474c753e8e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb856f7806f65441a26295986d6ee3b1dee692087510547ea5680d7600a5981a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0495a1c586c33fb22e3cff8faaf427f9183f30459e1c4e23d840487fa21c7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbde5168a81766f8e318ce4ebfc055bce7e199abc47db20e3b1767e3fb49c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d969b7db6e8da6d14b08bf6e462b846aeaa463703d040d8dee87e847f4fca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a42ffc66b52e8db408035eb1e3fd03670217a0a1cabe42a972d0dfeb2308997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42df411df161c300edce4e00a51babea135433c68a188f56d438df2665f7a6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d283cb6739fe72163d4567bab0b63cc72697a661ee8ae5dbe706f0e378e02aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d283cb6739fe72163d4567bab0b63cc72697a661ee8ae5dbe706f0e378e02aeb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:03:49Z\\\",\\\"message\\\":\\\"(0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:49.162356 6141 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:03:49.162401 
6141 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:03:49.162367 6141 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:49.162621 6141 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:49.163165 6141 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:49.163285 6141 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:49.163769 6141 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 16:03:49.163806 6141 factory.go:656] Stopping watch factory\\\\nI0217 16:03:49.163822 6141 ovnkube.go:599] Stopped ovnkube\\\\nI0217 16:03:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4f9wc_openshift-ovn-kubernetes(98a910a1-b5f0-4f34-9d76-6474c753e8e7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24931b90f0faa42a5320df38225b1fc1c4ba21ddb6b43c1ab84047c9178dfea4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d6
0e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4f9wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.426716 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qfvh7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e418bd1-d1c0-4f75-8fb2-6c74780f648c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4897d237d880d7e444b27a13aba3e1e2d3a7ab13092c77bc1978c08f9ce3e2a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwmgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77542e439619fe148e71b29dda8c7c1957550
c206d27bd12aa640f991b7ab96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwmgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qfvh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.440614 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hqdz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"712be02c-2ccc-4989-aecb-653745bacb0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pbx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pbx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hqdz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc 
kubenswrapper[4672]: I0217 16:04:02.469466 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd1617c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe7175715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.487307 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.487385 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.487408 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.487439 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.487462 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:02Z","lastTransitionTime":"2026-02-17T16:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.488372 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.508292 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5jjr2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5985f47fa75e948d85d4404b8a2df3ab6b1f73d7b074553dbf4e3894cad73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql9k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jjr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.524044 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.536654 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.550411 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cd2c6-74a5-4567-a141-be56c668e566\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e80bcc09d3a2f37ff69baa34fba8f223e11ce83224b820ba1cb4b6cc8df6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://796310e24dd456ebe7e3886fd47d09ecf942ee59
39fc71da9839c3d89b4a45e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d6dhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.560747 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2g6fq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ede7ba7694732d9f2cedbd2457c3ab638e067106bc5a3c6415f1dd70c86a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9hsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2g6fq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.573116 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.586791 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.591257 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.591293 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.591302 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:02 crc 
kubenswrapper[4672]: I0217 16:04:02.591316 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.591326 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:02Z","lastTransitionTime":"2026-02-17T16:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.599066 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.618925 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a910a1-b5f0-4f34-9d76-6474c753e8e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb856f7806f65441a26295986d6ee3b1dee692087510547ea5680d7600a5981a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0495a1c586c33fb22e3cff8faaf427f9183f30459e1c4e23d840487fa21c7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbde5168a81766f8e318ce4ebfc055bce7e199abc47db20e3b1767e3fb49c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d969b7db6e8da6d14b08bf6e462b846aeaa463703d040d8dee87e847f4fca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a42ffc66b52e8db408035eb1e3fd03670217a0a1cabe42a972d0dfeb2308997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42df411df161c300edce4e00a51babea135433c68a188f56d438df2665f7a6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9056a32a71e639dc5097e83b11c69037abde76ccd4e3305f13a6617fe15dc4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d283cb6739fe72163d4567bab0b63cc72697a661ee8ae5dbe706f0e378e02aeb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:03:49Z\\\",\\\"message\\\":\\\"(0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:49.162356 6141 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:03:49.162401 
6141 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:03:49.162367 6141 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:49.162621 6141 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:49.163165 6141 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:49.163285 6141 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:49.163769 6141 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 16:03:49.163806 6141 factory.go:656] Stopping watch factory\\\\nI0217 16:03:49.163822 6141 ovnkube.go:599] Stopped ovnkube\\\\nI0217 
16:03:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24931b90f0faa42a5320df38225b1fc1c4ba21ddb6b43c1ab84047c9178dfea4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4f9wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.630825 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qfvh7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e418bd1-d1c0-4f75-8fb2-6c74780f648c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4897d237d880d7e444b27a13aba3e1e2d3a7ab13092c77bc1978c08f9ce3e2a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwmgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77542e439619fe148e71b29dda8c7c1957550
c206d27bd12aa640f991b7ab96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwmgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qfvh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.650079 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hqdz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"712be02c-2ccc-4989-aecb-653745bacb0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pbx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pbx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hqdz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc 
kubenswrapper[4672]: I0217 16:04:02.668732 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd1617c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe7175715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.692183 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.694059 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.694100 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.694110 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.694124 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.694134 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:02Z","lastTransitionTime":"2026-02-17T16:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.708606 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.727242 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vst6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d4bb48bf3275028f344bc73ea59e23721f24ba646e485b99181dce129096003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjxx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vst6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.742059 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.756684 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.772321 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cd2c6-74a5-4567-a141-be56c668e566\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e80bcc09d3a2f37ff69baa34fba8f223e11ce83224b820ba1cb4b6cc8df6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://796310e24dd456ebe7e3886fd47d09ecf942ee59
39fc71da9839c3d89b4a45e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d6dhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.796292 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5jjr2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5985f47fa75e948d85d4404b8a2df3ab6b1f73d7b074553dbf4e3894cad73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql9k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jjr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.796916 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:02 crc 
kubenswrapper[4672]: I0217 16:04:02.796949 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.796960 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.796976 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.796988 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:02Z","lastTransitionTime":"2026-02-17T16:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.812751 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.831190 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.843953 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.855672 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2g6fq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ede7ba7694732d9f2cedbd2457c3ab638e067106bc5a3c6415f1dd70c86a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9hsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2g6fq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.866879 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"786083a3-395c-4659-b58a-a5517a9aa843\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://514d00f8857c64df263abe974d69503c1ac4ea7d4c78f57e5826d58208bb79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9d911c777bbcce655fc6993bdd85da5df16a4402e54b628b839c796f7c784d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7b3941c4c057228fded474417203e3aeb95fcc1df8094bde7b35fd223eec22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\
\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://807c21eba860dd45d3dcd3a39ced8648f94e884925efe110065621238ad2e6f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://807c21eba860dd45d3dcd3a39ced8648f94e884925efe110065621238ad2e6f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.882792 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.899273 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1ec84d-96ba-4a95-a24b-c9142495d70d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86645990eea64dfe6b5933473b48df128ceaa3f4fe9da4f8307442da1b6ad808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n84l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:02Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.900785 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.900814 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.900823 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.900837 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.900848 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:02Z","lastTransitionTime":"2026-02-17T16:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.928132 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 17:36:33.818578897 +0000 UTC Feb 17 16:04:02 crc kubenswrapper[4672]: I0217 16:04:02.944478 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:02 crc kubenswrapper[4672]: E0217 16:04:02.944636 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.003027 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.003056 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.003065 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.003093 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.003103 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:03Z","lastTransitionTime":"2026-02-17T16:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.106058 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.106111 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.106119 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.106130 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.106139 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:03Z","lastTransitionTime":"2026-02-17T16:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.209075 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.209135 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.209151 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.209170 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.209186 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:03Z","lastTransitionTime":"2026-02-17T16:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.311987 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.312024 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.312036 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.312053 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.312065 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:03Z","lastTransitionTime":"2026-02-17T16:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.334038 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4f9wc_98a910a1-b5f0-4f34-9d76-6474c753e8e7/ovnkube-controller/2.log" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.335210 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4f9wc_98a910a1-b5f0-4f34-9d76-6474c753e8e7/ovnkube-controller/1.log" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.338671 4672 generic.go:334] "Generic (PLEG): container finished" podID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerID="9056a32a71e639dc5097e83b11c69037abde76ccd4e3305f13a6617fe15dc4f4" exitCode=1 Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.338718 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" event={"ID":"98a910a1-b5f0-4f34-9d76-6474c753e8e7","Type":"ContainerDied","Data":"9056a32a71e639dc5097e83b11c69037abde76ccd4e3305f13a6617fe15dc4f4"} Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.338765 4672 scope.go:117] "RemoveContainer" containerID="d283cb6739fe72163d4567bab0b63cc72697a661ee8ae5dbe706f0e378e02aeb" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.340248 4672 scope.go:117] "RemoveContainer" containerID="9056a32a71e639dc5097e83b11c69037abde76ccd4e3305f13a6617fe15dc4f4" Feb 17 16:04:03 crc kubenswrapper[4672]: E0217 16:04:03.341080 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-4f9wc_openshift-ovn-kubernetes(98a910a1-b5f0-4f34-9d76-6474c753e8e7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.371112 4672 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d
315cd1617c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe7175715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795
ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:03Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.384456 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8
a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\"
:{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for 
mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:03Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.394655 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:03Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.406586 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vst6k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d4bb48bf3275028f344bc73ea59e23721f24ba646e485b99181dce129096003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjxx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vst6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:03Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.415898 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.415962 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.415993 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.416017 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.416036 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:03Z","lastTransitionTime":"2026-02-17T16:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.428085 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a910a1-b5f0-4f34-9d76-6474c753e8e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb856f7806f65441a26295986d6ee3b1dee692087510547ea5680d7600a5981a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0495a1c586c33fb22e3cff8faaf427f9183f30459e1c4e23d840487fa21c7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbde5168a81766f8e318ce4ebfc055bce7e199abc47db20e3b1767e3fb49c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d969b7db6e8da6d14b08bf6e462b846aeaa463703d040d8dee87e847f4fca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a42ffc66b52e8db408035eb1e3fd03670217a0a1cabe42a972d0dfeb2308997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42df411df161c300edce4e00a51babea135433c68a188f56d438df2665f7a6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9056a32a71e639dc5097e83b11c69037abde76ccd4e3305f13a6617fe15dc4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d283cb6739fe72163d4567bab0b63cc72697a661ee8ae5dbe706f0e378e02aeb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:03:49Z\\\",\\\"message\\\":\\\"(0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:49.162356 6141 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:03:49.162401 
6141 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:03:49.162367 6141 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:49.162621 6141 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:49.163165 6141 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:03:49.163285 6141 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:03:49.163769 6141 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 16:03:49.163806 6141 factory.go:656] Stopping watch factory\\\\nI0217 16:03:49.163822 6141 ovnkube.go:599] Stopped ovnkube\\\\nI0217 16:03:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9056a32a71e639dc5097e83b11c69037abde76ccd4e3305f13a6617fe15dc4f4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:02Z\\\",\\\"message\\\":\\\"s/factory.go:140\\\\nI0217 16:04:02.973587 6352 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:04:02.973759 6352 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:04:02.974155 6352 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:04:02.974619 6352 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 16:04:02.974650 6352 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 16:04:02.974667 6352 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 16:04:02.974674 6352 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 16:04:02.974690 6352 factory.go:656] Stopping watch factory\\\\nI0217 16:04:02.974708 6352 ovnkube.go:599] Stopped ovnkube\\\\nI0217 16:04:02.974735 6352 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 16:04:02.974750 6352 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 16:04:02.974758 6352 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 16\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:04:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"moun
tPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24931b90f0faa42a5320df38225b1fc1c4ba21ddb6b43c1ab84047c9178dfea4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4f9wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:03Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.441263 4672 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qfvh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e418bd1-d1c0-4f75-8fb2-6c74780f648c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4897d237d880d7e444b27a13aba3e1e2d3a7ab13092c77bc1978c08f9ce3e2a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwmgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77542e439619fe148e71b29dda8c7c1957550c206d27bd12aa640f991b7ab96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwmgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qfvh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:03Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.455677 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hqdz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"712be02c-2ccc-4989-aecb-653745bacb0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pbx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pbx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hqdz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:03Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:03 crc 
kubenswrapper[4672]: I0217 16:04:03.473863 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:03Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.491115 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cd2c6-74a5-4567-a141-be56c668e566\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e80bcc09d3a2f37ff69baa34fba8f223e11ce83224b820ba1cb4b6cc8df6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://796310e24dd456ebe7e3886fd47d09ecf942ee59
39fc71da9839c3d89b4a45e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d6dhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:03Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.507105 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5jjr2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5985f47fa75e948d85d4404b8a2df3ab6b1f73d7b074553dbf4e3894cad73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql9k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jjr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:03Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.518941 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:03 crc 
kubenswrapper[4672]: I0217 16:04:03.518966 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.518974 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.518986 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.518996 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:03Z","lastTransitionTime":"2026-02-17T16:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.526463 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:03Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.540836 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:03Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.552976 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:03Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.564241 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2g6fq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ede7ba7694732d9f2cedbd2457c3ab638e067106bc5a3c6415f1dd70c86a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9hsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2g6fq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:03Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.575528 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:03Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.590322 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:03Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.603027 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:04:03 crc kubenswrapper[4672]: E0217 16:04:03.603172 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:04:35.603152793 +0000 UTC m=+84.357241535 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.603272 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.603344 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:03 crc kubenswrapper[4672]: E0217 16:04:03.603403 4672 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 16:04:03 crc kubenswrapper[4672]: E0217 16:04:03.603449 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 16:04:35.60343958 +0000 UTC m=+84.357528312 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 16:04:03 crc kubenswrapper[4672]: E0217 16:04:03.603480 4672 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 16:04:03 crc kubenswrapper[4672]: E0217 16:04:03.603546 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 16:04:35.603534843 +0000 UTC m=+84.357623585 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.609840 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1ec84d-96ba-4a95-a24b-c9142495d70d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86645990eea64dfe6b5933473b48df128ceaa3f4fe9da4f8307442da1b6ad808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4
a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-17T16:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\
"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n84l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:03Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.620737 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.620811 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.620834 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.620862 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.620886 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:03Z","lastTransitionTime":"2026-02-17T16:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.624999 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"786083a3-395c-4659-b58a-a5517a9aa843\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://514d00f8857c64df263abe974d69503c1ac4ea7d4c78f57e5826d58208bb79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9d911c777bbcce655fc6993bdd85d
a5df16a4402e54b628b839c796f7c784d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7b3941c4c057228fded474417203e3aeb95fcc1df8094bde7b35fd223eec22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://807c21eba860dd45d3dcd3a39ced8648f94e884925efe110065621238ad2e6f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://807c21eba860dd45d3dcd3a39ced8648f94e884925efe110065621238ad2e6f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:03Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.704662 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.704701 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:03 crc kubenswrapper[4672]: E0217 16:04:03.704827 4672 projected.go:288] 
Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 17 16:04:03 crc kubenswrapper[4672]: E0217 16:04:03.704844 4672 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 17 16:04:03 crc kubenswrapper[4672]: E0217 16:04:03.704853 4672 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 17 16:04:03 crc kubenswrapper[4672]: E0217 16:04:03.704869 4672 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 17 16:04:03 crc kubenswrapper[4672]: E0217 16:04:03.704905 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 16:04:35.704879949 +0000 UTC m=+84.458968681 (durationBeforeRetry 32s).
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 17 16:04:03 crc kubenswrapper[4672]: E0217 16:04:03.704905 4672 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 17 16:04:03 crc kubenswrapper[4672]: E0217 16:04:03.704929 4672 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 17 16:04:03 crc kubenswrapper[4672]: E0217 16:04:03.704984 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 16:04:35.704965891 +0000 UTC m=+84.459054663 (durationBeforeRetry 32s).
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.723387 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.723418 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.723446 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.723458 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.723466 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:03Z","lastTransitionTime":"2026-02-17T16:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.825667 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.825732 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.825749 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.825772 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.825788 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:03Z","lastTransitionTime":"2026-02-17T16:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.928464 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 07:20:46.598174323 +0000 UTC
Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.929786 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.929838 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.929859 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.929887 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.929911 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:03Z","lastTransitionTime":"2026-02-17T16:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.944620 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.944678 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9"
Feb 17 16:04:03 crc kubenswrapper[4672]: I0217 16:04:03.944809 4672 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 16:04:03 crc kubenswrapper[4672]: E0217 16:04:03.945019 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 16:04:03 crc kubenswrapper[4672]: E0217 16:04:03.945584 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d"
Feb 17 16:04:03 crc kubenswrapper[4672]: E0217 16:04:03.945597 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.033325 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.033448 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.033470 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.033577 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.033602 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:04Z","lastTransitionTime":"2026-02-17T16:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.137449 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.137549 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.137568 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.137594 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.137613 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:04Z","lastTransitionTime":"2026-02-17T16:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.241034 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.241094 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.241115 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.241142 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.241164 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:04Z","lastTransitionTime":"2026-02-17T16:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.344351 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.344400 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.344417 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.344439 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.344455 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:04Z","lastTransitionTime":"2026-02-17T16:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.346240 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4f9wc_98a910a1-b5f0-4f34-9d76-6474c753e8e7/ovnkube-controller/2.log" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.353139 4672 scope.go:117] "RemoveContainer" containerID="9056a32a71e639dc5097e83b11c69037abde76ccd4e3305f13a6617fe15dc4f4" Feb 17 16:04:04 crc kubenswrapper[4672]: E0217 16:04:04.353414 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-4f9wc_openshift-ovn-kubernetes(98a910a1-b5f0-4f34-9d76-6474c753e8e7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.372932 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5jjr2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5985f47fa75e948d85d4404b8a2df3ab6b1f73d7b074553dbf4e3894cad73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql9k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jjr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:04Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.392411 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:04Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.411599 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:04Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.428545 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cd2c6-74a5-4567-a141-be56c668e566\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e80bcc09d3a2f37ff69baa34fba8f223e11ce83224b820ba1cb4b6cc8df6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://796310e24dd456ebe7e3886fd47d09ecf942ee59
39fc71da9839c3d89b4a45e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d6dhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:04Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.445863 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2g6fq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ede7ba7694732d9f2cedbd2457c3ab638e067106bc5a3c6415f1dd70c86a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9hsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2g6fq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:04Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.447425 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.447480 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.447498 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.447571 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.447588 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:04Z","lastTransitionTime":"2026-02-17T16:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.465996 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:04Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.483945 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:04Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.502193 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:04Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.519489 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"786083a3-395c-4659-b58a-a5517a9aa843\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://514d00f8857c64df263abe974d69503c1ac4ea7d4c78f57e5826d58208bb79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9d911c777bbcce655fc6993bdd85da5df16a4402e54b628b839c796f7c784d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7b3941c4c057228fded474417203e3aeb95fcc1df8094bde7b35fd223eec22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://807c21eba860dd45d3dcd3a39ced8648f94e884925efe110065621238ad2e6f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://807c21eba860dd45d3dcd3a39ced8648f94e884925efe110065621238ad2e6f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:04Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.540377 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:04Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.550540 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.550582 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.550594 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.550613 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.550624 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:04Z","lastTransitionTime":"2026-02-17T16:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.563298 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1ec84d-96ba-4a95-a24b-c9142495d70d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86645990eea64dfe6b5933473b48df128ceaa3f4fe9da4f8307442da1b6ad808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n84l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:04Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.580790 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:04Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.595089 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vst6k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d4bb48bf3275028f344bc73ea59e23721f24ba646e485b99181dce129096003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjxx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vst6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:04Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.625866 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a910a1-b5f0-4f34-9d76-6474c753e8e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb856f7806f65441a26295986d6ee3b1dee692087510547ea5680d7600a5981a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0495a1c586c33fb22e3cff8faaf427f9183f30459e1c4e23d840487fa21c7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbde5168a81766f8e318ce4ebfc055bce7e199abc47db20e3b1767e3fb49c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d969b7db6e8da6d14b08bf6e462b846aeaa463703d040d8dee87e847f4fca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a42ffc66b52e8db408035eb1e3fd03670217a0a1cabe42a972d0dfeb2308997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42df411df161c300edce4e00a51babea135433c68a188f56d438df2665f7a6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9056a32a71e639dc5097e83b11c69037abde76ccd4e3305f13a6617fe15dc4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9056a32a71e639dc5097e83b11c69037abde76ccd4e3305f13a6617fe15dc4f4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:02Z\\\",\\\"message\\\":\\\"s/factory.go:140\\\\nI0217 16:04:02.973587 6352 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:04:02.973759 6352 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:04:02.974155 6352 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:04:02.974619 6352 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 16:04:02.974650 6352 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 16:04:02.974667 6352 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 16:04:02.974674 6352 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 16:04:02.974690 6352 factory.go:656] Stopping watch factory\\\\nI0217 16:04:02.974708 6352 ovnkube.go:599] Stopped ovnkube\\\\nI0217 16:04:02.974735 6352 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 16:04:02.974750 6352 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 16:04:02.974758 6352 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 16\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:04:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4f9wc_openshift-ovn-kubernetes(98a910a1-b5f0-4f34-9d76-6474c753e8e7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24931b90f0faa42a5320df38225b1fc1c4ba21ddb6b43c1ab84047c9178dfea4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d6
0e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4f9wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:04Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.642788 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qfvh7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e418bd1-d1c0-4f75-8fb2-6c74780f648c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4897d237d880d7e444b27a13aba3e1e2d3a7ab13092c77bc1978c08f9ce3e2a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwmgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77542e439619fe148e71b29dda8c7c1957550
c206d27bd12aa640f991b7ab96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwmgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qfvh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:04Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.653993 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.654047 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.654066 4672 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.654089 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.654109 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:04Z","lastTransitionTime":"2026-02-17T16:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.660901 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hqdz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"712be02c-2ccc-4989-aecb-653745bacb0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pbx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pbx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hqdz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:04Z is after 2025-08-24T17:21:41Z" Feb 
17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.693497 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\
\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd1617c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe7175715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847
b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:04Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.713773 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921
fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:04Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.760501 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.760610 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.760632 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.760660 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.760686 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:04Z","lastTransitionTime":"2026-02-17T16:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.863628 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.863669 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.863679 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.863696 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.863708 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:04Z","lastTransitionTime":"2026-02-17T16:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.929002 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 06:48:16.330235877 +0000 UTC Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.946423 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:04 crc kubenswrapper[4672]: E0217 16:04:04.946686 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.966496 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.966561 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.966576 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.966597 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:04 crc kubenswrapper[4672]: I0217 16:04:04.966613 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:04Z","lastTransitionTime":"2026-02-17T16:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.069820 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.070199 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.070362 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.070503 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.070735 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:05Z","lastTransitionTime":"2026-02-17T16:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.174504 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.174885 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.175121 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.175577 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.175823 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:05Z","lastTransitionTime":"2026-02-17T16:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.278847 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.278914 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.278935 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.278959 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.278979 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:05Z","lastTransitionTime":"2026-02-17T16:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.382037 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.382110 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.382135 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.382165 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.382190 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:05Z","lastTransitionTime":"2026-02-17T16:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.484976 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.485051 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.485073 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.485096 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.485114 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:05Z","lastTransitionTime":"2026-02-17T16:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.587928 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.587982 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.587999 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.588021 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.588038 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:05Z","lastTransitionTime":"2026-02-17T16:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.690734 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.690795 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.690812 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.690837 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.690853 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:05Z","lastTransitionTime":"2026-02-17T16:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.793770 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.793830 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.793846 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.793872 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.793890 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:05Z","lastTransitionTime":"2026-02-17T16:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.897569 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.897625 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.897641 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.897663 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.897682 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:05Z","lastTransitionTime":"2026-02-17T16:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.929701 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 10:55:45.431050717 +0000 UTC Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.944034 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.944093 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:04:05 crc kubenswrapper[4672]: I0217 16:04:05.944133 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:05 crc kubenswrapper[4672]: E0217 16:04:05.944233 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:05 crc kubenswrapper[4672]: E0217 16:04:05.944680 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:05 crc kubenswrapper[4672]: E0217 16:04:05.944812 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d" Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.000298 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.000348 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.000365 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.000386 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.000402 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:06Z","lastTransitionTime":"2026-02-17T16:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.104104 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.104169 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.104197 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.104228 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.104252 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:06Z","lastTransitionTime":"2026-02-17T16:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.207325 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.207388 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.207407 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.207434 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.207452 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:06Z","lastTransitionTime":"2026-02-17T16:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.310660 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.310725 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.310749 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.310777 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.310799 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:06Z","lastTransitionTime":"2026-02-17T16:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.413357 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.413747 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.413921 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.414076 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.414210 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:06Z","lastTransitionTime":"2026-02-17T16:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.517971 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.518036 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.518055 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.518079 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.518099 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:06Z","lastTransitionTime":"2026-02-17T16:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.621295 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.621346 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.621363 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.621385 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.621403 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:06Z","lastTransitionTime":"2026-02-17T16:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.724879 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.724945 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.724967 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.724993 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.725014 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:06Z","lastTransitionTime":"2026-02-17T16:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.827730 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.827810 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.827835 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.827869 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.827892 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:06Z","lastTransitionTime":"2026-02-17T16:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.929957 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 11:57:42.448754675 +0000 UTC Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.930254 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.930330 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.930347 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.930370 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.930388 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:06Z","lastTransitionTime":"2026-02-17T16:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:06 crc kubenswrapper[4672]: I0217 16:04:06.943839 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:06 crc kubenswrapper[4672]: E0217 16:04:06.944002 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.033990 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.034057 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.034078 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.034104 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.034124 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:07Z","lastTransitionTime":"2026-02-17T16:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.137218 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.137296 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.137314 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.137342 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.137359 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:07Z","lastTransitionTime":"2026-02-17T16:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.240460 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.240528 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.240542 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.240559 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.240572 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:07Z","lastTransitionTime":"2026-02-17T16:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.343247 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.343320 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.343661 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.343723 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.343749 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:07Z","lastTransitionTime":"2026-02-17T16:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.445029 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/712be02c-2ccc-4989-aecb-653745bacb0d-metrics-certs\") pod \"network-metrics-daemon-hqdz9\" (UID: \"712be02c-2ccc-4989-aecb-653745bacb0d\") " pod="openshift-multus/network-metrics-daemon-hqdz9"
Feb 17 16:04:07 crc kubenswrapper[4672]: E0217 16:04:07.445347 4672 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 17 16:04:07 crc kubenswrapper[4672]: E0217 16:04:07.445496 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/712be02c-2ccc-4989-aecb-653745bacb0d-metrics-certs podName:712be02c-2ccc-4989-aecb-653745bacb0d nodeName:}" failed. No retries permitted until 2026-02-17 16:04:23.44545734 +0000 UTC m=+72.199546112 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/712be02c-2ccc-4989-aecb-653745bacb0d-metrics-certs") pod "network-metrics-daemon-hqdz9" (UID: "712be02c-2ccc-4989-aecb-653745bacb0d") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.446965 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.447018 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.447039 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.447069 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.447088 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:07Z","lastTransitionTime":"2026-02-17T16:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.549767 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.549806 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.549818 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.549911 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.549926 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:07Z","lastTransitionTime":"2026-02-17T16:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.652100 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.652127 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.652135 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.652148 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.652156 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:07Z","lastTransitionTime":"2026-02-17T16:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.754713 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.754771 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.754785 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.754801 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.755207 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:07Z","lastTransitionTime":"2026-02-17T16:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.859216 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.859296 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.859321 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.859348 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.859367 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:07Z","lastTransitionTime":"2026-02-17T16:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.930202 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 01:32:59.388578587 +0000 UTC
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.944664 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9"
Feb 17 16:04:07 crc kubenswrapper[4672]: E0217 16:04:07.944895 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d"
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.945660 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 16:04:07 crc kubenswrapper[4672]: E0217 16:04:07.945821 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.946045 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 16:04:07 crc kubenswrapper[4672]: E0217 16:04:07.946211 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.961902 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.961938 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.961949 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.961965 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:07 crc kubenswrapper[4672]: I0217 16:04:07.961976 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:07Z","lastTransitionTime":"2026-02-17T16:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.064121 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.064153 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.064162 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.064175 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.064184 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:08Z","lastTransitionTime":"2026-02-17T16:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.166593 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.166662 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.166681 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.166708 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.166724 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:08Z","lastTransitionTime":"2026-02-17T16:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.270825 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.270889 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.270907 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.270932 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.270948 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:08Z","lastTransitionTime":"2026-02-17T16:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.372755 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.372790 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.372800 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.372813 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.372821 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:08Z","lastTransitionTime":"2026-02-17T16:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.475329 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.475373 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.475387 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.475406 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.475419 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:08Z","lastTransitionTime":"2026-02-17T16:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.578116 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.578162 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.578177 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.578196 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.578208 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:08Z","lastTransitionTime":"2026-02-17T16:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.681210 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.681254 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.681264 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.681284 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.681296 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:08Z","lastTransitionTime":"2026-02-17T16:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.784255 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.784331 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.784352 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.784374 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.784391 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:08Z","lastTransitionTime":"2026-02-17T16:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.872844 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.873258 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.873496 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.873795 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.873984 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:08Z","lastTransitionTime":"2026-02-17T16:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 16:04:08 crc kubenswrapper[4672]: E0217 16:04:08.895079 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"793c4034-4ed2-49c9-abb4-00e3faa205d0\\\",\\\"systemUUID\\\":\\\"561271bd-298c-447a-8ba6-beca2786bcfb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:08Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.900332 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.900400 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.900444 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.900469 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.900486 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:08Z","lastTransitionTime":"2026-02-17T16:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:08 crc kubenswrapper[4672]: E0217 16:04:08.918472 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"793c4034-4ed2-49c9-abb4-00e3faa205d0\\\",\\\"systemUUID\\\":\\\"561271bd-298c-447a-8ba6-beca2786bcfb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:08Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.922961 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.923026 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.923044 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.923069 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.923087 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:08Z","lastTransitionTime":"2026-02-17T16:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.930647 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 06:48:43.795572013 +0000 UTC Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.944082 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:08 crc kubenswrapper[4672]: E0217 16:04:08.944362 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:08 crc kubenswrapper[4672]: E0217 16:04:08.946404 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"793c4034-4ed2-49c9-abb4-00e3faa205d0\\\",\\\"systemUUID\\\":\\\"561271bd-298c-447a-8ba6-beca2786bcfb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:08Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.951505 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.951626 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.951653 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.951681 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.951703 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:08Z","lastTransitionTime":"2026-02-17T16:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:08 crc kubenswrapper[4672]: E0217 16:04:08.973412 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"793c4034-4ed2-49c9-abb4-00e3faa205d0\\\",\\\"systemUUID\\\":\\\"561271bd-298c-447a-8ba6-beca2786bcfb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:08Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.978151 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.978216 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.978240 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.978269 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.978289 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:08Z","lastTransitionTime":"2026-02-17T16:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:08 crc kubenswrapper[4672]: E0217 16:04:08.997323 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"793c4034-4ed2-49c9-abb4-00e3faa205d0\\\",\\\"systemUUID\\\":\\\"561271bd-298c-447a-8ba6-beca2786bcfb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:08Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:08 crc kubenswrapper[4672]: E0217 16:04:08.997432 4672 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.999698 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.999751 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.999767 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.999789 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:08 crc kubenswrapper[4672]: I0217 16:04:08.999806 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:08Z","lastTransitionTime":"2026-02-17T16:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.103116 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.103189 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.103210 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.103235 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.103252 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:09Z","lastTransitionTime":"2026-02-17T16:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.205831 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.205924 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.205952 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.205984 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.206008 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:09Z","lastTransitionTime":"2026-02-17T16:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.309231 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.309287 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.309307 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.309337 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.309394 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:09Z","lastTransitionTime":"2026-02-17T16:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.412793 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.412850 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.412861 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.412885 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.412901 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:09Z","lastTransitionTime":"2026-02-17T16:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.516218 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.516262 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.516272 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.516290 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.516306 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:09Z","lastTransitionTime":"2026-02-17T16:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.618858 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.618896 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.618907 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.618922 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.618932 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:09Z","lastTransitionTime":"2026-02-17T16:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.722222 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.722284 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.722300 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.722326 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.722347 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:09Z","lastTransitionTime":"2026-02-17T16:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.825732 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.825794 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.825818 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.825842 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.825857 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:09Z","lastTransitionTime":"2026-02-17T16:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.928954 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.929019 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.929061 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.929086 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.929103 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:09Z","lastTransitionTime":"2026-02-17T16:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.931238 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 08:43:59.421920441 +0000 UTC Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.944885 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.944924 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:09 crc kubenswrapper[4672]: E0217 16:04:09.945128 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d" Feb 17 16:04:09 crc kubenswrapper[4672]: I0217 16:04:09.945162 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:09 crc kubenswrapper[4672]: E0217 16:04:09.945460 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:09 crc kubenswrapper[4672]: E0217 16:04:09.945339 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.032499 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.032845 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.033048 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.033194 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.033348 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:10Z","lastTransitionTime":"2026-02-17T16:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.135766 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.136044 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.136125 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.136245 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.136334 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:10Z","lastTransitionTime":"2026-02-17T16:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.239316 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.239406 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.239432 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.239465 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.239489 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:10Z","lastTransitionTime":"2026-02-17T16:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.343371 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.343480 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.343505 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.343576 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.343601 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:10Z","lastTransitionTime":"2026-02-17T16:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.446643 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.446806 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.446837 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.446867 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.446890 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:10Z","lastTransitionTime":"2026-02-17T16:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.550634 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.550712 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.550730 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.550749 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.550764 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:10Z","lastTransitionTime":"2026-02-17T16:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.653799 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.653850 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.653866 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.653887 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.653899 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:10Z","lastTransitionTime":"2026-02-17T16:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.756494 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.756659 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.756693 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.756721 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.756746 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:10Z","lastTransitionTime":"2026-02-17T16:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.860364 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.860431 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.860451 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.860482 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.860499 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:10Z","lastTransitionTime":"2026-02-17T16:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.932682 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 02:00:50.480411582 +0000 UTC Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.944119 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:10 crc kubenswrapper[4672]: E0217 16:04:10.944279 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.963202 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.963500 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.963648 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.963744 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:10 crc kubenswrapper[4672]: I0217 16:04:10.963820 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:10Z","lastTransitionTime":"2026-02-17T16:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.067306 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.067687 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.067927 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.068135 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.068278 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:11Z","lastTransitionTime":"2026-02-17T16:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.171309 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.171369 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.171386 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.171409 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.171426 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:11Z","lastTransitionTime":"2026-02-17T16:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.274599 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.274675 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.274693 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.274719 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.274737 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:11Z","lastTransitionTime":"2026-02-17T16:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.377480 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.377579 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.377599 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.377624 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.377643 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:11Z","lastTransitionTime":"2026-02-17T16:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.480919 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.480981 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.480998 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.481024 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.481044 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:11Z","lastTransitionTime":"2026-02-17T16:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.584903 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.584988 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.585016 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.585047 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.585070 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:11Z","lastTransitionTime":"2026-02-17T16:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.687641 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.687707 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.687722 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.687747 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.687763 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:11Z","lastTransitionTime":"2026-02-17T16:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.790757 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.790807 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.790822 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.790842 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.790856 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:11Z","lastTransitionTime":"2026-02-17T16:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.894224 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.894269 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.894280 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.894298 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.894310 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:11Z","lastTransitionTime":"2026-02-17T16:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.933214 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 15:35:35.181492838 +0000 UTC Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.944815 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:11 crc kubenswrapper[4672]: E0217 16:04:11.945070 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.945179 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:11 crc kubenswrapper[4672]: E0217 16:04:11.945401 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.945654 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:04:11 crc kubenswrapper[4672]: E0217 16:04:11.946032 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.966169 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:11Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.981880 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:11Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.998072 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.998135 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.998153 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.998177 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:11 crc kubenswrapper[4672]: I0217 16:04:11.998199 4672 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:11Z","lastTransitionTime":"2026-02-17T16:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.010374 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cd2c6-74a5-4567-a141-be56c668e566\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e80bcc09d3a2f37ff69baa34fba8f223e11ce83224b820ba1cb4b6cc8df6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://796310e24dd456ebe7e3886fd47d09ecf942ee5939fc71da9839c3d89b4a45e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d6dhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-17T16:04:12Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.038871 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5jjr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5985f47fa75e948d85d4404b8a2df3ab6b1f73d7b074553dbf4e3894cad73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/h
ost/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql9k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jjr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-17T16:04:12Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.058709 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:12Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.075195 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:12Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.095155 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:12Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.100067 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.100114 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.100131 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.100159 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.100176 4672 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:12Z","lastTransitionTime":"2026-02-17T16:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.109580 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2g6fq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ede7ba7694732d9f2cedbd2457c3ab638e067106bc5a3c6415f1dd70c86a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9hsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2g6fq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:12Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.127996 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"786083a3-395c-4659-b58a-a5517a9aa843\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://514d00f8857c64df263abe974d69503c1ac4ea7d4c78f57e5826d58208bb79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9d911c777bbcce655fc6993bdd85da5df16a4402e54b628b839c796f7c784d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7b3941c4c057228fded474417203e3aeb95fcc1df8094bde7b35fd223eec22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://807c21eba860dd45d3dcd3a39ced8648f94e884925efe110065621238ad2e6f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://807c21eba860dd45d3dcd3a39ced8648f94e884925efe110065621238ad2e6f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:12Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.178380 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:12Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.202870 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.202978 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.202993 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.203008 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.203019 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:12Z","lastTransitionTime":"2026-02-17T16:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.205494 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1ec84d-96ba-4a95-a24b-c9142495d70d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86645990eea64dfe6b5933473b48df128ceaa3f4fe9da4f8307442da1b6ad808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n84l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:12Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.215462 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vst6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d4bb48bf3275028f344bc73ea59e23721f24ba646e485b99181dce129096003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjxx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vst6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:12Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.233688 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a910a1-b5f0-4f34-9d76-6474c753e8e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb856f7806f65441a26295986d6ee3b1dee692087510547ea5680d7600a5981a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0495a1c586c33fb22e3cff8faaf427f9183f30459e1c4e23d840487fa21c7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbde5168a81766f8e318ce4ebfc055bce7e199abc47db20e3b1767e3fb49c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d969b7db6e8da6d14b08bf6e462b846aeaa463703d040d8dee87e847f4fca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a42ffc66b52e8db408035eb1e3fd03670217a0a1cabe42a972d0dfeb2308997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42df411df161c300edce4e00a51babea135433c68a188f56d438df2665f7a6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9056a32a71e639dc5097e83b11c69037abde76ccd4e3305f13a6617fe15dc4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9056a32a71e639dc5097e83b11c69037abde76ccd4e3305f13a6617fe15dc4f4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:02Z\\\",\\\"message\\\":\\\"s/factory.go:140\\\\nI0217 16:04:02.973587 6352 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:04:02.973759 6352 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:04:02.974155 6352 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:04:02.974619 6352 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 16:04:02.974650 6352 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 16:04:02.974667 6352 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 16:04:02.974674 6352 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 16:04:02.974690 6352 factory.go:656] Stopping watch factory\\\\nI0217 16:04:02.974708 6352 ovnkube.go:599] Stopped ovnkube\\\\nI0217 16:04:02.974735 6352 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 16:04:02.974750 6352 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 16:04:02.974758 6352 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 16\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:04:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4f9wc_openshift-ovn-kubernetes(98a910a1-b5f0-4f34-9d76-6474c753e8e7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24931b90f0faa42a5320df38225b1fc1c4ba21ddb6b43c1ab84047c9178dfea4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d6
0e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4f9wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:12Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.248019 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qfvh7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e418bd1-d1c0-4f75-8fb2-6c74780f648c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4897d237d880d7e444b27a13aba3e1e2d3a7ab13092c77bc1978c08f9ce3e2a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwmgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77542e439619fe148e71b29dda8c7c1957550
c206d27bd12aa640f991b7ab96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwmgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qfvh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:12Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.260668 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hqdz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"712be02c-2ccc-4989-aecb-653745bacb0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pbx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pbx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hqdz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:12Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:12 crc 
kubenswrapper[4672]: I0217 16:04:12.297253 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd1617c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe7175715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:12Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.305088 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.305158 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.305182 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.305216 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.305239 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:12Z","lastTransitionTime":"2026-02-17T16:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.318075 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:12Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.333312 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:04:12Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.408689 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.408745 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.408766 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.408787 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.408804 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:12Z","lastTransitionTime":"2026-02-17T16:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.512432 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.512493 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.512536 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.512559 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.512576 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:12Z","lastTransitionTime":"2026-02-17T16:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.616265 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.616321 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.616338 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.616359 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.616375 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:12Z","lastTransitionTime":"2026-02-17T16:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.719634 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.719695 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.719717 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.719770 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.719801 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:12Z","lastTransitionTime":"2026-02-17T16:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.823186 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.823256 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.823278 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.823308 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.823330 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:12Z","lastTransitionTime":"2026-02-17T16:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.926023 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.926682 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.926723 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.926754 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.926776 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:12Z","lastTransitionTime":"2026-02-17T16:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.934095 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 08:42:34.258219956 +0000 UTC Feb 17 16:04:12 crc kubenswrapper[4672]: I0217 16:04:12.944549 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:12 crc kubenswrapper[4672]: E0217 16:04:12.944696 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.029604 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.029663 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.029685 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.029711 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.029731 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:13Z","lastTransitionTime":"2026-02-17T16:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.132958 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.133019 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.133035 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.133060 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.133078 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:13Z","lastTransitionTime":"2026-02-17T16:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.236198 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.236249 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.236266 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.236287 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.236303 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:13Z","lastTransitionTime":"2026-02-17T16:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.339070 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.339127 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.339145 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.339169 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.339187 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:13Z","lastTransitionTime":"2026-02-17T16:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.442736 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.442812 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.442831 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.442856 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.442874 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:13Z","lastTransitionTime":"2026-02-17T16:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.545791 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.545878 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.545896 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.545921 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.545938 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:13Z","lastTransitionTime":"2026-02-17T16:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.648735 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.648766 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.648773 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.648785 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.648794 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:13Z","lastTransitionTime":"2026-02-17T16:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.751932 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.752010 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.752037 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.752068 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.752090 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:13Z","lastTransitionTime":"2026-02-17T16:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.855230 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.855292 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.855311 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.855337 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.855357 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:13Z","lastTransitionTime":"2026-02-17T16:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.935059 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 06:56:08.894070335 +0000 UTC Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.944536 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.944597 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.944616 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:13 crc kubenswrapper[4672]: E0217 16:04:13.944674 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:13 crc kubenswrapper[4672]: E0217 16:04:13.944776 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d" Feb 17 16:04:13 crc kubenswrapper[4672]: E0217 16:04:13.945080 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.957176 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.957223 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.957236 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.957255 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:13 crc kubenswrapper[4672]: I0217 16:04:13.957270 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:13Z","lastTransitionTime":"2026-02-17T16:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.060268 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.060352 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.060370 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.060396 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.060418 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:14Z","lastTransitionTime":"2026-02-17T16:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.164405 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.164464 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.164484 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.164534 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.164557 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:14Z","lastTransitionTime":"2026-02-17T16:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.268039 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.268097 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.268114 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.268139 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.268157 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:14Z","lastTransitionTime":"2026-02-17T16:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.371477 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.371544 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.371555 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.371575 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.371587 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:14Z","lastTransitionTime":"2026-02-17T16:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.486033 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.486389 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.486426 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.486459 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.486481 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:14Z","lastTransitionTime":"2026-02-17T16:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.589540 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.589580 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.589589 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.589604 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.589615 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:14Z","lastTransitionTime":"2026-02-17T16:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.691959 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.692024 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.692046 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.692075 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.692100 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:14Z","lastTransitionTime":"2026-02-17T16:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.794598 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.794713 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.794733 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.794756 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.794776 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:14Z","lastTransitionTime":"2026-02-17T16:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.897903 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.897979 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.897996 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.898019 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.898037 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:14Z","lastTransitionTime":"2026-02-17T16:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.935592 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 18:13:43.587486308 +0000 UTC Feb 17 16:04:14 crc kubenswrapper[4672]: I0217 16:04:14.943849 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:14 crc kubenswrapper[4672]: E0217 16:04:14.944014 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.000880 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.000939 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.000954 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.000974 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.000988 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:15Z","lastTransitionTime":"2026-02-17T16:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.104949 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.105028 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.105056 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.105084 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.105104 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:15Z","lastTransitionTime":"2026-02-17T16:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.208693 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.208773 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.208795 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.208825 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.208849 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:15Z","lastTransitionTime":"2026-02-17T16:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.311722 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.311783 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.311793 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.311820 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.311834 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:15Z","lastTransitionTime":"2026-02-17T16:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.415144 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.415200 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.415213 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.415232 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.415244 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:15Z","lastTransitionTime":"2026-02-17T16:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.518585 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.518656 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.518679 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.518711 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.518739 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:15Z","lastTransitionTime":"2026-02-17T16:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.621202 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.621245 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.621256 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.621270 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.621283 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:15Z","lastTransitionTime":"2026-02-17T16:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.723801 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.723855 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.723871 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.723892 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.723909 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:15Z","lastTransitionTime":"2026-02-17T16:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.826824 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.826883 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.826900 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.826921 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.826937 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:15Z","lastTransitionTime":"2026-02-17T16:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.929647 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.929723 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.929746 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.929777 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.929798 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:15Z","lastTransitionTime":"2026-02-17T16:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.936954 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 11:55:00.181562258 +0000 UTC Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.944350 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:15 crc kubenswrapper[4672]: E0217 16:04:15.944569 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.944711 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.945150 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:15 crc kubenswrapper[4672]: E0217 16:04:15.945400 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d" Feb 17 16:04:15 crc kubenswrapper[4672]: E0217 16:04:15.946642 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:15 crc kubenswrapper[4672]: I0217 16:04:15.947153 4672 scope.go:117] "RemoveContainer" containerID="9056a32a71e639dc5097e83b11c69037abde76ccd4e3305f13a6617fe15dc4f4" Feb 17 16:04:15 crc kubenswrapper[4672]: E0217 16:04:15.947538 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-4f9wc_openshift-ovn-kubernetes(98a910a1-b5f0-4f34-9d76-6474c753e8e7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.033081 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.033178 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.033201 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.033235 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.033257 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:16Z","lastTransitionTime":"2026-02-17T16:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.135861 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.135902 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.135912 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.135928 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.135940 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:16Z","lastTransitionTime":"2026-02-17T16:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.238244 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.238282 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.238293 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.238305 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.238316 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:16Z","lastTransitionTime":"2026-02-17T16:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.341848 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.341917 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.341933 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.341956 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.341975 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:16Z","lastTransitionTime":"2026-02-17T16:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.444747 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.444791 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.444802 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.444819 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.444832 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:16Z","lastTransitionTime":"2026-02-17T16:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.547278 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.547343 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.547360 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.547384 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.547401 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:16Z","lastTransitionTime":"2026-02-17T16:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.649944 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.649977 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.649985 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.650001 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.650010 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:16Z","lastTransitionTime":"2026-02-17T16:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.752185 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.752231 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.752241 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.752256 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.752266 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:16Z","lastTransitionTime":"2026-02-17T16:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.854574 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.854623 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.854631 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.854649 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.854662 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:16Z","lastTransitionTime":"2026-02-17T16:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.938113 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 23:30:39.927162184 +0000 UTC Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.944487 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:16 crc kubenswrapper[4672]: E0217 16:04:16.944744 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.957827 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.957895 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.957913 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.957938 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:16 crc kubenswrapper[4672]: I0217 16:04:16.957955 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:16Z","lastTransitionTime":"2026-02-17T16:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.060936 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.061017 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.061078 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.061110 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.061146 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:17Z","lastTransitionTime":"2026-02-17T16:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.166131 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.166168 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.166177 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.166190 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.166198 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:17Z","lastTransitionTime":"2026-02-17T16:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.268641 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.268698 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.268718 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.268740 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.268756 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:17Z","lastTransitionTime":"2026-02-17T16:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.370886 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.370965 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.370988 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.371018 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.371039 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:17Z","lastTransitionTime":"2026-02-17T16:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.473500 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.473577 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.473596 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.473620 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.473640 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:17Z","lastTransitionTime":"2026-02-17T16:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.576140 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.576191 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.576207 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.576229 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.576244 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:17Z","lastTransitionTime":"2026-02-17T16:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.678238 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.678284 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.678299 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.678319 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.678336 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:17Z","lastTransitionTime":"2026-02-17T16:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.780977 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.781176 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.781309 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.781331 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.781345 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:17Z","lastTransitionTime":"2026-02-17T16:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.883164 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.883202 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.883212 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.883227 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.883238 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:17Z","lastTransitionTime":"2026-02-17T16:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.939021 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 20:23:56.428562466 +0000 UTC Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.944331 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.944333 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:17 crc kubenswrapper[4672]: E0217 16:04:17.944488 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:17 crc kubenswrapper[4672]: E0217 16:04:17.944551 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.944358 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:04:17 crc kubenswrapper[4672]: E0217 16:04:17.944640 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.985210 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.985246 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.985256 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.985270 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:17 crc kubenswrapper[4672]: I0217 16:04:17.985281 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:17Z","lastTransitionTime":"2026-02-17T16:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.087719 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.087751 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.087759 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.087774 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.087784 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:18Z","lastTransitionTime":"2026-02-17T16:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.190575 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.190617 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.190627 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.190640 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.190649 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:18Z","lastTransitionTime":"2026-02-17T16:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.293393 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.293426 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.293435 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.293447 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.293456 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:18Z","lastTransitionTime":"2026-02-17T16:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.396266 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.396314 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.396328 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.396346 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.396362 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:18Z","lastTransitionTime":"2026-02-17T16:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.499856 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.499902 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.499914 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.499930 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.499942 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:18Z","lastTransitionTime":"2026-02-17T16:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.602788 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.602879 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.602905 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.602936 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.602958 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:18Z","lastTransitionTime":"2026-02-17T16:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.706120 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.706203 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.706227 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.706258 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.706282 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:18Z","lastTransitionTime":"2026-02-17T16:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.808440 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.808479 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.808487 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.808500 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.808528 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:18Z","lastTransitionTime":"2026-02-17T16:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.910860 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.910909 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.910921 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.910937 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.910950 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:18Z","lastTransitionTime":"2026-02-17T16:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.940088 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 06:41:50.073626346 +0000 UTC Feb 17 16:04:18 crc kubenswrapper[4672]: I0217 16:04:18.944570 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:18 crc kubenswrapper[4672]: E0217 16:04:18.944753 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.013806 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.013859 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.013871 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.013889 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.013901 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:19Z","lastTransitionTime":"2026-02-17T16:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.061961 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.062042 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.062064 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.062094 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.062117 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:19Z","lastTransitionTime":"2026-02-17T16:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:19 crc kubenswrapper[4672]: E0217 16:04:19.079019 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"793c4034-4ed2-49c9-abb4-00e3faa205d0\\\",\\\"systemUUID\\\":\\\"561271bd-298c-447a-8ba6-beca2786bcfb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:19Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.084713 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.084783 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.084807 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.084836 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.084859 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:19Z","lastTransitionTime":"2026-02-17T16:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:19 crc kubenswrapper[4672]: E0217 16:04:19.105402 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"793c4034-4ed2-49c9-abb4-00e3faa205d0\\\",\\\"systemUUID\\\":\\\"561271bd-298c-447a-8ba6-beca2786bcfb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:19Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.109735 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.109804 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.109842 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.109879 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.109903 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:19Z","lastTransitionTime":"2026-02-17T16:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:19 crc kubenswrapper[4672]: E0217 16:04:19.129940 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"793c4034-4ed2-49c9-abb4-00e3faa205d0\\\",\\\"systemUUID\\\":\\\"561271bd-298c-447a-8ba6-beca2786bcfb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:19Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.134250 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.134316 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.134340 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.134370 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.134392 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:19Z","lastTransitionTime":"2026-02-17T16:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:19 crc kubenswrapper[4672]: E0217 16:04:19.150309 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"793c4034-4ed2-49c9-abb4-00e3faa205d0\\\",\\\"systemUUID\\\":\\\"561271bd-298c-447a-8ba6-beca2786bcfb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:19Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.154293 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.154363 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.154387 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.154415 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.154436 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:19Z","lastTransitionTime":"2026-02-17T16:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:19 crc kubenswrapper[4672]: E0217 16:04:19.169844 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"793c4034-4ed2-49c9-abb4-00e3faa205d0\\\",\\\"systemUUID\\\":\\\"561271bd-298c-447a-8ba6-beca2786bcfb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:19Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:19 crc kubenswrapper[4672]: E0217 16:04:19.170060 4672 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.172040 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.172101 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.172123 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.172149 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.172173 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:19Z","lastTransitionTime":"2026-02-17T16:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.274381 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.274422 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.274434 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.274450 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.274460 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:19Z","lastTransitionTime":"2026-02-17T16:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.376736 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.376805 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.376828 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.376856 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.376876 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:19Z","lastTransitionTime":"2026-02-17T16:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.479236 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.479270 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.479278 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.479293 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.479304 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:19Z","lastTransitionTime":"2026-02-17T16:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.581861 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.581889 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.581897 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.581909 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.581920 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:19Z","lastTransitionTime":"2026-02-17T16:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.684804 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.684837 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.684848 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.684861 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.684870 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:19Z","lastTransitionTime":"2026-02-17T16:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.787352 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.787420 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.787440 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.787464 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.787484 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:19Z","lastTransitionTime":"2026-02-17T16:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.889349 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.889416 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.889441 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.889468 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.889486 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:19Z","lastTransitionTime":"2026-02-17T16:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.940680 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 06:06:01.399913039 +0000 UTC Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.944173 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.944221 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.944186 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:19 crc kubenswrapper[4672]: E0217 16:04:19.944324 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:19 crc kubenswrapper[4672]: E0217 16:04:19.944500 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d" Feb 17 16:04:19 crc kubenswrapper[4672]: E0217 16:04:19.944684 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.991282 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.991328 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.991337 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.991352 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:19 crc kubenswrapper[4672]: I0217 16:04:19.991362 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:19Z","lastTransitionTime":"2026-02-17T16:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.093396 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.093432 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.093440 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.093453 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.093462 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:20Z","lastTransitionTime":"2026-02-17T16:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.195414 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.195464 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.195474 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.195494 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.195522 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:20Z","lastTransitionTime":"2026-02-17T16:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.297262 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.297306 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.297318 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.297331 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.297340 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:20Z","lastTransitionTime":"2026-02-17T16:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.399970 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.400015 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.400024 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.400038 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.400048 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:20Z","lastTransitionTime":"2026-02-17T16:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.502751 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.502793 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.502806 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.502823 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.502834 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:20Z","lastTransitionTime":"2026-02-17T16:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.605676 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.605728 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.605744 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.605764 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.605784 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:20Z","lastTransitionTime":"2026-02-17T16:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.708352 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.708392 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.708401 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.708414 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.708425 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:20Z","lastTransitionTime":"2026-02-17T16:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.811016 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.811094 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.811116 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.811144 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.811170 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:20Z","lastTransitionTime":"2026-02-17T16:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.912738 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.912789 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.912802 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.912818 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.912830 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:20Z","lastTransitionTime":"2026-02-17T16:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.941203 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 04:51:35.790469809 +0000 UTC Feb 17 16:04:20 crc kubenswrapper[4672]: I0217 16:04:20.944428 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:20 crc kubenswrapper[4672]: E0217 16:04:20.944539 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.014356 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.014396 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.014409 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.014425 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.014437 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:21Z","lastTransitionTime":"2026-02-17T16:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.118066 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.118105 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.118117 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.118130 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.118139 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:21Z","lastTransitionTime":"2026-02-17T16:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.220896 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.220954 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.220971 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.220996 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.221013 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:21Z","lastTransitionTime":"2026-02-17T16:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.325090 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.325151 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.325672 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.325706 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.325719 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:21Z","lastTransitionTime":"2026-02-17T16:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.428733 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.428955 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.429150 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.429492 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.429685 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:21Z","lastTransitionTime":"2026-02-17T16:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.532175 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.532379 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.532438 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.532561 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.532643 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:21Z","lastTransitionTime":"2026-02-17T16:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.634474 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.634615 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.634643 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.634672 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.634693 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:21Z","lastTransitionTime":"2026-02-17T16:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.736831 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.737055 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.737148 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.737213 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.737268 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:21Z","lastTransitionTime":"2026-02-17T16:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.839771 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.840056 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.840121 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.840196 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.840262 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:21Z","lastTransitionTime":"2026-02-17T16:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.941495 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 20:10:45.103801022 +0000 UTC Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.943093 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.943135 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.943146 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.943162 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.943173 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:21Z","lastTransitionTime":"2026-02-17T16:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.943851 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.943864 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:04:21 crc kubenswrapper[4672]: E0217 16:04:21.943953 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.944001 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:21 crc kubenswrapper[4672]: E0217 16:04:21.944082 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d" Feb 17 16:04:21 crc kubenswrapper[4672]: E0217 16:04:21.944240 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.957848 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cer
ts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":
\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:21Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.971951 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:21Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:21 crc kubenswrapper[4672]: I0217 16:04:21.987491 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cd2c6-74a5-4567-a141-be56c668e566\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e80bcc09d3a2f37ff69baa34fba8f223e11ce83224b820ba1cb4b6cc8df6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://796310e24dd456ebe7e3886fd47d09ecf942ee59
39fc71da9839c3d89b4a45e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d6dhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:21Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.000254 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5jjr2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5985f47fa75e948d85d4404b8a2df3ab6b1f73d7b074553dbf4e3894cad73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql9k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jjr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:21Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.011564 4672 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:22Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.021952 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:22Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.033587 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:22Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.044306 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2g6fq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ede7ba7694732d9f2cedbd2457c3ab638e067106bc5a3c6415f1dd70c86a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9hsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2g6fq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:22Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.045174 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.045221 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.045233 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.045254 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.045272 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:22Z","lastTransitionTime":"2026-02-17T16:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.091602 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"786083a3-395c-4659-b58a-a5517a9aa843\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://514d00f8857c64df263abe974d69503c1ac4ea7d4c78f57e5826d58208bb79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9d911c777bbcce655fc6993bdd85d
a5df16a4402e54b628b839c796f7c784d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7b3941c4c057228fded474417203e3aeb95fcc1df8094bde7b35fd223eec22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://807c21eba860dd45d3dcd3a39ced8648f94e884925efe110065621238ad2e6f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://807c21eba860dd45d3dcd3a39ced8648f94e884925efe110065621238ad2e6f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:22Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.112567 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:22Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.131959 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1ec84d-96ba-4a95-a24b-c9142495d70d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86645990eea64dfe6b5933473b48df128ceaa3f4fe9da4f8307442da1b6ad808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n84l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:22Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.143366 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qfvh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e418bd1-d1c0-4f75-8fb2-6c74780f648c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4897d237d880d7e444b27a13aba3e1e2d3a7ab13092c77bc1978c08f9ce3e2a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d
793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwmgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77542e439619fe148e71b29dda8c7c1957550c206d27bd12aa640f991b7ab96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwmgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:50Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qfvh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:22Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.147361 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.147399 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.147408 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.147423 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.147433 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:22Z","lastTransitionTime":"2026-02-17T16:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.154045 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hqdz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"712be02c-2ccc-4989-aecb-653745bacb0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pbx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pbx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hqdz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:22Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:22 crc 
kubenswrapper[4672]: I0217 16:04:22.185531 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd1617c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe7175715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:22Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.201868 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:22Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.214818 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:22Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.227644 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vst6k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d4bb48bf3275028f344bc73ea59e23721f24ba646e485b99181dce129096003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjxx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vst6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:22Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.250549 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.250583 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.250594 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.250610 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.250622 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:22Z","lastTransitionTime":"2026-02-17T16:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.256177 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a910a1-b5f0-4f34-9d76-6474c753e8e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb856f7806f65441a26295986d6ee3b1dee692087510547ea5680d7600a5981a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0495a1c586c33fb22e3cff8faaf427f9183f30459e1c4e23d840487fa21c7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbde5168a81766f8e318ce4ebfc055bce7e199abc47db20e3b1767e3fb49c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d969b7db6e8da6d14b08bf6e462b846aeaa463703d040d8dee87e847f4fca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a42ffc66b52e8db408035eb1e3fd03670217a0a1cabe42a972d0dfeb2308997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42df411df161c300edce4e00a51babea135433c68a188f56d438df2665f7a6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9056a32a71e639dc5097e83b11c69037abde76ccd4e3305f13a6617fe15dc4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9056a32a71e639dc5097e83b11c69037abde76ccd4e3305f13a6617fe15dc4f4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:02Z\\\",\\\"message\\\":\\\"s/factory.go:140\\\\nI0217 16:04:02.973587 6352 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:04:02.973759 6352 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:04:02.974155 6352 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:04:02.974619 6352 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 16:04:02.974650 6352 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 16:04:02.974667 6352 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 16:04:02.974674 6352 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 16:04:02.974690 6352 factory.go:656] Stopping watch factory\\\\nI0217 16:04:02.974708 6352 ovnkube.go:599] Stopped ovnkube\\\\nI0217 16:04:02.974735 6352 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 16:04:02.974750 6352 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 16:04:02.974758 6352 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 16\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:04:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4f9wc_openshift-ovn-kubernetes(98a910a1-b5f0-4f34-9d76-6474c753e8e7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24931b90f0faa42a5320df38225b1fc1c4ba21ddb6b43c1ab84047c9178dfea4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d6
0e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4f9wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:22Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.353065 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.353112 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.353120 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.353133 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.353143 4672 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:22Z","lastTransitionTime":"2026-02-17T16:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.455832 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.455895 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.455912 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.455937 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.455955 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:22Z","lastTransitionTime":"2026-02-17T16:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.558469 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.558635 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.558662 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.558720 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.558746 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:22Z","lastTransitionTime":"2026-02-17T16:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.660944 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.661138 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.661229 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.661324 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.661403 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:22Z","lastTransitionTime":"2026-02-17T16:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.763658 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.764103 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.764233 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.764375 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.764532 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:22Z","lastTransitionTime":"2026-02-17T16:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.866707 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.866772 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.866790 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.866816 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.866832 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:22Z","lastTransitionTime":"2026-02-17T16:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.942623 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 15:10:10.115703447 +0000 UTC Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.943922 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:22 crc kubenswrapper[4672]: E0217 16:04:22.944103 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.969343 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.969412 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.969439 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.969470 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:22 crc kubenswrapper[4672]: I0217 16:04:22.969492 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:22Z","lastTransitionTime":"2026-02-17T16:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.071699 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.071757 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.071770 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.071788 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.071800 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:23Z","lastTransitionTime":"2026-02-17T16:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.173838 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.173905 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.173927 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.173955 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.173978 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:23Z","lastTransitionTime":"2026-02-17T16:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.275747 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.275888 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.275900 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.275914 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.275926 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:23Z","lastTransitionTime":"2026-02-17T16:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.378413 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.378472 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.378486 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.378501 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.378533 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:23Z","lastTransitionTime":"2026-02-17T16:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.481156 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.481242 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.481255 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.481271 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.481280 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:23Z","lastTransitionTime":"2026-02-17T16:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.506297 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/712be02c-2ccc-4989-aecb-653745bacb0d-metrics-certs\") pod \"network-metrics-daemon-hqdz9\" (UID: \"712be02c-2ccc-4989-aecb-653745bacb0d\") " pod="openshift-multus/network-metrics-daemon-hqdz9"
Feb 17 16:04:23 crc kubenswrapper[4672]: E0217 16:04:23.506502 4672 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 17 16:04:23 crc kubenswrapper[4672]: E0217 16:04:23.506630 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/712be02c-2ccc-4989-aecb-653745bacb0d-metrics-certs podName:712be02c-2ccc-4989-aecb-653745bacb0d nodeName:}" failed. No retries permitted until 2026-02-17 16:04:55.506605692 +0000 UTC m=+104.260694464 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/712be02c-2ccc-4989-aecb-653745bacb0d-metrics-certs") pod "network-metrics-daemon-hqdz9" (UID: "712be02c-2ccc-4989-aecb-653745bacb0d") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.583440 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.583506 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.583572 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.583596 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.583615 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:23Z","lastTransitionTime":"2026-02-17T16:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.685961 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.686002 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.686011 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.686026 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.686035 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:23Z","lastTransitionTime":"2026-02-17T16:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.787844 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.787899 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.787916 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.787941 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.787957 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:23Z","lastTransitionTime":"2026-02-17T16:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.890749 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.890995 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.891060 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.891129 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.891190 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:23Z","lastTransitionTime":"2026-02-17T16:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.943572 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 19:47:11.47605787 +0000 UTC
Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.944553 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.944586 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.944624 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9"
Feb 17 16:04:23 crc kubenswrapper[4672]: E0217 16:04:23.944708 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 16:04:23 crc kubenswrapper[4672]: E0217 16:04:23.944884 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d"
Feb 17 16:04:23 crc kubenswrapper[4672]: E0217 16:04:23.944941 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.993466 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.993550 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.993570 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.993595 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:23 crc kubenswrapper[4672]: I0217 16:04:23.993616 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:23Z","lastTransitionTime":"2026-02-17T16:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.096264 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.096331 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.096351 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.096377 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.096394 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:24Z","lastTransitionTime":"2026-02-17T16:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.198801 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.198852 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.198868 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.198890 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.198908 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:24Z","lastTransitionTime":"2026-02-17T16:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.302778 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.303052 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.303206 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.303316 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.303419 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:24Z","lastTransitionTime":"2026-02-17T16:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.406211 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.406271 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.406291 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.406316 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.406333 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:24Z","lastTransitionTime":"2026-02-17T16:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.508235 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.508282 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.508295 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.508308 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.508317 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:24Z","lastTransitionTime":"2026-02-17T16:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.610917 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.610956 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.610967 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.610985 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.610996 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:24Z","lastTransitionTime":"2026-02-17T16:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.713251 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.713701 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.713897 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.714048 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.714175 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:24Z","lastTransitionTime":"2026-02-17T16:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.816967 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.817077 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.817095 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.817124 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.817141 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:24Z","lastTransitionTime":"2026-02-17T16:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.919825 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.919870 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.919881 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.919900 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.919914 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:24Z","lastTransitionTime":"2026-02-17T16:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.944306 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 16:04:24 crc kubenswrapper[4672]: E0217 16:04:24.944438 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 16:04:24 crc kubenswrapper[4672]: I0217 16:04:24.944616 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 18:21:44.545499066 +0000 UTC
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.022589 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.022637 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.022645 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.022659 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.022668 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:25Z","lastTransitionTime":"2026-02-17T16:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.125574 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.125613 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.125621 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.125634 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.125644 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:25Z","lastTransitionTime":"2026-02-17T16:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.228167 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.228234 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.228256 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.228286 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.228317 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:25Z","lastTransitionTime":"2026-02-17T16:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.331837 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.331904 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.331927 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.331956 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.331979 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:25Z","lastTransitionTime":"2026-02-17T16:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.434967 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.435030 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.435050 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.435076 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.435112 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:25Z","lastTransitionTime":"2026-02-17T16:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.537428 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.537501 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.537559 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.537589 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.537612 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:25Z","lastTransitionTime":"2026-02-17T16:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.640743 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.640813 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.640835 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.640863 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.640881 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:25Z","lastTransitionTime":"2026-02-17T16:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.743778 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.743842 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.743864 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.743892 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.743914 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:25Z","lastTransitionTime":"2026-02-17T16:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.847349 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.847451 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.847469 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.847549 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.847645 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:25Z","lastTransitionTime":"2026-02-17T16:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.944564 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9"
Feb 17 16:04:25 crc kubenswrapper[4672]: E0217 16:04:25.944691 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.944580 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.944860 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.944860 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 21:57:10.231636896 +0000 UTC
Feb 17 16:04:25 crc kubenswrapper[4672]: E0217 16:04:25.945066 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 16:04:25 crc kubenswrapper[4672]: E0217 16:04:25.945160 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.950918 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.950982 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.951002 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.951026 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:25 crc kubenswrapper[4672]: I0217 16:04:25.951051 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:25Z","lastTransitionTime":"2026-02-17T16:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.053414 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.053481 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.053499 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.053549 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.053567 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:26Z","lastTransitionTime":"2026-02-17T16:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.155989 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.156059 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.156080 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.156106 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.156125 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:26Z","lastTransitionTime":"2026-02-17T16:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.258443 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.258565 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.258589 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.258614 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.258634 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:26Z","lastTransitionTime":"2026-02-17T16:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.361136 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.361208 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.361233 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.361333 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.361412 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:26Z","lastTransitionTime":"2026-02-17T16:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.464722 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.464792 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.464809 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.464832 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.464850 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:26Z","lastTransitionTime":"2026-02-17T16:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.567043 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.567104 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.567121 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.567147 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.567166 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:26Z","lastTransitionTime":"2026-02-17T16:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.584255 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5jjr2_edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe/kube-multus/0.log" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.584334 4672 generic.go:334] "Generic (PLEG): container finished" podID="edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe" containerID="0c5985f47fa75e948d85d4404b8a2df3ab6b1f73d7b074553dbf4e3894cad73c" exitCode=1 Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.584382 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5jjr2" event={"ID":"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe","Type":"ContainerDied","Data":"0c5985f47fa75e948d85d4404b8a2df3ab6b1f73d7b074553dbf4e3894cad73c"} Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.585152 4672 scope.go:117] "RemoveContainer" containerID="0c5985f47fa75e948d85d4404b8a2df3ab6b1f73d7b074553dbf4e3894cad73c" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.606143 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"786083a3-395c-4659-b58a-a5517a9aa843\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://514d00f8857c64df263abe974d69503c1ac4ea7d4c78f57e5826d58208bb79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9d911c777bbcce655fc6993bdd85da5df16a4402e54b628b839c796f7c784d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7b3941c4c057228fded474417203e3aeb95fcc1df8094bde7b35fd223eec22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://807c21eba860dd45d3dcd3a39ced8648f94e884925efe110065621238ad2e6f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://807c21eba860dd45d3dcd3a39ced8648f94e884925efe110065621238ad2e6f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:26Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.632911 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:26Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.656875 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1ec84d-96ba-4a95-a24b-c9142495d70d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86645990eea64dfe6b5933473b48df128ceaa3f4fe9da4f8307442da1b6ad808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n84l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:26Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.671170 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.671236 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.671311 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.671344 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.671415 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:26Z","lastTransitionTime":"2026-02-17T16:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.675701 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hqdz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"712be02c-2ccc-4989-aecb-653745bacb0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pbx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pbx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hqdz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:26Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:26 crc 
kubenswrapper[4672]: I0217 16:04:26.709150 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd1617c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe7175715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:26Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.729446 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:26Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.745646 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:26Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.760919 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vst6k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d4bb48bf3275028f344bc73ea59e23721f24ba646e485b99181dce129096003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjxx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vst6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:26Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.774301 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.774377 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.774396 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.774422 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.774440 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:26Z","lastTransitionTime":"2026-02-17T16:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.792063 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a910a1-b5f0-4f34-9d76-6474c753e8e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb856f7806f65441a26295986d6ee3b1dee692087510547ea5680d7600a5981a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0495a1c586c33fb22e3cff8faaf427f9183f30459e1c4e23d840487fa21c7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbde5168a81766f8e318ce4ebfc055bce7e199abc47db20e3b1767e3fb49c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d969b7db6e8da6d14b08bf6e462b846aeaa463703d040d8dee87e847f4fca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a42ffc66b52e8db408035eb1e3fd03670217a0a1cabe42a972d0dfeb2308997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42df411df161c300edce4e00a51babea135433c68a188f56d438df2665f7a6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9056a32a71e639dc5097e83b11c69037abde76ccd4e3305f13a6617fe15dc4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9056a32a71e639dc5097e83b11c69037abde76ccd4e3305f13a6617fe15dc4f4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:02Z\\\",\\\"message\\\":\\\"s/factory.go:140\\\\nI0217 16:04:02.973587 6352 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:04:02.973759 6352 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:04:02.974155 6352 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:04:02.974619 6352 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 16:04:02.974650 6352 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 16:04:02.974667 6352 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 16:04:02.974674 6352 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 16:04:02.974690 6352 factory.go:656] Stopping watch factory\\\\nI0217 16:04:02.974708 6352 ovnkube.go:599] Stopped ovnkube\\\\nI0217 16:04:02.974735 6352 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 16:04:02.974750 6352 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 16:04:02.974758 6352 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 16\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:04:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4f9wc_openshift-ovn-kubernetes(98a910a1-b5f0-4f34-9d76-6474c753e8e7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24931b90f0faa42a5320df38225b1fc1c4ba21ddb6b43c1ab84047c9178dfea4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d6
0e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4f9wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:26Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.810189 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qfvh7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e418bd1-d1c0-4f75-8fb2-6c74780f648c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4897d237d880d7e444b27a13aba3e1e2d3a7ab13092c77bc1978c08f9ce3e2a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwmgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77542e439619fe148e71b29dda8c7c1957550
c206d27bd12aa640f991b7ab96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwmgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qfvh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:26Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.833040 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:26Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.852628 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:26Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.870251 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cd2c6-74a5-4567-a141-be56c668e566\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e80bcc09d3a2f37ff69baa34fba8f223e11ce83224b820ba1cb4b6cc8df6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://796310e24dd456ebe7e3886fd47d09ecf942ee59
39fc71da9839c3d89b4a45e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d6dhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:26Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.877753 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.877814 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.877831 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:26 crc 
kubenswrapper[4672]: I0217 16:04:26.877857 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.877874 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:26Z","lastTransitionTime":"2026-02-17T16:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.889902 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5jjr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5985f47fa75e948d85d4404b8a2df3ab6b1f73d7b074553dbf4e3894cad73c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c5985f47fa75e948d85d4404b8a2df3ab6b1f73d7b074553dbf4e3894cad73c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:25Z\\\",\\\"message\\\":\\\"2026-02-17T16:03:40+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fd2c4888-2f49-43b8-af52-10fe708bb3a8\\\\n2026-02-17T16:03:40+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fd2c4888-2f49-43b8-af52-10fe708bb3a8 to /host/opt/cni/bin/\\\\n2026-02-17T16:03:40Z [verbose] multus-daemon started\\\\n2026-02-17T16:03:40Z [verbose] Readiness Indicator file check\\\\n2026-02-17T16:04:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql9k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jjr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:26Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.908108 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:26Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.927269 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:26Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.944524 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:26 crc kubenswrapper[4672]: E0217 16:04:26.944667 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.945653 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 17:33:28.848136727 +0000 UTC Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.945714 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:26Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.960335 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2g6fq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ede7ba7694732d9f2cedbd2457c3ab638e067106bc5a3c6415f1dd70c86a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9hsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2g6fq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:26Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.979906 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.979959 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.979984 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.980015 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:26 crc kubenswrapper[4672]: I0217 16:04:26.980037 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:26Z","lastTransitionTime":"2026-02-17T16:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.082420 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.082456 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.082465 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.082478 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.082488 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:27Z","lastTransitionTime":"2026-02-17T16:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.185304 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.185367 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.185383 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.185407 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.185426 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:27Z","lastTransitionTime":"2026-02-17T16:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.288680 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.288740 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.288756 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.288778 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.288795 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:27Z","lastTransitionTime":"2026-02-17T16:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.390959 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.391020 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.391037 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.391063 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.391079 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:27Z","lastTransitionTime":"2026-02-17T16:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.493545 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.493585 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.493593 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.493606 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.493614 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:27Z","lastTransitionTime":"2026-02-17T16:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.590793 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5jjr2_edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe/kube-multus/0.log"
Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.590838 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5jjr2" event={"ID":"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe","Type":"ContainerStarted","Data":"f7f95d42a206c5e9b8e4b546034635db87f5912e543fea24cccde60817511eaa"}
Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.599937 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.600028 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.600146 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.600183 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.600207 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:27Z","lastTransitionTime":"2026-02-17T16:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.619130 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:27Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.643473 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1ec84d-96ba-4a95-a24b-c9142495d70d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86645990eea64dfe6b5933473b48df128ceaa3f4fe9da4f8307442da1b6ad808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-a
dditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df3
12ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n84l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:27Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.664439 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"786083a3-395c-4659-b58a-a5517a9aa843\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://514d00f8857c64df263abe974d69503c1ac4ea7d4c78f57e5826d58208bb79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9d911c777bbcce655fc6993bdd85da5df16a4402e54b628b839c796f7c784d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7b3941c4c057228fded474417203e3aeb95fcc1df8094bde7b35fd223eec22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://807c21eba860dd45d3dcd3a39ced8648f94e884925efe110065621238ad2e6f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://807c21eba860dd45d3dcd3a39ced8648f94e884925efe110065621238ad2e6f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:27Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.696856 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd1617c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe7175715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"co
ntainerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:27Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.704013 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.704076 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:27 crc kubenswrapper[4672]: 
I0217 16:04:27.704095 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.704122 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.704139 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:27Z","lastTransitionTime":"2026-02-17T16:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.718368 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiser
ver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly 
requests\\\\\\\" limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:27Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.738074 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:27Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.754645 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vst6k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d4bb48bf3275028f344bc73ea59e23721f24ba646e485b99181dce129096003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjxx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vst6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:27Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.786319 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a910a1-b5f0-4f34-9d76-6474c753e8e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb856f7806f65441a26295986d6ee3b1dee692087510547ea5680d7600a5981a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0495a1c586c33fb22e3cff8faaf427f9183f30459e1c4e23d840487fa21c7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbde5168a81766f8e318ce4ebfc055bce7e199abc47db20e3b1767e3fb49c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d969b7db6e8da6d14b08bf6e462b846aeaa463703d040d8dee87e847f4fca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a42ffc66b52e8db408035eb1e3fd03670217a0a1cabe42a972d0dfeb2308997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42df411df161c300edce4e00a51babea135433c68a188f56d438df2665f7a6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9056a32a71e639dc5097e83b11c69037abde76ccd4e3305f13a6617fe15dc4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9056a32a71e639dc5097e83b11c69037abde76ccd4e3305f13a6617fe15dc4f4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:02Z\\\",\\\"message\\\":\\\"s/factory.go:140\\\\nI0217 16:04:02.973587 6352 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:04:02.973759 6352 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:04:02.974155 6352 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:04:02.974619 6352 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 16:04:02.974650 6352 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 16:04:02.974667 6352 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 16:04:02.974674 6352 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 16:04:02.974690 6352 factory.go:656] Stopping watch factory\\\\nI0217 16:04:02.974708 6352 ovnkube.go:599] Stopped ovnkube\\\\nI0217 16:04:02.974735 6352 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 16:04:02.974750 6352 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 16:04:02.974758 6352 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 16\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:04:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4f9wc_openshift-ovn-kubernetes(98a910a1-b5f0-4f34-9d76-6474c753e8e7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24931b90f0faa42a5320df38225b1fc1c4ba21ddb6b43c1ab84047c9178dfea4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d6
0e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4f9wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:27Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.806317 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qfvh7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e418bd1-d1c0-4f75-8fb2-6c74780f648c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4897d237d880d7e444b27a13aba3e1e2d3a7ab13092c77bc1978c08f9ce3e2a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwmgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77542e439619fe148e71b29dda8c7c1957550
c206d27bd12aa640f991b7ab96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwmgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qfvh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:27Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.807755 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.807820 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.807845 4672 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.807915 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.807940 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:27Z","lastTransitionTime":"2026-02-17T16:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.822576 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hqdz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"712be02c-2ccc-4989-aecb-653745bacb0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pbx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pbx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hqdz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:27Z is after 2025-08-24T17:21:41Z" Feb 
17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.842808 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:27Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.861212 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cd2c6-74a5-4567-a141-be56c668e566\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e80bcc09d3a2f37ff69baa34fba8f223e11ce83224b820ba1cb4b6cc8df6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://796310e24dd456ebe7e3886fd47d09ecf942ee59
39fc71da9839c3d89b4a45e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d6dhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:27Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.881126 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5jjr2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7f95d42a206c5e9b8e4b546034635db87f5912e543fea24cccde60817511eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c5985f47fa75e948d85d4404b8a2df3ab6b1f73d7b074553dbf4e3894cad73c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:25Z\\\",\\\"message\\\":\\\"2026-02-17T16:03:40+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fd2c4888-2f49-43b8-af52-10fe708bb3a8\\\\n2026-02-17T16:03:40+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fd2c4888-2f49-43b8-af52-10fe708bb3a8 to /host/opt/cni/bin/\\\\n2026-02-17T16:03:40Z [verbose] multus-daemon started\\\\n2026-02-17T16:03:40Z [verbose] 
Readiness Indicator file check\\\\n2026-02-17T16:04:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:04:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql9k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jjr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:27Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.901473 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:27Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.910653 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.910726 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.910750 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.910779 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.910801 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:27Z","lastTransitionTime":"2026-02-17T16:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.920742 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:27Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.942437 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:27Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.946781 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 16:00:01.954878817 +0000 UTC Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.946846 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.946954 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.947314 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:27 crc kubenswrapper[4672]: E0217 16:04:27.947278 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d" Feb 17 16:04:27 crc kubenswrapper[4672]: E0217 16:04:27.947414 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:27 crc kubenswrapper[4672]: E0217 16:04:27.947610 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.963846 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2g6fq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ede7ba7694732d9f2cedbd2457c3ab638e067106bc5a3c6415f1dd70c86a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9hsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2g6fq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:27Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:27 crc kubenswrapper[4672]: I0217 16:04:27.984261 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:27Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.021829 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.021893 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.021910 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 
16:04:28.021933 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.021950 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:28Z","lastTransitionTime":"2026-02-17T16:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.124653 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.124723 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.124740 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.124768 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.124786 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:28Z","lastTransitionTime":"2026-02-17T16:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.227807 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.227904 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.227928 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.227960 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.227986 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:28Z","lastTransitionTime":"2026-02-17T16:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.331237 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.331290 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.331309 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.331332 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.331350 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:28Z","lastTransitionTime":"2026-02-17T16:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.434260 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.434312 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.434329 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.434353 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.434370 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:28Z","lastTransitionTime":"2026-02-17T16:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.537685 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.537778 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.537803 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.537833 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.537854 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:28Z","lastTransitionTime":"2026-02-17T16:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.640763 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.640853 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.640877 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.640912 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.640936 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:28Z","lastTransitionTime":"2026-02-17T16:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.744103 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.744191 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.744215 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.744245 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.744266 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:28Z","lastTransitionTime":"2026-02-17T16:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.847423 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.847546 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.847576 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.847607 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.847630 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:28Z","lastTransitionTime":"2026-02-17T16:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.944084 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:28 crc kubenswrapper[4672]: E0217 16:04:28.944274 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.947136 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 14:00:32.10818511 +0000 UTC Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.950715 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.950789 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.950816 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.950853 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:28 crc kubenswrapper[4672]: I0217 16:04:28.950875 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:28Z","lastTransitionTime":"2026-02-17T16:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.054500 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.054623 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.054647 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.054678 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.054700 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:29Z","lastTransitionTime":"2026-02-17T16:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.157840 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.157912 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.157931 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.157955 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.157971 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:29Z","lastTransitionTime":"2026-02-17T16:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.260776 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.260846 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.260866 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.260891 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.260910 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:29Z","lastTransitionTime":"2026-02-17T16:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.364974 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.365039 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.365098 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.365121 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.365138 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:29Z","lastTransitionTime":"2026-02-17T16:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.412612 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.412710 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.412738 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.412771 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.412797 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:29Z","lastTransitionTime":"2026-02-17T16:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:29 crc kubenswrapper[4672]: E0217 16:04:29.433333 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"793c4034-4ed2-49c9-abb4-00e3faa205d0\\\",\\\"systemUUID\\\":\\\"561271bd-298c-447a-8ba6-beca2786bcfb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:29Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.438495 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.438568 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.438586 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.438609 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.438626 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:29Z","lastTransitionTime":"2026-02-17T16:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:29 crc kubenswrapper[4672]: E0217 16:04:29.459415 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"793c4034-4ed2-49c9-abb4-00e3faa205d0\\\",\\\"systemUUID\\\":\\\"561271bd-298c-447a-8ba6-beca2786bcfb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:29Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.464373 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.464443 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.464467 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.464502 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.464592 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:29Z","lastTransitionTime":"2026-02-17T16:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:29 crc kubenswrapper[4672]: E0217 16:04:29.484823 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"793c4034-4ed2-49c9-abb4-00e3faa205d0\\\",\\\"systemUUID\\\":\\\"561271bd-298c-447a-8ba6-beca2786bcfb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:29Z is after 2025-08-24T17:21:41Z"
Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.489680 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.489737 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.489755 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.489778 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.489796 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:29Z","lastTransitionTime":"2026-02-17T16:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:29 crc kubenswrapper[4672]: E0217 16:04:29.509402 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"793c4034-4ed2-49c9-abb4-00e3faa205d0\\\",\\\"systemUUID\\\":\\\"561271bd-298c-447a-8ba6-beca2786bcfb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:29Z is after 2025-08-24T17:21:41Z"
Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.514291 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.514348 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.514368 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.514392 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.514411 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:29Z","lastTransitionTime":"2026-02-17T16:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:29 crc kubenswrapper[4672]: E0217 16:04:29.534386 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"793c4034-4ed2-49c9-abb4-00e3faa205d0\\\",\\\"systemUUID\\\":\\\"561271bd-298c-447a-8ba6-beca2786bcfb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:29Z is after 2025-08-24T17:21:41Z"
Feb 17 16:04:29 crc kubenswrapper[4672]: E0217 16:04:29.534670 4672 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.536727 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.536780 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.536798 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.536822 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.536839 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:29Z","lastTransitionTime":"2026-02-17T16:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.640144 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.640246 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.640263 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.640288 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.640306 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:29Z","lastTransitionTime":"2026-02-17T16:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.744868 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.744943 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.744960 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.744986 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.745004 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:29Z","lastTransitionTime":"2026-02-17T16:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.849565 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.849633 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.849653 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.849679 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.849703 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:29Z","lastTransitionTime":"2026-02-17T16:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.944643 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:04:29 crc kubenswrapper[4672]: E0217 16:04:29.944843 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.944917 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.946132 4672 scope.go:117] "RemoveContainer" containerID="9056a32a71e639dc5097e83b11c69037abde76ccd4e3305f13a6617fe15dc4f4" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.946729 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:29 crc kubenswrapper[4672]: E0217 16:04:29.947021 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:29 crc kubenswrapper[4672]: E0217 16:04:29.946910 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.947370 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 09:23:50.797826141 +0000 UTC Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.952648 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.952706 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.952725 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.952749 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:29 crc kubenswrapper[4672]: I0217 16:04:29.952767 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:29Z","lastTransitionTime":"2026-02-17T16:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.056017 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.056078 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.056095 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.056123 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.056144 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:30Z","lastTransitionTime":"2026-02-17T16:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.159340 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.159413 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.159438 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.159466 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.159483 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:30Z","lastTransitionTime":"2026-02-17T16:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.263229 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.263271 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.263285 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.263303 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.263317 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:30Z","lastTransitionTime":"2026-02-17T16:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.367243 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.367298 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.367314 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.367336 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.367352 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:30Z","lastTransitionTime":"2026-02-17T16:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.470169 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.470234 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.470247 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.470265 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.470297 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:30Z","lastTransitionTime":"2026-02-17T16:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.572722 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.572768 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.572782 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.572800 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.572813 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:30Z","lastTransitionTime":"2026-02-17T16:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.603408 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4f9wc_98a910a1-b5f0-4f34-9d76-6474c753e8e7/ovnkube-controller/2.log" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.611383 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" event={"ID":"98a910a1-b5f0-4f34-9d76-6474c753e8e7","Type":"ContainerStarted","Data":"432ab3a5ae33d1f4de114a70bbc405e9c0346cbd9c935aeac9e44d0586f569d1"} Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.612120 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.630906 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.645890 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.658347 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.669804 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2g6fq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ede7ba7694732d9f2cedbd2457c3ab638e067106bc5a3c6415f1dd70c86a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9hsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2g6fq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.675428 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.675483 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.675502 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.675568 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.675586 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:30Z","lastTransitionTime":"2026-02-17T16:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.684096 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"786083a3-395c-4659-b58a-a5517a9aa843\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://514d00f8857c64df263abe974d69503c1ac4ea7d4c78f57e5826d58208bb79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9d911c777bbcce655fc6993bdd85d
a5df16a4402e54b628b839c796f7c784d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7b3941c4c057228fded474417203e3aeb95fcc1df8094bde7b35fd223eec22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://807c21eba860dd45d3dcd3a39ced8648f94e884925efe110065621238ad2e6f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://807c21eba860dd45d3dcd3a39ced8648f94e884925efe110065621238ad2e6f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.698141 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.711778 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1ec84d-96ba-4a95-a24b-c9142495d70d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86645990eea64dfe6b5933473b48df128ceaa3f4fe9da4f8307442da1b6ad808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n84l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.733834 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd1617c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe7175715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\
\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.747173 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921
fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.759360 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.769160 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vst6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d4bb48bf3275028f344bc73ea59e23721f24ba646e485b99181dce129096003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjxx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vst6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.778190 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.778282 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.778299 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.778320 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.778334 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:30Z","lastTransitionTime":"2026-02-17T16:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.799578 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a910a1-b5f0-4f34-9d76-6474c753e8e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb856f7806f65441a26295986d6ee3b1dee692087510547ea5680d7600a5981a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0495a1c586c33fb22e3cff8faaf427f9183f30459e1c4e23d840487fa21c7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbde5168a81766f8e318ce4ebfc055bce7e199abc47db20e3b1767e3fb49c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d969b7db6e8da6d14b08bf6e462b846aeaa463703d040d8dee87e847f4fca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a42ffc66b52e8db408035eb1e3fd03670217a0a1cabe42a972d0dfeb2308997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42df411df161c300edce4e00a51babea135433c68a188f56d438df2665f7a6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://432ab3a5ae33d1f4de114a70bbc405e9c0346cbd9c935aeac9e44d0586f569d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9056a32a71e639dc5097e83b11c69037abde76ccd4e3305f13a6617fe15dc4f4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:02Z\\\",\\\"message\\\":\\\"s/factory.go:140\\\\nI0217 16:04:02.973587 6352 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:04:02.973759 6352 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:04:02.974155 6352 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:04:02.974619 6352 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 16:04:02.974650 6352 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 16:04:02.974667 6352 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 16:04:02.974674 6352 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 16:04:02.974690 6352 factory.go:656] Stopping watch factory\\\\nI0217 16:04:02.974708 6352 ovnkube.go:599] Stopped ovnkube\\\\nI0217 16:04:02.974735 6352 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 16:04:02.974750 6352 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 16:04:02.974758 6352 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 
16\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:04:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24931b90f0faa42a5320df38225b1fc1c4ba21ddb6b43c1ab84047c9178dfea4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4f9wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.815275 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qfvh7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e418bd1-d1c0-4f75-8fb2-6c74780f648c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4897d237d880d7e444b27a13aba3e1e2d3a7ab13092c77bc1978c08f9ce3e2a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwmgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77542e439619fe148e71b29dda8c7c1957550
c206d27bd12aa640f991b7ab96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwmgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qfvh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.831851 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hqdz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"712be02c-2ccc-4989-aecb-653745bacb0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pbx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pbx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hqdz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc 
kubenswrapper[4672]: I0217 16:04:30.845639 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.861176 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.882836 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.882886 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.882897 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.882914 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.882925 4672 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:30Z","lastTransitionTime":"2026-02-17T16:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.885347 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cd2c6-74a5-4567-a141-be56c668e566\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e80bcc09d3a2f37ff69baa34fba8f223e11ce83224b820ba1cb4b6cc8df6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://796310e24dd456ebe7e3886fd47d09ecf942ee5939fc71da9839c3d89b4a45e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d6dhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.904369 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5jjr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7f95d42a206c5e9b8e4b546034635db87f5912e543fea24cccde60817511eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c5985f47fa75e948d85d4404b8a2df3ab6b1f73d7b074553dbf4e3894cad73c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:25Z\\\",\\\"message\\\":\\\"2026-02-17T16:03:40+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_fd2c4888-2f49-43b8-af52-10fe708bb3a8\\\\n2026-02-17T16:03:40+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fd2c4888-2f49-43b8-af52-10fe708bb3a8 to /host/opt/cni/bin/\\\\n2026-02-17T16:03:40Z [verbose] multus-daemon started\\\\n2026-02-17T16:03:40Z [verbose] Readiness Indicator file check\\\\n2026-02-17T16:04:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:04:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql9k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jjr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:30Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.944909 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:30 crc kubenswrapper[4672]: E0217 16:04:30.945050 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.947942 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 02:39:05.111708392 +0000 UTC
Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.985274 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.985341 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.985359 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.985384 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:30 crc kubenswrapper[4672]: I0217 16:04:30.985401 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:30Z","lastTransitionTime":"2026-02-17T16:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.088728 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.088783 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.088800 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.088825 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.088841 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:31Z","lastTransitionTime":"2026-02-17T16:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.192090 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.192140 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.192152 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.192169 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.192183 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:31Z","lastTransitionTime":"2026-02-17T16:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.294991 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.295047 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.295063 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.295084 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.295102 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:31Z","lastTransitionTime":"2026-02-17T16:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.398532 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.398605 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.398623 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.398647 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.398664 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:31Z","lastTransitionTime":"2026-02-17T16:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.502453 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.502557 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.502582 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.502608 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.502625 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:31Z","lastTransitionTime":"2026-02-17T16:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.606166 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.606226 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.606244 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.606267 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.606283 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:31Z","lastTransitionTime":"2026-02-17T16:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.617383 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4f9wc_98a910a1-b5f0-4f34-9d76-6474c753e8e7/ovnkube-controller/3.log"
Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.618455 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4f9wc_98a910a1-b5f0-4f34-9d76-6474c753e8e7/ovnkube-controller/2.log"
Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.623003 4672 generic.go:334] "Generic (PLEG): container finished" podID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerID="432ab3a5ae33d1f4de114a70bbc405e9c0346cbd9c935aeac9e44d0586f569d1" exitCode=1
Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.623058 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" event={"ID":"98a910a1-b5f0-4f34-9d76-6474c753e8e7","Type":"ContainerDied","Data":"432ab3a5ae33d1f4de114a70bbc405e9c0346cbd9c935aeac9e44d0586f569d1"}
Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.623106 4672 scope.go:117] "RemoveContainer" containerID="9056a32a71e639dc5097e83b11c69037abde76ccd4e3305f13a6617fe15dc4f4"
Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.624304 4672 scope.go:117] "RemoveContainer" containerID="432ab3a5ae33d1f4de114a70bbc405e9c0346cbd9c935aeac9e44d0586f569d1"
Feb 17 16:04:31 crc kubenswrapper[4672]: E0217 16:04:31.624771 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-4f9wc_openshift-ovn-kubernetes(98a910a1-b5f0-4f34-9d76-6474c753e8e7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7"
Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.658594 4672 status_manager.go:875] "Failed to update
status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a910a1-b5f0-4f34-9d76-6474c753e8e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb856f7806f65441a26295986d6ee3b1dee692087510547ea5680d7600a5981a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0495a1c586c33fb22e3cff8faaf427f9183f30459e1c4e23d840487fa21c7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbde5168a81766f8e318ce4ebfc055bce7e199abc47db20e3b1767e3fb49c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d969b7db6e8da6d14b08bf6e462b846aeaa463703d040d8dee87e847f4fca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a42ffc66b52e8db408035eb1e3fd03670217a0a1cabe42a972d0dfeb2308997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42df411df161c300edce4e00a51babea135433c68a188f56d438df2665f7a6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://432ab3a5ae33d1f4de114a70bbc405e9c0346cbd9c935aeac9e44d0586f569d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9056a32a71e639dc5097e83b11c69037abde76ccd4e3305f13a6617fe15dc4f4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:02Z\\\",\\\"message\\\":\\\"s/factory.go:140\\\\nI0217 16:04:02.973587 6352 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:04:02.973759 6352 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:04:02.974155 6352 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:04:02.974619 6352 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 16:04:02.974650 6352 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 16:04:02.974667 6352 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 16:04:02.974674 6352 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 16:04:02.974690 6352 factory.go:656] Stopping watch factory\\\\nI0217 16:04:02.974708 6352 ovnkube.go:599] Stopped ovnkube\\\\nI0217 16:04:02.974735 6352 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 16:04:02.974750 6352 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 16:04:02.974758 6352 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 16\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:04:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://432ab3a5ae33d1f4de114a70bbc405e9c0346cbd9c935aeac9e44d0586f569d1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:31Z\\\",\\\"message\\\":\\\":208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 16:04:30.991601 6772 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 16:04:30.991987 6772 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 16:04:30.992158 6772 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:30.992314 6772 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:04:30.992389 6772 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:04:30.992465 6772 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0217 16:04:30.992574 6772 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:30.992666 6772 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:30.993059 6772 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\
\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24931b90f0faa42a5320df38225b1fc1c4ba21ddb6b43c1ab84047c9178dfea4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4f9wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.678238 4672 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qfvh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e418bd1-d1c0-4f75-8fb2-6c74780f648c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4897d237d880d7e444b27a13aba3e1e2d3a7ab13092c77bc1978c08f9ce3e2a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwmgw\\\",\\\"readOnly\\\":
true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77542e439619fe148e71b29dda8c7c1957550c206d27bd12aa640f991b7ab96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwmgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qfvh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.689741 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hqdz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"712be02c-2ccc-4989-aecb-653745bacb0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pbx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pbx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hqdz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:31 crc 
kubenswrapper[4672]: I0217 16:04:31.709174 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.709229 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.709248 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.709272 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.709291 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:31Z","lastTransitionTime":"2026-02-17T16:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.716471 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd1617c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe7175715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.733217 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921
fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.751776 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:04:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.768265 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vst6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d4bb48bf3275028f344bc73ea59e23721f24ba646e485b99181dce129096003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjxx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vst6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.783639 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.802862 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.812000 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.812053 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.812066 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.812083 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.812095 4672 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:31Z","lastTransitionTime":"2026-02-17T16:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.818563 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cd2c6-74a5-4567-a141-be56c668e566\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e80bcc09d3a2f37ff69baa34fba8f223e11ce83224b820ba1cb4b6cc8df6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://796310e24dd456ebe7e3886fd47d09ecf942ee5939fc71da9839c3d89b4a45e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d6dhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-17T16:04:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.836618 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5jjr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7f95d42a206c5e9b8e4b546034635db87f5912e543fea24cccde60817511eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c5985f47fa75e948d85d4404b8a2df3ab6b1f73d7b074553dbf4e3894cad73c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:25Z\\\",\\\"message\\\":\\\"2026-02-17T16:03:40+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_fd2c4888-2f49-43b8-af52-10fe708bb3a8\\\\n2026-02-17T16:03:40+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fd2c4888-2f49-43b8-af52-10fe708bb3a8 to /host/opt/cni/bin/\\\\n2026-02-17T16:03:40Z [verbose] multus-daemon started\\\\n2026-02-17T16:03:40Z [verbose] Readiness Indicator file check\\\\n2026-02-17T16:04:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:04:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql9k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jjr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.850727 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.865202 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.878751 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.890568 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2g6fq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ede7ba7694732d9f2cedbd2457c3ab638e067106bc5a3c6415f1dd70c86a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9hsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2g6fq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.907987 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"786083a3-395c-4659-b58a-a5517a9aa843\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://514d00f8857c64df263abe974d69503c1ac4ea7d4c78f57e5826d58208bb79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9d911c777bbcce655fc6993bdd85da5df16a4402e54b628b839c796f7c784d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7b3941c4c057228fded474417203e3aeb95fcc1df8094bde7b35fd223eec22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\
\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://807c21eba860dd45d3dcd3a39ced8648f94e884925efe110065621238ad2e6f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://807c21eba860dd45d3dcd3a39ced8648f94e884925efe110065621238ad2e6f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.914586 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.914675 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.914693 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 
17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.915209 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.915236 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:31Z","lastTransitionTime":"2026-02-17T16:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.923904 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-ope
rator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.941662 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1ec84d-96ba-4a95-a24b-c9142495d70d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86645990eea64dfe6b5933473b48df128ceaa3f4fe9da4f8307442da1b6ad808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7175
9b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n84l8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.945692 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:04:31 crc kubenswrapper[4672]: E0217 16:04:31.946115 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d" Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.945951 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:31 crc kubenswrapper[4672]: E0217 16:04:31.946651 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.945769 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:31 crc kubenswrapper[4672]: E0217 16:04:31.947063 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.948093 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 08:15:51.099730297 +0000 UTC Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.963007 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1ec84d-96ba-4a95-a24b-c9142495d70d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86645990eea64dfe6b5933473b48df128ceaa3f4fe9da4f8307442da1b6ad808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7175
9b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n84l8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.978160 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"786083a3-395c-4659-b58a-a5517a9aa843\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://514d00f8857c64df263abe974d69503c1ac4ea7d4c78f57e5826d58208bb79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9d911c777bbcce655fc6993bdd85da5df16a4402e54b628b839c796f7c784d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7b3941c4c057228fded474417203e3aeb95fcc1df8094bde7b35fd223eec22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://807c21eba860dd45d3dcd3a39
ced8648f94e884925efe110065621238ad2e6f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://807c21eba860dd45d3dcd3a39ced8648f94e884925efe110065621238ad2e6f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:31 crc kubenswrapper[4672]: I0217 16:04:31.994413 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:31Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.009639 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-c
rc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' 
detected.\\\\nW0217 16:03:31.686123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.018891 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.019127 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.019400 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.019690 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.019947 4672 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:32Z","lastTransitionTime":"2026-02-17T16:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.021596 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.034049 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vst6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d4bb48bf3275028f344bc73ea59e23721f24ba646e485b99181dce129096003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjxx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vst6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.055581 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a910a1-b5f0-4f34-9d76-6474c753e8e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb856f7806f65441a26295986d6ee3b1dee692087510547ea5680d7600a5981a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0495a1c586c33fb22e3cff8faaf427f9183f30459e1c4e23d840487fa21c7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbde5168a81766f8e318ce4ebfc055bce7e199abc47db20e3b1767e3fb49c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d969b7db6e8da6d14b08bf6e462b846aeaa463703d040d8dee87e847f4fca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a42ffc66b52e8db408035eb1e3fd03670217a0a1cabe42a972d0dfeb2308997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42df411df161c300edce4e00a51babea135433c68a188f56d438df2665f7a6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://432ab3a5ae33d1f4de114a70bbc405e9c0346cbd9c935aeac9e44d0586f569d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9056a32a71e639dc5097e83b11c69037abde76ccd4e3305f13a6617fe15dc4f4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:02Z\\\",\\\"message\\\":\\\"s/factory.go:140\\\\nI0217 16:04:02.973587 6352 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:04:02.973759 6352 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:04:02.974155 6352 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 16:04:02.974619 6352 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 16:04:02.974650 6352 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 16:04:02.974667 6352 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 16:04:02.974674 6352 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 16:04:02.974690 6352 factory.go:656] Stopping watch factory\\\\nI0217 16:04:02.974708 6352 ovnkube.go:599] Stopped ovnkube\\\\nI0217 16:04:02.974735 6352 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 16:04:02.974750 6352 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 16:04:02.974758 6352 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 16\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:04:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://432ab3a5ae33d1f4de114a70bbc405e9c0346cbd9c935aeac9e44d0586f569d1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:31Z\\\",\\\"message\\\":\\\":208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 16:04:30.991601 6772 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 16:04:30.991987 6772 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 16:04:30.992158 6772 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:30.992314 6772 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:04:30.992389 6772 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:04:30.992465 6772 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0217 16:04:30.992574 6772 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:30.992666 6772 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:30.993059 6772 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\
\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24931b90f0faa42a5320df38225b1fc1c4ba21ddb6b43c1ab84047c9178dfea4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4f9wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.067720 4672 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qfvh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e418bd1-d1c0-4f75-8fb2-6c74780f648c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4897d237d880d7e444b27a13aba3e1e2d3a7ab13092c77bc1978c08f9ce3e2a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwmgw\\\",\\\"readOnly\\\":
true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77542e439619fe148e71b29dda8c7c1957550c206d27bd12aa640f991b7ab96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwmgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qfvh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.078687 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hqdz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"712be02c-2ccc-4989-aecb-653745bacb0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pbx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pbx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hqdz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:32 crc 
kubenswrapper[4672]: I0217 16:04:32.101018 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd1617c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe7175715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.118614 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cd2c6-74a5-4567-a141-be56c668e566\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e80bcc09d3a2f37ff69baa34fba8f223e11ce83224b820ba1cb4b6cc8df6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://796310e24dd456ebe7e3886fd47d09ecf942ee59
39fc71da9839c3d89b4a45e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d6dhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.122048 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.122102 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.122115 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:32 crc 
kubenswrapper[4672]: I0217 16:04:32.122131 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.122142 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:32Z","lastTransitionTime":"2026-02-17T16:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.136679 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5jjr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7f95d42a206c5e9b8e4b546034635db87f5912e543fea24cccde60817511eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c5985f47fa75e948d85d4404b8a2df3ab6b1f73d7b074553dbf4e3894cad73c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:25Z\\\",\\\"message\\\":\\\"2026-02-17T16:03:40+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fd2c4888-2f49-43b8-af52-10fe708bb3a8\\\\n2026-02-17T16:03:40+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fd2c4888-2f49-43b8-af52-10fe708bb3a8 to /host/opt/cni/bin/\\\\n2026-02-17T16:03:40Z [verbose] multus-daemon started\\\\n2026-02-17T16:03:40Z [verbose] Readiness Indicator file check\\\\n2026-02-17T16:04:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:04:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql9k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jjr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.151335 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":
\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.165768 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.180504 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.192495 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2g6fq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ede7ba7694732d9f2cedbd2457c3ab638e067106bc5a3c6415f1dd70c86a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9hsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2g6fq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.207795 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.221910 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.224716 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.224769 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.224798 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:32 crc 
kubenswrapper[4672]: I0217 16:04:32.224814 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.224824 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:32Z","lastTransitionTime":"2026-02-17T16:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.328456 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.328572 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.328610 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.328637 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.328655 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:32Z","lastTransitionTime":"2026-02-17T16:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.431277 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.431335 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.431351 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.431374 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.431391 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:32Z","lastTransitionTime":"2026-02-17T16:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.534395 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.534473 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.534490 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.534549 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.534569 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:32Z","lastTransitionTime":"2026-02-17T16:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.629436 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4f9wc_98a910a1-b5f0-4f34-9d76-6474c753e8e7/ovnkube-controller/3.log" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.636123 4672 scope.go:117] "RemoveContainer" containerID="432ab3a5ae33d1f4de114a70bbc405e9c0346cbd9c935aeac9e44d0586f569d1" Feb 17 16:04:32 crc kubenswrapper[4672]: E0217 16:04:32.636444 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-4f9wc_openshift-ovn-kubernetes(98a910a1-b5f0-4f34-9d76-6474c753e8e7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.637362 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.639702 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.639744 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.639773 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.639799 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:32Z","lastTransitionTime":"2026-02-17T16:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.653864 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-c
luster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.676136 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.695622 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cd2c6-74a5-4567-a141-be56c668e566\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e80bcc09d3a2f37ff69baa34fba8f223e11ce83224b820ba1cb4b6cc8df6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://796310e24dd456ebe7e3886fd47d09ecf942ee59
39fc71da9839c3d89b4a45e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d6dhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.716974 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5jjr2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7f95d42a206c5e9b8e4b546034635db87f5912e543fea24cccde60817511eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c5985f47fa75e948d85d4404b8a2df3ab6b1f73d7b074553dbf4e3894cad73c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:25Z\\\",\\\"message\\\":\\\"2026-02-17T16:03:40+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fd2c4888-2f49-43b8-af52-10fe708bb3a8\\\\n2026-02-17T16:03:40+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fd2c4888-2f49-43b8-af52-10fe708bb3a8 to /host/opt/cni/bin/\\\\n2026-02-17T16:03:40Z [verbose] multus-daemon started\\\\n2026-02-17T16:03:40Z [verbose] 
Readiness Indicator file check\\\\n2026-02-17T16:04:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:04:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql9k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jjr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.733925 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.742805 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.743015 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.743167 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 
16:04:32.743315 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.743453 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:32Z","lastTransitionTime":"2026-02-17T16:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.755205 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.774773 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.791043 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2g6fq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ede7ba7694732d9f2cedbd2457c3ab638e067106bc5a3c6415f1dd70c86a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9hsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2g6fq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.809135 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"786083a3-395c-4659-b58a-a5517a9aa843\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://514d00f8857c64df263abe974d69503c1ac4ea7d4c78f57e5826d58208bb79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9d911c777bbcce655fc6993bdd85da5df16a4402e54b628b839c796f7c784d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7b3941c4c057228fded474417203e3aeb95fcc1df8094bde7b35fd223eec22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\
\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://807c21eba860dd45d3dcd3a39ced8648f94e884925efe110065621238ad2e6f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://807c21eba860dd45d3dcd3a39ced8648f94e884925efe110065621238ad2e6f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.828879 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.847074 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.847124 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.847141 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.847167 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.847183 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:32Z","lastTransitionTime":"2026-02-17T16:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.851393 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1ec84d-96ba-4a95-a24b-c9142495d70d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86645990eea64dfe6b5933473b48df128ceaa3f4fe9da4f8307442da1b6ad808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n84l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.868638 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qfvh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e418bd1-d1c0-4f75-8fb2-6c74780f648c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4897d237d880d7e444b27a13aba3e1e2d3a7ab13092c77bc1978c08f9ce3e2a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwmgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77542e439619fe148e71b29dda8c7c1957550c206d27bd12aa640f991b7ab96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwmgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qfvh7\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.885125 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hqdz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"712be02c-2ccc-4989-aecb-653745bacb0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pbx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pbx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hqdz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:32 crc 
kubenswrapper[4672]: I0217 16:04:32.918458 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd1617c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe7175715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.941064 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.944222 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:32 crc kubenswrapper[4672]: E0217 16:04:32.944390 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.949199 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 11:14:53.089454336 +0000 UTC Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.951471 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.951571 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.951592 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.951618 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.951643 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:32Z","lastTransitionTime":"2026-02-17T16:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.960669 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:32 crc kubenswrapper[4672]: I0217 16:04:32.976942 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vst6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d4bb48bf3275028f344bc73ea59e23721f24ba646e485b99181dce129096003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjxx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vst6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:32Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.008101 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a910a1-b5f0-4f34-9d76-6474c753e8e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb856f7806f65441a26295986d6ee3b1dee692087510547ea5680d7600a5981a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0495a1c586c33fb22e3cff8faaf427f9183f30459e1c4e23d840487fa21c7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbde5168a81766f8e318ce4ebfc055bce7e199abc47db20e3b1767e3fb49c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d969b7db6e8da6d14b08bf6e462b846aeaa463703d040d8dee87e847f4fca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a42ffc66b52e8db408035eb1e3fd03670217a0a1cabe42a972d0dfeb2308997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42df411df161c300edce4e00a51babea135433c68a188f56d438df2665f7a6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://432ab3a5ae33d1f4de114a70bbc405e9c0346cbd9c935aeac9e44d0586f569d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://432ab3a5ae33d1f4de114a70bbc405e9c0346cbd9c935aeac9e44d0586f569d1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:31Z\\\",\\\"message\\\":\\\":208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 16:04:30.991601 6772 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 16:04:30.991987 6772 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 16:04:30.992158 6772 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:30.992314 6772 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:04:30.992389 6772 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:04:30.992465 6772 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0217 16:04:30.992574 6772 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:30.992666 6772 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:30.993059 6772 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:04:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4f9wc_openshift-ovn-kubernetes(98a910a1-b5f0-4f34-9d76-6474c753e8e7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24931b90f0faa42a5320df38225b1fc1c4ba21ddb6b43c1ab84047c9178dfea4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d6
0e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4f9wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:33Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.054671 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.054803 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.054824 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.054846 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.054865 4672 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:33Z","lastTransitionTime":"2026-02-17T16:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.157982 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.158065 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.158088 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.158119 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.158145 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:33Z","lastTransitionTime":"2026-02-17T16:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.261781 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.261851 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.261869 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.261891 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.261909 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:33Z","lastTransitionTime":"2026-02-17T16:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.365393 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.365476 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.365505 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.365573 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.365596 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:33Z","lastTransitionTime":"2026-02-17T16:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.468914 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.469324 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.469468 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.469707 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.469845 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:33Z","lastTransitionTime":"2026-02-17T16:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.572120 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.572378 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.572438 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.572566 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.572663 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:33Z","lastTransitionTime":"2026-02-17T16:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.674843 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.674882 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.674894 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.674911 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.674923 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:33Z","lastTransitionTime":"2026-02-17T16:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.778813 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.778887 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.778909 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.778939 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.778962 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:33Z","lastTransitionTime":"2026-02-17T16:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.881033 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.881394 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.881590 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.881762 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.881920 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:33Z","lastTransitionTime":"2026-02-17T16:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.944500 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.944623 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.944763 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 16:04:33 crc kubenswrapper[4672]: E0217 16:04:33.945267 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 16:04:33 crc kubenswrapper[4672]: E0217 16:04:33.945026 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d"
Feb 17 16:04:33 crc kubenswrapper[4672]: E0217 16:04:33.945415 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.949604 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 16:09:52.254639789 +0000 UTC
Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.984846 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.984929 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.984947 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.984970 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:33 crc kubenswrapper[4672]: I0217 16:04:33.984988 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:33Z","lastTransitionTime":"2026-02-17T16:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.088197 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.088271 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.088294 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.088325 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.088349 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:34Z","lastTransitionTime":"2026-02-17T16:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.191589 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.191935 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.191960 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.191987 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.192004 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:34Z","lastTransitionTime":"2026-02-17T16:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.294764 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.294824 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.294845 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.294871 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.294889 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:34Z","lastTransitionTime":"2026-02-17T16:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.398362 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.398429 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.398447 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.398471 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.398491 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:34Z","lastTransitionTime":"2026-02-17T16:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.501427 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.501491 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.501538 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.501564 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.501583 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:34Z","lastTransitionTime":"2026-02-17T16:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.604264 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.604350 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.604373 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.604405 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.604431 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:34Z","lastTransitionTime":"2026-02-17T16:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.707600 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.707659 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.707675 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.707701 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.707729 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:34Z","lastTransitionTime":"2026-02-17T16:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.810851 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.811007 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.811042 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.811072 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.811092 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:34Z","lastTransitionTime":"2026-02-17T16:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.913713 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.913755 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.913766 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.913781 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.913793 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:34Z","lastTransitionTime":"2026-02-17T16:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.944722 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 16:04:34 crc kubenswrapper[4672]: E0217 16:04:34.944899 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 16:04:34 crc kubenswrapper[4672]: I0217 16:04:34.949965 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 13:17:28.48123822 +0000 UTC
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.016830 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.016887 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.016910 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.016940 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.016961 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:35Z","lastTransitionTime":"2026-02-17T16:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.120085 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.120173 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.120183 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.120198 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.120210 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:35Z","lastTransitionTime":"2026-02-17T16:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.223487 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.223586 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.223607 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.223630 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.223647 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:35Z","lastTransitionTime":"2026-02-17T16:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.326030 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.326102 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.326115 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.326133 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.326147 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:35Z","lastTransitionTime":"2026-02-17T16:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.428940 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.429007 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.429030 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.429061 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.429084 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:35Z","lastTransitionTime":"2026-02-17T16:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.532644 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.532714 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.532754 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.532788 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.532818 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:35Z","lastTransitionTime":"2026-02-17T16:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.635577 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.635674 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.635692 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.635860 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.635877 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:35Z","lastTransitionTime":"2026-02-17T16:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.638761 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 16:04:35 crc kubenswrapper[4672]: E0217 16:04:35.638909 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:39.638891103 +0000 UTC m=+148.392979855 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.638939 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.638986 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 16:04:35 crc kubenswrapper[4672]: E0217 16:04:35.639084 4672 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 17 16:04:35 crc kubenswrapper[4672]: E0217 16:04:35.639096 4672 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 17 16:04:35 crc kubenswrapper[4672]: E0217 16:04:35.639125 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 16:05:39.63911649 +0000 UTC m=+148.393205232 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 17 16:04:35 crc kubenswrapper[4672]: E0217 16:04:35.639156 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 16:05:39.63913797 +0000 UTC m=+148.393226742 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.738301 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.738353 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.738363 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.738379 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.738394 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:35Z","lastTransitionTime":"2026-02-17T16:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.739420 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.739550 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 16:04:35 crc kubenswrapper[4672]: E0217 16:04:35.739636 4672 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 17 16:04:35 crc kubenswrapper[4672]: E0217 16:04:35.739656 4672 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 17 16:04:35 crc kubenswrapper[4672]: E0217 16:04:35.739668 4672 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 17 16:04:35 crc kubenswrapper[4672]: E0217 16:04:35.739725 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 16:05:39.739707487 +0000 UTC m=+148.493796319 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 17 16:04:35 crc kubenswrapper[4672]: E0217 16:04:35.739729 4672 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 17 16:04:35 crc kubenswrapper[4672]: E0217 16:04:35.739771 4672 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 17 16:04:35 crc kubenswrapper[4672]: E0217 16:04:35.739795 4672 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 17 16:04:35 crc kubenswrapper[4672]: E0217 16:04:35.739882 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 16:05:39.739861922 +0000 UTC m=+148.493950684 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.840822 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.840901 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.840921 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.840943 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.840962 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:35Z","lastTransitionTime":"2026-02-17T16:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.943206 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.943274 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.943299 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.943329 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.943352 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:35Z","lastTransitionTime":"2026-02-17T16:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.943961 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.944044 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.943971 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9"
Feb 17 16:04:35 crc kubenswrapper[4672]: E0217 16:04:35.944157 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 16:04:35 crc kubenswrapper[4672]: E0217 16:04:35.944327 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 16:04:35 crc kubenswrapper[4672]: E0217 16:04:35.944453 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d"
Feb 17 16:04:35 crc kubenswrapper[4672]: I0217 16:04:35.950897 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 22:02:31.434483582 +0000 UTC
Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.046371 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.046421 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.046436 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.046457 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.046471 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:36Z","lastTransitionTime":"2026-02-17T16:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.148957 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.149027 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.149045 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.149068 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.149084 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:36Z","lastTransitionTime":"2026-02-17T16:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.253302 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.253631 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.253657 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.253709 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.253921 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:36Z","lastTransitionTime":"2026-02-17T16:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.356894 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.356961 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.356983 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.357012 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.357037 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:36Z","lastTransitionTime":"2026-02-17T16:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.459683 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.459739 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.459750 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.459766 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.459776 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:36Z","lastTransitionTime":"2026-02-17T16:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.563356 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.563437 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.563465 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.563494 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.563556 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:36Z","lastTransitionTime":"2026-02-17T16:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.668961 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.669019 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.669033 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.669054 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.669072 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:36Z","lastTransitionTime":"2026-02-17T16:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.771658 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.771721 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.771739 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.771762 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.771784 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:36Z","lastTransitionTime":"2026-02-17T16:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.875112 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.875176 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.875195 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.875220 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.875242 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:36Z","lastTransitionTime":"2026-02-17T16:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.944635 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:36 crc kubenswrapper[4672]: E0217 16:04:36.944829 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.951717 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 13:18:53.022711931 +0000 UTC Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.978444 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.978540 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.978554 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.978574 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:36 crc kubenswrapper[4672]: I0217 16:04:36.978590 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:36Z","lastTransitionTime":"2026-02-17T16:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.081910 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.081966 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.081989 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.082017 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.082036 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:37Z","lastTransitionTime":"2026-02-17T16:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.185150 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.185214 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.185232 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.185257 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.185277 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:37Z","lastTransitionTime":"2026-02-17T16:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.288160 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.288232 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.288249 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.288273 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.288290 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:37Z","lastTransitionTime":"2026-02-17T16:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.391690 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.391792 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.391809 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.391834 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.391851 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:37Z","lastTransitionTime":"2026-02-17T16:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.494199 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.494497 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.494624 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.494728 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.494797 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:37Z","lastTransitionTime":"2026-02-17T16:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.596972 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.597215 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.597379 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.597575 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.597748 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:37Z","lastTransitionTime":"2026-02-17T16:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.701028 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.701079 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.701089 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.701106 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.701120 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:37Z","lastTransitionTime":"2026-02-17T16:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.804132 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.804203 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.804214 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.804227 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.804236 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:37Z","lastTransitionTime":"2026-02-17T16:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.907943 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.908294 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.908497 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.908696 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.908851 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:37Z","lastTransitionTime":"2026-02-17T16:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.944819 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:04:37 crc kubenswrapper[4672]: E0217 16:04:37.944993 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d" Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.944844 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.945157 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:37 crc kubenswrapper[4672]: E0217 16:04:37.945216 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:37 crc kubenswrapper[4672]: E0217 16:04:37.945454 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:37 crc kubenswrapper[4672]: I0217 16:04:37.952038 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 12:52:22.581193387 +0000 UTC Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.011743 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.012120 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.012299 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.012480 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.012704 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:38Z","lastTransitionTime":"2026-02-17T16:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.115987 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.116029 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.116038 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.116053 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.116067 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:38Z","lastTransitionTime":"2026-02-17T16:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.219665 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.219722 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.219740 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.219769 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.219787 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:38Z","lastTransitionTime":"2026-02-17T16:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.323157 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.323229 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.323252 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.323284 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.323306 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:38Z","lastTransitionTime":"2026-02-17T16:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.426253 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.426321 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.426343 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.426377 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.426402 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:38Z","lastTransitionTime":"2026-02-17T16:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.529576 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.529655 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.529680 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.529710 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.529736 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:38Z","lastTransitionTime":"2026-02-17T16:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.633294 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.633737 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.633918 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.634109 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.634256 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:38Z","lastTransitionTime":"2026-02-17T16:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.737278 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.737343 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.737353 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.737371 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.737383 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:38Z","lastTransitionTime":"2026-02-17T16:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.846344 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.846420 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.846440 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.846467 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.846485 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:38Z","lastTransitionTime":"2026-02-17T16:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.944708 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:38 crc kubenswrapper[4672]: E0217 16:04:38.944916 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.949793 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.949891 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.949919 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.949955 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.950080 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:38Z","lastTransitionTime":"2026-02-17T16:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:38 crc kubenswrapper[4672]: I0217 16:04:38.952947 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 13:43:31.411388619 +0000 UTC Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.052952 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.053105 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.053135 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.053165 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.053188 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:39Z","lastTransitionTime":"2026-02-17T16:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.156595 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.156660 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.156679 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.156703 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.156719 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:39Z","lastTransitionTime":"2026-02-17T16:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.259376 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.259412 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.259422 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.259438 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.259450 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:39Z","lastTransitionTime":"2026-02-17T16:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.363398 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.363464 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.363487 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.363548 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.363566 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:39Z","lastTransitionTime":"2026-02-17T16:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.466852 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.467200 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.467319 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.467468 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.467648 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:39Z","lastTransitionTime":"2026-02-17T16:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.570708 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.570794 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.570815 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.570842 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.570861 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:39Z","lastTransitionTime":"2026-02-17T16:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.585392 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.585612 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.585729 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.585841 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.585929 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:39Z","lastTransitionTime":"2026-02-17T16:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:39 crc kubenswrapper[4672]: E0217 16:04:39.605293 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"793c4034-4ed2-49c9-abb4-00e3faa205d0\\\",\\\"systemUUID\\\":\\\"561271bd-298c-447a-8ba6-beca2786bcfb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.609465 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.609509 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.609527 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.609565 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.609649 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:39Z","lastTransitionTime":"2026-02-17T16:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:39 crc kubenswrapper[4672]: E0217 16:04:39.628040 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"793c4034-4ed2-49c9-abb4-00e3faa205d0\\\",\\\"systemUUID\\\":\\\"561271bd-298c-447a-8ba6-beca2786bcfb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.632185 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.632315 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.632405 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.632507 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.632617 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:39Z","lastTransitionTime":"2026-02-17T16:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:39 crc kubenswrapper[4672]: E0217 16:04:39.649750 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"793c4034-4ed2-49c9-abb4-00e3faa205d0\\\",\\\"systemUUID\\\":\\\"561271bd-298c-447a-8ba6-beca2786bcfb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.654613 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.654665 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.654679 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.654697 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.654708 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:39Z","lastTransitionTime":"2026-02-17T16:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:39 crc kubenswrapper[4672]: E0217 16:04:39.675514 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"793c4034-4ed2-49c9-abb4-00e3faa205d0\\\",\\\"systemUUID\\\":\\\"561271bd-298c-447a-8ba6-beca2786bcfb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.680263 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.680309 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.680321 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.680342 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.680355 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:39Z","lastTransitionTime":"2026-02-17T16:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:39 crc kubenswrapper[4672]: E0217 16:04:39.697584 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"793c4034-4ed2-49c9-abb4-00e3faa205d0\\\",\\\"systemUUID\\\":\\\"561271bd-298c-447a-8ba6-beca2786bcfb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:39Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:39 crc kubenswrapper[4672]: E0217 16:04:39.697727 4672 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.699287 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.699317 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.699328 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.699343 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.699355 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:39Z","lastTransitionTime":"2026-02-17T16:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.802189 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.802578 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.802738 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.802883 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.803018 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:39Z","lastTransitionTime":"2026-02-17T16:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.905312 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.905382 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.905396 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.905413 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.905426 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:39Z","lastTransitionTime":"2026-02-17T16:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.944952 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.945159 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.945203 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:39 crc kubenswrapper[4672]: E0217 16:04:39.945409 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:39 crc kubenswrapper[4672]: E0217 16:04:39.945606 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:39 crc kubenswrapper[4672]: E0217 16:04:39.945718 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d" Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.953393 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 01:24:22.678642244 +0000 UTC Feb 17 16:04:39 crc kubenswrapper[4672]: I0217 16:04:39.958800 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.009046 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.009088 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.009101 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.009118 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.009130 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:40Z","lastTransitionTime":"2026-02-17T16:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.112380 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.112810 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.112991 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.113146 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.113250 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:40Z","lastTransitionTime":"2026-02-17T16:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.216642 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.216905 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.217127 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.217347 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.217467 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:40Z","lastTransitionTime":"2026-02-17T16:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.320121 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.320849 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.321138 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.321428 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.321734 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:40Z","lastTransitionTime":"2026-02-17T16:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.424359 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.424429 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.424446 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.424470 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.424487 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:40Z","lastTransitionTime":"2026-02-17T16:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.527876 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.527965 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.527983 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.528006 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.528022 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:40Z","lastTransitionTime":"2026-02-17T16:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.631313 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.631374 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.631394 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.631419 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.631437 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:40Z","lastTransitionTime":"2026-02-17T16:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.735015 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.735084 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.735100 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.735126 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.735143 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:40Z","lastTransitionTime":"2026-02-17T16:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.838416 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.838480 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.838496 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.838550 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.838570 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:40Z","lastTransitionTime":"2026-02-17T16:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.941495 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.941606 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.941629 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.941657 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.941681 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:40Z","lastTransitionTime":"2026-02-17T16:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.944008 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:40 crc kubenswrapper[4672]: E0217 16:04:40.944202 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:40 crc kubenswrapper[4672]: I0217 16:04:40.954311 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 04:52:39.927010771 +0000 UTC Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.044624 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.044695 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.044714 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.044739 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.044763 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:41Z","lastTransitionTime":"2026-02-17T16:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.148137 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.148201 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.148219 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.148244 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.148264 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:41Z","lastTransitionTime":"2026-02-17T16:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.251854 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.251923 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.251946 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.251975 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.251997 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:41Z","lastTransitionTime":"2026-02-17T16:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.354898 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.354988 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.355008 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.355038 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.355051 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:41Z","lastTransitionTime":"2026-02-17T16:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.457831 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.457895 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.457912 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.457935 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.457951 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:41Z","lastTransitionTime":"2026-02-17T16:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.561663 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.561721 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.561738 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.561761 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.561784 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:41Z","lastTransitionTime":"2026-02-17T16:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.665568 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.665619 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.665635 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.665658 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.665674 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:41Z","lastTransitionTime":"2026-02-17T16:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.769154 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.769222 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.769243 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.769271 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.769291 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:41Z","lastTransitionTime":"2026-02-17T16:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.872541 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.872588 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.872601 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.872619 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.872631 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:41Z","lastTransitionTime":"2026-02-17T16:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.944290 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.944370 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:04:41 crc kubenswrapper[4672]: E0217 16:04:41.944571 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.944676 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:41 crc kubenswrapper[4672]: E0217 16:04:41.945265 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:41 crc kubenswrapper[4672]: E0217 16:04:41.945539 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.954708 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 15:44:04.100005959 +0000 UTC Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.965132 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:41Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.976120 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.976178 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.976196 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.976220 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.976239 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:41Z","lastTransitionTime":"2026-02-17T16:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.982562 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:41Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:41 crc kubenswrapper[4672]: I0217 16:04:41.997079 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:41Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.011779 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2g6fq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ede7ba7694732d9f2cedbd2457c3ab638e067106bc5a3c6415f1dd70c86a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9hsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2g6fq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.031482 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"786083a3-395c-4659-b58a-a5517a9aa843\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://514d00f8857c64df263abe974d69503c1ac4ea7d4c78f57e5826d58208bb79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9d911c777bbcce655fc6993bdd85da5df16a4402e54b628b839c796f7c784d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7b3941c4c057228fded474417203e3aeb95fcc1df8094bde7b35fd223eec22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\
\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://807c21eba860dd45d3dcd3a39ced8648f94e884925efe110065621238ad2e6f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://807c21eba860dd45d3dcd3a39ced8648f94e884925efe110065621238ad2e6f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.053695 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.080988 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.081062 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.081082 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.081108 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.081127 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:42Z","lastTransitionTime":"2026-02-17T16:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.083730 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1ec84d-96ba-4a95-a24b-c9142495d70d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86645990eea64dfe6b5933473b48df128ceaa3f4fe9da4f8307442da1b6ad808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n84l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.095737 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f98ea0-af9c-44d0-845f-2881b2d5bc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18ddfa41dd4d4d96d358a9443339bd93c045a41dade757c2a9602284057c347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e
18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff6bae4bfe272b613c05076933d2ffcc4369c52d96e17ee03e2f415c145c6f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6bae4bfe272b613c05076933d2ffcc4369c52d96e17ee03e2f415c145c6f58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:42 crc 
kubenswrapper[4672]: I0217 16:04:42.123007 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd1617c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe7175715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.141781 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.155399 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.165478 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vst6k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d4bb48bf3275028f344bc73ea59e23721f24ba646e485b99181dce129096003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjxx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vst6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.185253 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.185313 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.185335 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.185365 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.185386 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:42Z","lastTransitionTime":"2026-02-17T16:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.195249 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a910a1-b5f0-4f34-9d76-6474c753e8e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb856f7806f65441a26295986d6ee3b1dee692087510547ea5680d7600a5981a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0495a1c586c33fb22e3cff8faaf427f9183f30459e1c4e23d840487fa21c7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbde5168a81766f8e318ce4ebfc055bce7e199abc47db20e3b1767e3fb49c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d969b7db6e8da6d14b08bf6e462b846aeaa463703d040d8dee87e847f4fca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a42ffc66b52e8db408035eb1e3fd03670217a0a1cabe42a972d0dfeb2308997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42df411df161c300edce4e00a51babea135433c68a188f56d438df2665f7a6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://432ab3a5ae33d1f4de114a70bbc405e9c0346cbd9c935aeac9e44d0586f569d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://432ab3a5ae33d1f4de114a70bbc405e9c0346cbd9c935aeac9e44d0586f569d1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:31Z\\\",\\\"message\\\":\\\":208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 16:04:30.991601 6772 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 16:04:30.991987 6772 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 16:04:30.992158 6772 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:30.992314 6772 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:04:30.992389 6772 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:04:30.992465 6772 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0217 16:04:30.992574 6772 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:30.992666 6772 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:30.993059 6772 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:04:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4f9wc_openshift-ovn-kubernetes(98a910a1-b5f0-4f34-9d76-6474c753e8e7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24931b90f0faa42a5320df38225b1fc1c4ba21ddb6b43c1ab84047c9178dfea4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d6
0e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4f9wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.214250 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qfvh7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e418bd1-d1c0-4f75-8fb2-6c74780f648c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4897d237d880d7e444b27a13aba3e1e2d3a7ab13092c77bc1978c08f9ce3e2a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwmgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77542e439619fe148e71b29dda8c7c1957550
c206d27bd12aa640f991b7ab96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwmgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qfvh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.236023 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hqdz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"712be02c-2ccc-4989-aecb-653745bacb0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pbx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pbx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hqdz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:42 crc 
kubenswrapper[4672]: I0217 16:04:42.254136 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.269181 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d5edb45d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.282638 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cd2c6-74a5-4567-a141-be56c668e566\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e80bcc09d3a2f37ff69baa34fba8f223e11ce83224b820ba1cb4b6cc8df6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://796310e24dd456ebe7e3886fd47d09ecf942ee59
39fc71da9839c3d89b4a45e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d6dhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.288195 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.288279 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.288305 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:42 crc 
kubenswrapper[4672]: I0217 16:04:42.288336 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.288359 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:42Z","lastTransitionTime":"2026-02-17T16:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.305213 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5jjr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7f95d42a206c5e9b8e4b546034635db87f5912e543fea24cccde60817511eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c5985f47fa75e948d85d4404b8a2df3ab6b1f73d7b074553dbf4e3894cad73c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:25Z\\\",\\\"message\\\":\\\"2026-02-17T16:03:40+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fd2c4888-2f49-43b8-af52-10fe708bb3a8\\\\n2026-02-17T16:03:40+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fd2c4888-2f49-43b8-af52-10fe708bb3a8 to /host/opt/cni/bin/\\\\n2026-02-17T16:03:40Z [verbose] multus-daemon started\\\\n2026-02-17T16:03:40Z [verbose] Readiness Indicator file check\\\\n2026-02-17T16:04:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:04:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql9k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jjr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:42Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.392034 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.392087 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.392099 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.392118 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.392131 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:42Z","lastTransitionTime":"2026-02-17T16:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.494637 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.494688 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.494699 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.494716 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.494730 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:42Z","lastTransitionTime":"2026-02-17T16:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.598267 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.598395 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.598415 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.598440 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.598458 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:42Z","lastTransitionTime":"2026-02-17T16:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.702141 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.702226 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.702250 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.702280 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.702302 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:42Z","lastTransitionTime":"2026-02-17T16:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.805313 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.805404 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.805422 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.805446 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.805463 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:42Z","lastTransitionTime":"2026-02-17T16:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.908320 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.908383 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.908400 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.908425 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.908442 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:42Z","lastTransitionTime":"2026-02-17T16:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.944757 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:42 crc kubenswrapper[4672]: E0217 16:04:42.945015 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:42 crc kubenswrapper[4672]: I0217 16:04:42.954858 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 08:54:58.741580039 +0000 UTC Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.011748 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.011809 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.011844 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.011874 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.011896 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:43Z","lastTransitionTime":"2026-02-17T16:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.114966 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.115019 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.115035 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.115057 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.115076 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:43Z","lastTransitionTime":"2026-02-17T16:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.218384 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.218454 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.218473 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.218496 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.218547 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:43Z","lastTransitionTime":"2026-02-17T16:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.321764 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.321822 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.321864 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.321887 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.321904 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:43Z","lastTransitionTime":"2026-02-17T16:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.425792 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.425873 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.425893 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.425999 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.426025 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:43Z","lastTransitionTime":"2026-02-17T16:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.529102 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.529163 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.529180 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.529204 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.529221 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:43Z","lastTransitionTime":"2026-02-17T16:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.632028 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.632097 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.632117 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.632152 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.632174 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:43Z","lastTransitionTime":"2026-02-17T16:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.735877 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.735968 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.735994 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.736032 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.736057 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:43Z","lastTransitionTime":"2026-02-17T16:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.839180 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.839258 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.839281 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.839309 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.839329 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:43Z","lastTransitionTime":"2026-02-17T16:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.942694 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.942757 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.942776 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.942807 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.942830 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:43Z","lastTransitionTime":"2026-02-17T16:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.944029 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:43 crc kubenswrapper[4672]: E0217 16:04:43.944184 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.944040 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.944238 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:04:43 crc kubenswrapper[4672]: E0217 16:04:43.944322 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:43 crc kubenswrapper[4672]: E0217 16:04:43.944434 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d" Feb 17 16:04:43 crc kubenswrapper[4672]: I0217 16:04:43.955354 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 03:45:36.266507359 +0000 UTC Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.046302 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.046389 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.046412 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.046442 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.046469 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:44Z","lastTransitionTime":"2026-02-17T16:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.149706 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.149780 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.149803 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.149834 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.149863 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:44Z","lastTransitionTime":"2026-02-17T16:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.252601 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.252697 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.252722 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.252765 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.252790 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:44Z","lastTransitionTime":"2026-02-17T16:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.355777 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.355845 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.355864 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.355889 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.355909 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:44Z","lastTransitionTime":"2026-02-17T16:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.459835 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.459912 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.459926 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.459948 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.459962 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:44Z","lastTransitionTime":"2026-02-17T16:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.563122 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.563195 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.563215 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.563240 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.563258 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:44Z","lastTransitionTime":"2026-02-17T16:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.666751 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.666894 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.666974 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.667014 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.667074 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:44Z","lastTransitionTime":"2026-02-17T16:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.770982 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.771042 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.771061 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.771084 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.771102 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:44Z","lastTransitionTime":"2026-02-17T16:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.874292 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.874372 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.874391 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.874416 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.874434 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:44Z","lastTransitionTime":"2026-02-17T16:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.944122 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:44 crc kubenswrapper[4672]: E0217 16:04:44.944333 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.956389 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 00:18:35.352838166 +0000 UTC Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.978095 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.978160 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.978181 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.978209 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:44 crc kubenswrapper[4672]: I0217 16:04:44.978233 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:44Z","lastTransitionTime":"2026-02-17T16:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.081354 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.081413 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.081433 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.081459 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.081477 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:45Z","lastTransitionTime":"2026-02-17T16:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.184193 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.184270 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.184300 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.184331 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.184354 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:45Z","lastTransitionTime":"2026-02-17T16:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.287482 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.287874 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.288058 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.288255 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.288421 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:45Z","lastTransitionTime":"2026-02-17T16:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.397022 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.397102 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.397118 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.397137 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.397151 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:45Z","lastTransitionTime":"2026-02-17T16:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.499871 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.499939 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.499958 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.499986 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.500003 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:45Z","lastTransitionTime":"2026-02-17T16:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.604089 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.604155 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.604177 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.604206 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.604231 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:45Z","lastTransitionTime":"2026-02-17T16:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.706624 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.706668 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.706679 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.706695 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.706706 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:45Z","lastTransitionTime":"2026-02-17T16:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.810101 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.810173 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.810187 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.810208 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.810223 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:45Z","lastTransitionTime":"2026-02-17T16:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.913482 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.913573 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.913585 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.913604 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.913618 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:45Z","lastTransitionTime":"2026-02-17T16:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.944681 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.944782 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:45 crc kubenswrapper[4672]: E0217 16:04:45.944871 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d" Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.944975 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:45 crc kubenswrapper[4672]: E0217 16:04:45.945138 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:45 crc kubenswrapper[4672]: E0217 16:04:45.945271 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:45 crc kubenswrapper[4672]: I0217 16:04:45.957018 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 17:59:36.162341765 +0000 UTC Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.015913 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.016279 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.016435 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.016642 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.016811 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:46Z","lastTransitionTime":"2026-02-17T16:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.120578 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.120628 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.120639 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.120656 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.120668 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:46Z","lastTransitionTime":"2026-02-17T16:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.223462 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.223534 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.223548 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.223564 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.223601 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:46Z","lastTransitionTime":"2026-02-17T16:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.326994 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.327058 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.327073 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.327090 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.327104 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:46Z","lastTransitionTime":"2026-02-17T16:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.429681 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.429742 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.429759 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.429873 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.429899 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:46Z","lastTransitionTime":"2026-02-17T16:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.533277 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.534052 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.534161 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.534292 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.534439 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:46Z","lastTransitionTime":"2026-02-17T16:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.637486 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.637581 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.637599 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.637627 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.637646 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:46Z","lastTransitionTime":"2026-02-17T16:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.740439 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.740538 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.740557 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.740581 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.740598 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:46Z","lastTransitionTime":"2026-02-17T16:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.842847 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.842920 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.842939 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.842966 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.842988 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:46Z","lastTransitionTime":"2026-02-17T16:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.945150 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:46 crc kubenswrapper[4672]: E0217 16:04:46.945418 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.946645 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.946721 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.946748 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.946780 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.946803 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:46Z","lastTransitionTime":"2026-02-17T16:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:46 crc kubenswrapper[4672]: I0217 16:04:46.957932 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 13:34:45.772586149 +0000 UTC Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.050415 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.050477 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.050502 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.050566 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.050590 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:47Z","lastTransitionTime":"2026-02-17T16:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.153302 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.153356 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.153372 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.153393 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.153409 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:47Z","lastTransitionTime":"2026-02-17T16:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.256012 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.256083 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.256110 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.256141 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.256167 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:47Z","lastTransitionTime":"2026-02-17T16:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.358556 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.358641 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.358654 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.358670 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.358681 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:47Z","lastTransitionTime":"2026-02-17T16:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.461423 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.461493 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.461560 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.461591 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.462068 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:47Z","lastTransitionTime":"2026-02-17T16:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.565287 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.565347 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.565364 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.565386 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.565402 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:47Z","lastTransitionTime":"2026-02-17T16:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.668453 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.668545 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.668571 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.668602 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.668626 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:47Z","lastTransitionTime":"2026-02-17T16:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.771419 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.771490 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.771504 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.771548 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.771564 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:47Z","lastTransitionTime":"2026-02-17T16:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.874178 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.874238 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.874249 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.874271 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.874287 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:47Z","lastTransitionTime":"2026-02-17T16:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.944146 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.944181 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.944337 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:47 crc kubenswrapper[4672]: E0217 16:04:47.944425 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:47 crc kubenswrapper[4672]: E0217 16:04:47.944637 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:47 crc kubenswrapper[4672]: E0217 16:04:47.944808 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.945727 4672 scope.go:117] "RemoveContainer" containerID="432ab3a5ae33d1f4de114a70bbc405e9c0346cbd9c935aeac9e44d0586f569d1" Feb 17 16:04:47 crc kubenswrapper[4672]: E0217 16:04:47.945949 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-4f9wc_openshift-ovn-kubernetes(98a910a1-b5f0-4f34-9d76-6474c753e8e7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.958824 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 01:58:10.331692228 +0000 UTC Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.976669 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.976735 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.976747 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.976764 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:47 crc kubenswrapper[4672]: I0217 16:04:47.976779 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:47Z","lastTransitionTime":"2026-02-17T16:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.079950 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.080021 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.080054 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.080096 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.080120 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:48Z","lastTransitionTime":"2026-02-17T16:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.184229 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.184313 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.184342 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.184383 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.184407 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:48Z","lastTransitionTime":"2026-02-17T16:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.287276 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.287345 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.287368 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.287397 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.287419 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:48Z","lastTransitionTime":"2026-02-17T16:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.390746 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.390805 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.390820 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.390840 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.390863 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:48Z","lastTransitionTime":"2026-02-17T16:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.493866 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.493927 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.493947 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.493973 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.493988 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:48Z","lastTransitionTime":"2026-02-17T16:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.597245 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.597316 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.597338 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.597373 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.597396 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:48Z","lastTransitionTime":"2026-02-17T16:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.700407 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.700554 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.700576 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.700601 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.700621 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:48Z","lastTransitionTime":"2026-02-17T16:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.803663 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.803738 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.803750 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.803767 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.803779 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:48Z","lastTransitionTime":"2026-02-17T16:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.907354 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.907413 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.907429 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.907456 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.907473 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:48Z","lastTransitionTime":"2026-02-17T16:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.944128 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:48 crc kubenswrapper[4672]: E0217 16:04:48.944428 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:48 crc kubenswrapper[4672]: I0217 16:04:48.959575 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 00:25:58.706325155 +0000 UTC Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.010862 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.010938 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.010965 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.010994 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.011018 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:49Z","lastTransitionTime":"2026-02-17T16:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.114455 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.114557 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.114577 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.114601 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.114619 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:49Z","lastTransitionTime":"2026-02-17T16:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.218236 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.218295 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.218318 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.218348 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.218372 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:49Z","lastTransitionTime":"2026-02-17T16:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.322216 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.322280 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.322301 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.322329 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.322347 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:49Z","lastTransitionTime":"2026-02-17T16:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.425254 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.425322 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.425343 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.425392 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.425410 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:49Z","lastTransitionTime":"2026-02-17T16:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.529180 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.529248 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.529265 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.529288 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.529305 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:49Z","lastTransitionTime":"2026-02-17T16:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.631142 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.631245 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.631254 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.631267 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.631275 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:49Z","lastTransitionTime":"2026-02-17T16:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.734247 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.734323 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.734343 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.734366 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.734386 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:49Z","lastTransitionTime":"2026-02-17T16:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.836694 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.836739 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.836749 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.836766 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.836778 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:49Z","lastTransitionTime":"2026-02-17T16:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.939853 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.940026 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.940109 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.940145 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.940199 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:49Z","lastTransitionTime":"2026-02-17T16:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.944355 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.944404 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.944432 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:49 crc kubenswrapper[4672]: E0217 16:04:49.944649 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:49 crc kubenswrapper[4672]: E0217 16:04:49.944757 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:49 crc kubenswrapper[4672]: E0217 16:04:49.944865 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d" Feb 17 16:04:49 crc kubenswrapper[4672]: I0217 16:04:49.960118 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 04:36:44.030643348 +0000 UTC Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.043846 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.043922 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.043944 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.043975 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.043997 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:50Z","lastTransitionTime":"2026-02-17T16:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.052379 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.052456 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.052482 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.052554 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.052573 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:50Z","lastTransitionTime":"2026-02-17T16:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:50 crc kubenswrapper[4672]: E0217 16:04:50.076758 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"793c4034-4ed2-49c9-abb4-00e3faa205d0\\\",\\\"systemUUID\\\":\\\"561271bd-298c-447a-8ba6-beca2786bcfb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:50Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.083255 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.083326 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.083337 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.083359 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.083371 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:50Z","lastTransitionTime":"2026-02-17T16:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:50 crc kubenswrapper[4672]: E0217 16:04:50.105686 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"793c4034-4ed2-49c9-abb4-00e3faa205d0\\\",\\\"systemUUID\\\":\\\"561271bd-298c-447a-8ba6-beca2786bcfb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:50Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.112257 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.112356 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.112407 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.112433 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.112477 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:50Z","lastTransitionTime":"2026-02-17T16:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:50 crc kubenswrapper[4672]: E0217 16:04:50.136874 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"793c4034-4ed2-49c9-abb4-00e3faa205d0\\\",\\\"systemUUID\\\":\\\"561271bd-298c-447a-8ba6-beca2786bcfb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:50Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.142594 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.142664 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.142683 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.142707 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.142730 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:50Z","lastTransitionTime":"2026-02-17T16:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:50 crc kubenswrapper[4672]: E0217 16:04:50.165046 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"793c4034-4ed2-49c9-abb4-00e3faa205d0\\\",\\\"systemUUID\\\":\\\"561271bd-298c-447a-8ba6-beca2786bcfb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:50Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.171424 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.171497 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.171543 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.171573 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.171592 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:50Z","lastTransitionTime":"2026-02-17T16:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:50 crc kubenswrapper[4672]: E0217 16:04:50.194271 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:04:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"793c4034-4ed2-49c9-abb4-00e3faa205d0\\\",\\\"systemUUID\\\":\\\"561271bd-298c-447a-8ba6-beca2786bcfb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:50Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:50 crc kubenswrapper[4672]: E0217 16:04:50.194599 4672 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.197267 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.197330 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.197357 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.197392 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.197418 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:50Z","lastTransitionTime":"2026-02-17T16:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.300403 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.300464 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.300483 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.300505 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.300565 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:50Z","lastTransitionTime":"2026-02-17T16:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.404037 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.404117 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.404141 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.404173 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.404198 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:50Z","lastTransitionTime":"2026-02-17T16:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.507415 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.507491 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.507558 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.507590 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.507612 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:50Z","lastTransitionTime":"2026-02-17T16:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.610684 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.610752 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.610776 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.610806 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.610830 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:50Z","lastTransitionTime":"2026-02-17T16:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.713782 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.713852 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.713878 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.713905 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.713927 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:50Z","lastTransitionTime":"2026-02-17T16:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.817282 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.817333 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.817351 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.817374 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.817394 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:50Z","lastTransitionTime":"2026-02-17T16:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.920502 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.920779 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.920802 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.920823 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.920839 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:50Z","lastTransitionTime":"2026-02-17T16:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.943924 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:50 crc kubenswrapper[4672]: E0217 16:04:50.944121 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:50 crc kubenswrapper[4672]: I0217 16:04:50.961197 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 05:18:06.224383097 +0000 UTC Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.023861 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.023924 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.023950 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.023975 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.023996 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:51Z","lastTransitionTime":"2026-02-17T16:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.126808 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.127609 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.127870 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.128051 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.128190 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:51Z","lastTransitionTime":"2026-02-17T16:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.230786 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.230844 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.230867 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.230894 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.230915 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:51Z","lastTransitionTime":"2026-02-17T16:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.333459 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.333552 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.333576 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.333602 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.333623 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:51Z","lastTransitionTime":"2026-02-17T16:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.436781 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.436842 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.436863 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.436890 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.436912 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:51Z","lastTransitionTime":"2026-02-17T16:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.539456 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.539565 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.539590 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.539619 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.539639 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:51Z","lastTransitionTime":"2026-02-17T16:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.643117 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.643198 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.643227 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.643254 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.643278 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:51Z","lastTransitionTime":"2026-02-17T16:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.745795 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.746478 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.746646 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.746727 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.746795 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:51Z","lastTransitionTime":"2026-02-17T16:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.850849 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.850909 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.850919 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.850943 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.850955 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:51Z","lastTransitionTime":"2026-02-17T16:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.944505 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.944542 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:51 crc kubenswrapper[4672]: E0217 16:04:51.944843 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d" Feb 17 16:04:51 crc kubenswrapper[4672]: E0217 16:04:51.945066 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.945178 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:51 crc kubenswrapper[4672]: E0217 16:04:51.945557 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.953229 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.953282 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.953301 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.953324 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.953344 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:51Z","lastTransitionTime":"2026-02-17T16:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.961331 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 03:00:50.265784896 +0000 UTC Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.963872 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d539581-cd17-46b9-8668-271c89565030\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\\\",\\\"image\\\":\\\"quay.io/crcont/op
enshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0217 16:03:25.590243 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 16:03:25.593094 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1557813218/tls.crt::/tmp/serving-cert-1557813218/tls.key\\\\\\\"\\\\nI0217 16:03:31.673012 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 16:03:31.680487 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 16:03:31.680607 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 16:03:31.680666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 16:03:31.680693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 16:03:31.686069 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 16:03:31.686102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686109 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 16:03:31.686115 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 16:03:31.686119 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 16:03:31.686123 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 16:03:31.686126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 16:03:31.686134 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 16:03:31.689123 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:51Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.979549 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60439043687f49e6a6ffb68cabcec619397fb6994e804b3c129cdc3c4cb6631d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T16:04:51Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:51 crc kubenswrapper[4672]: I0217 16:04:51.993571 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vst6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a619f2f-0992-4440-ac8c-bc513eaf2cfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d4bb48bf3275028f344bc73ea59e23721f24ba646e485b99181dce129096003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5bjxx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vst6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:51Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.041093 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98a910a1-b5f0-4f34-9d76-6474c753e8e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb856f7806f65441a26295986d6ee3b1dee692087510547ea5680d7600a5981a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0495a1c586c33fb22e3cff8faaf427f9183f30459e1c4e23d840487fa21c7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbde5168a81766f8e318ce4ebfc055bce7e199abc47db20e3b1767e3fb49c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d969b7db6e8da6d14b08bf6e462b846aeaa463703d040d8dee87e847f4fca314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a42ffc66b52e8db408035eb1e3fd03670217a0a1cabe42a972d0dfeb2308997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42df411df161c300edce4e00a51babea135433c68a188f56d438df2665f7a6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://432ab3a5ae33d1f4de114a70bbc405e9c0346cbd9c935aeac9e44d0586f569d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://432ab3a5ae33d1f4de114a70bbc405e9c0346cbd9c935aeac9e44d0586f569d1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T16:04:31Z\\\",\\\"message\\\":\\\":208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 16:04:30.991601 6772 
handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 16:04:30.991987 6772 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 16:04:30.992158 6772 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:30.992314 6772 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:04:30.992389 6772 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0217 16:04:30.992465 6772 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0217 16:04:30.992574 6772 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:30.992666 6772 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 16:04:30.993059 6772 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:04:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4f9wc_openshift-ovn-kubernetes(98a910a1-b5f0-4f34-9d76-6474c753e8e7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24931b90f0faa42a5320df38225b1fc1c4ba21ddb6b43c1ab84047c9178dfea4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3984b1057bc27fc0d6
0e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t59bf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4f9wc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:52Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.056688 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.056752 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.056771 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.056796 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.056812 4672 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:52Z","lastTransitionTime":"2026-02-17T16:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.067631 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qfvh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e418bd1-d1c0-4f75-8fb2-6c74780f648c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4897d237d880d7e444b27a13aba3e1e2d3a7ab13092c77bc1978c08f9ce3e2a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwmgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77542e439619fe148e71b29dda8c7c1957550c206d27bd12aa640f991b7ab96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwmgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qfvh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:52Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.081366 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hqdz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"712be02c-2ccc-4989-aecb-653745bacb0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pbx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pbx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hqdz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:52Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:52 crc 
kubenswrapper[4672]: I0217 16:04:52.093044 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24f98ea0-af9c-44d0-845f-2881b2d5bc76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d18ddfa41dd4d4d96d358a9443339bd93c045a41dade757c2a9602284057c347\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://ff6bae4bfe272b613c05076933d2ffcc4369c52d96e17ee03e2f415c145c6f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff6bae4bfe272b613c05076933d2ffcc4369c52d96e17ee03e2f415c145c6f58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:52Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.117734 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8f74075-94e1-42e3-ab2c-b8f955ab5243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba3c60da22b77e2230dc732204814325960cd7a5b01b71d7fc8644305c09f0d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a95f7a0d71d84b7e6337b00720ec38dbebec6e3df18438ef39a6d315cd1617c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae6c7774fecb4fd12775119593d61b5a3fa954d20a04f08ebea36643491a740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5fe7175715d27c2635e1fbe900bb1edb7fbfb0cadd7aeda718fb209429db6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86982fa8c2597a415cb002bbafb954b2d57444056e0ef22a701ea4063e29dcf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82e6e1f3103eccd13dff86734795ad34e9b088614a3511cf77fbe0a41b2587a\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66981e623cd38ba17f442d34f2696cc0ea6259409f9e6a5365f293827ebbf439\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a50e1f1880c9ad90ac304e6ed2d02068170e3791cb3398abebc63633441635d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:52Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.130901 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cd2c6-74a5-4567-a141-be56c668e566\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2e80bcc09d3a2f37ff69baa34fba8f223e11ce83224b820ba1cb4b6cc8df6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c4
2745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://796310e24dd456ebe7e3886fd47d09ecf942ee5939fc71da9839c3d89b4a45e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kl6qq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-d6dhs\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:52Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.144358 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5jjr2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:04:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7f95d42a206c5e9b8e4b546034635db87f5912e543fea24cccde60817511eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c5985f47fa75e948d85d4404b8a2df3ab6b1f73d7b074553dbf4e3894cad73c\\\",\\\"exitCode\\\":1,
\\\"finishedAt\\\":\\\"2026-02-17T16:04:25Z\\\",\\\"message\\\":\\\"2026-02-17T16:03:40+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fd2c4888-2f49-43b8-af52-10fe708bb3a8\\\\n2026-02-17T16:03:40+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fd2c4888-2f49-43b8-af52-10fe708bb3a8 to /host/opt/cni/bin/\\\\n2026-02-17T16:03:40Z [verbose] multus-daemon started\\\\n2026-02-17T16:03:40Z [verbose] Readiness Indicator file check\\\\n2026-02-17T16:04:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:04:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostro
ot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql9k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jjr2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:52Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.154984 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a10c9-304a-4bd2-859a-3b048ad9bdb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16b49286ea33e1f1ae14ac09905593e189319bae5b2bb3a04932e341ff75b528\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12589c55c0e37da817797803a41724ee1a12a572e0fbb0210cfeeeb8e3c5e672\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0af28defd5c3ec77267c5b7d20c5780fc4309ab5932c22db1b6ee7ced830627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T16:03:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:52Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.160199 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.160347 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.160439 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.160554 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.160645 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:52Z","lastTransitionTime":"2026-02-17T16:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.169977 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600e2fefd97f6c554dc7f9ccbf277994e9f3fed5a2f9a727320afc936ea753fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d5edb4
5d008bbbd897e1e6780babcc821dc9abf4ee893deceb071e40944141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:52Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.182872 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:52Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.192775 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2g6fq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffeb52c8-e4ea-4211-8265-c0e72f364fcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5ede7ba7694732d9f2cedbd2457c3ab638e067106bc5a3c6415f1dd70c86a72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9hsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2g6fq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:52Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.204746 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:52Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.216771 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:52Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.232328 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n84l8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1ec84d-96ba-4a95-a24b-c9142495d70d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86645990eea64dfe6b5933473b48df128ceaa3f4fe9da4f8307442da1b6ad808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5299bd8e41a853641d24e590f389ceeab8bd835b55d5a7c3513257094a7a2290\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8fd0d9abcfae7c4058afcd54359b2685f41b11f97ff1bc651f0b565f2868\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://849b7bb41251bdad6a83de86752c91a2349cd7ea6cc902d4648a6019b2b867d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7175
9b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c71759b01c0fc6de78f76ef14e9b1cdeb053c838487f49f310094e4c4319d551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd989c91d99f951ddc44970800b25095f1363f1aea2758773a248c1c678e640a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f4cf41afedbef6e989ca3c9cc600233cdace38365ef39896900f6fe6cb29c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ftr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n84l8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:52Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.244926 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"786083a3-395c-4659-b58a-a5517a9aa843\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://514d00f8857c64df263abe974d69503c1ac4ea7d4c78f57e5826d58208bb79f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9d911c777bbcce655fc6993bdd85da5df16a4402e54b628b839c796f7c784d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7b3941c4c057228fded474417203e3aeb95fcc1df8094bde7b35fd223eec22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://807c21eba860dd45d3dcd3a39
ced8648f94e884925efe110065621238ad2e6f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://807c21eba860dd45d3dcd3a39ced8648f94e884925efe110065621238ad2e6f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T16:03:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T16:03:13Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T16:03:12Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:52Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.260372 4672 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T16:03:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abe4db1edd1f7e8872efcd5149196d174b54c6c80c6153559ecc83591047d1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T16:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T16:04:52Z is after 2025-08-24T17:21:41Z" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.263788 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.263918 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.264006 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.264084 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.264149 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:52Z","lastTransitionTime":"2026-02-17T16:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.366815 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.367154 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.367227 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.367336 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.367409 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:52Z","lastTransitionTime":"2026-02-17T16:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.470017 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.470055 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.470067 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.470085 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.470096 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:52Z","lastTransitionTime":"2026-02-17T16:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.572125 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.572188 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.572206 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.572230 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.572247 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:52Z","lastTransitionTime":"2026-02-17T16:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.675318 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.675579 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.675693 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.675837 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.675917 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:52Z","lastTransitionTime":"2026-02-17T16:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.778493 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.779820 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.779904 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.779944 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.779987 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:52Z","lastTransitionTime":"2026-02-17T16:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.884106 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.884559 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.884712 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.884840 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.884998 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:52Z","lastTransitionTime":"2026-02-17T16:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.944699 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:52 crc kubenswrapper[4672]: E0217 16:04:52.945144 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.961914 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 06:40:39.859761341 +0000 UTC Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.988968 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.989026 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.989039 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.989057 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:52 crc kubenswrapper[4672]: I0217 16:04:52.989070 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:52Z","lastTransitionTime":"2026-02-17T16:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.092026 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.092100 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.092114 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.092135 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.092156 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:53Z","lastTransitionTime":"2026-02-17T16:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.195435 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.195495 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.195543 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.195568 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.195588 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:53Z","lastTransitionTime":"2026-02-17T16:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.298842 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.298911 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.298930 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.298956 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.298974 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:53Z","lastTransitionTime":"2026-02-17T16:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.402721 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.403322 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.403505 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.403671 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.403788 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:53Z","lastTransitionTime":"2026-02-17T16:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.506479 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.506590 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.506610 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.506641 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.506662 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:53Z","lastTransitionTime":"2026-02-17T16:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.609787 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.609846 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.609859 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.609877 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.609889 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:53Z","lastTransitionTime":"2026-02-17T16:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.713175 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.713267 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.713286 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.713313 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.713332 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:53Z","lastTransitionTime":"2026-02-17T16:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.816611 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.816677 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.816698 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.816724 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.816742 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:53Z","lastTransitionTime":"2026-02-17T16:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.919869 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.920221 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.920381 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.920631 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.920859 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:53Z","lastTransitionTime":"2026-02-17T16:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.944436 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9"
Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.944450 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.944622 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 16:04:53 crc kubenswrapper[4672]: E0217 16:04:53.945466 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 16:04:53 crc kubenswrapper[4672]: E0217 16:04:53.945158 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d"
Feb 17 16:04:53 crc kubenswrapper[4672]: E0217 16:04:53.945607 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 16:04:53 crc kubenswrapper[4672]: I0217 16:04:53.962958 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 12:42:34.195465267 +0000 UTC
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.024075 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.024151 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.024169 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.024195 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.024218 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:54Z","lastTransitionTime":"2026-02-17T16:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.127221 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.127291 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.127315 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.127343 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.127365 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:54Z","lastTransitionTime":"2026-02-17T16:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.230384 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.230440 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.230457 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.230477 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.230490 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:54Z","lastTransitionTime":"2026-02-17T16:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.332923 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.332981 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.333002 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.333027 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.333040 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:54Z","lastTransitionTime":"2026-02-17T16:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.436230 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.436284 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.436297 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.436315 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.436327 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:54Z","lastTransitionTime":"2026-02-17T16:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.538989 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.539051 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.539067 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.539088 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.539105 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:54Z","lastTransitionTime":"2026-02-17T16:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.641628 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.641677 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.641691 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.641707 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.641718 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:54Z","lastTransitionTime":"2026-02-17T16:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.744939 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.745041 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.745057 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.745077 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.745092 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:54Z","lastTransitionTime":"2026-02-17T16:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.848211 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.848297 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.848322 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.848354 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.848379 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:54Z","lastTransitionTime":"2026-02-17T16:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.945012 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 16:04:54 crc kubenswrapper[4672]: E0217 16:04:54.945277 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.951757 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.951848 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.951870 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.951899 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.951919 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:54Z","lastTransitionTime":"2026-02-17T16:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:54 crc kubenswrapper[4672]: I0217 16:04:54.963370 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 16:02:11.094481375 +0000 UTC
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.054070 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.054138 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.054157 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.054180 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.054194 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:55Z","lastTransitionTime":"2026-02-17T16:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.156715 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.156767 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.156780 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.156798 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.156809 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:55Z","lastTransitionTime":"2026-02-17T16:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.259979 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.260049 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.260072 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.260103 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.260126 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:55Z","lastTransitionTime":"2026-02-17T16:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.363765 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.363847 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.363869 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.363905 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.363930 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:55Z","lastTransitionTime":"2026-02-17T16:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.466940 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.467026 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.467047 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.467072 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.467090 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:55Z","lastTransitionTime":"2026-02-17T16:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.569763 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/712be02c-2ccc-4989-aecb-653745bacb0d-metrics-certs\") pod \"network-metrics-daemon-hqdz9\" (UID: \"712be02c-2ccc-4989-aecb-653745bacb0d\") " pod="openshift-multus/network-metrics-daemon-hqdz9"
Feb 17 16:04:55 crc kubenswrapper[4672]: E0217 16:04:55.570070 4672 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 17 16:04:55 crc kubenswrapper[4672]: E0217 16:04:55.570212 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/712be02c-2ccc-4989-aecb-653745bacb0d-metrics-certs podName:712be02c-2ccc-4989-aecb-653745bacb0d nodeName:}" failed. No retries permitted until 2026-02-17 16:05:59.570178273 +0000 UTC m=+168.324267055 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/712be02c-2ccc-4989-aecb-653745bacb0d-metrics-certs") pod "network-metrics-daemon-hqdz9" (UID: "712be02c-2ccc-4989-aecb-653745bacb0d") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.570777 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.571009 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.571161 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.571385 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.571581 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:55Z","lastTransitionTime":"2026-02-17T16:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.675053 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.675115 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.675141 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.675170 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.675191 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:55Z","lastTransitionTime":"2026-02-17T16:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.778342 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.778402 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.778419 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.778442 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.778457 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:55Z","lastTransitionTime":"2026-02-17T16:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.882016 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.882080 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.882103 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.882129 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.882148 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:55Z","lastTransitionTime":"2026-02-17T16:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.944075 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.944140 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 16:04:55 crc kubenswrapper[4672]: E0217 16:04:55.944260 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.944357 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9"
Feb 17 16:04:55 crc kubenswrapper[4672]: E0217 16:04:55.944465 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 16:04:55 crc kubenswrapper[4672]: E0217 16:04:55.944885 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.964101 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 07:11:56.956920999 +0000 UTC
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.985824 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.985923 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.985943 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.985969 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:55 crc kubenswrapper[4672]: I0217 16:04:55.985988 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:55Z","lastTransitionTime":"2026-02-17T16:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.088320 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.088365 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.088382 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.088404 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.088421 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:56Z","lastTransitionTime":"2026-02-17T16:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.191211 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.191252 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.191265 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.191280 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.191289 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:56Z","lastTransitionTime":"2026-02-17T16:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.294315 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.294395 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.294428 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.294457 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.294476 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:56Z","lastTransitionTime":"2026-02-17T16:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.397917 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.398001 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.398027 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.398059 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.398093 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:56Z","lastTransitionTime":"2026-02-17T16:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.500487 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.500599 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.500628 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.500656 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.500678 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:56Z","lastTransitionTime":"2026-02-17T16:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.603706 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.603850 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.603875 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.603902 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.603923 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:56Z","lastTransitionTime":"2026-02-17T16:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.707239 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.707307 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.707329 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.707357 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.707380 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:56Z","lastTransitionTime":"2026-02-17T16:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.810199 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.810259 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.810276 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.810304 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.810321 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:56Z","lastTransitionTime":"2026-02-17T16:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.914318 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.914388 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.914409 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.914435 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.914452 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:56Z","lastTransitionTime":"2026-02-17T16:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.944931 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:04:56 crc kubenswrapper[4672]: E0217 16:04:56.945103 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:04:56 crc kubenswrapper[4672]: I0217 16:04:56.965305 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 22:37:53.72470484 +0000 UTC Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.017289 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.017351 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.017372 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.017401 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.017422 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:57Z","lastTransitionTime":"2026-02-17T16:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.119985 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.120060 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.120082 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.120110 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.120127 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:57Z","lastTransitionTime":"2026-02-17T16:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.223492 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.223581 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.223593 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.223619 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.223633 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:57Z","lastTransitionTime":"2026-02-17T16:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.326044 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.326119 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.326142 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.326173 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.326197 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:57Z","lastTransitionTime":"2026-02-17T16:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.429451 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.429555 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.429582 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.429612 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.429634 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:57Z","lastTransitionTime":"2026-02-17T16:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.533639 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.533698 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.533715 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.533738 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.533754 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:57Z","lastTransitionTime":"2026-02-17T16:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.637339 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.637400 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.637422 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.637451 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.637471 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:57Z","lastTransitionTime":"2026-02-17T16:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.739898 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.739952 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.739969 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.739993 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.740009 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:57Z","lastTransitionTime":"2026-02-17T16:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.842912 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.842979 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.843004 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.843032 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.843055 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:57Z","lastTransitionTime":"2026-02-17T16:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.944045 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.944084 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:04:57 crc kubenswrapper[4672]: E0217 16:04:57.944269 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:04:57 crc kubenswrapper[4672]: E0217 16:04:57.944503 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.944913 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:04:57 crc kubenswrapper[4672]: E0217 16:04:57.945165 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.946023 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.946077 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.946099 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.946125 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.946146 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:57Z","lastTransitionTime":"2026-02-17T16:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:57 crc kubenswrapper[4672]: I0217 16:04:57.966376 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 18:55:51.438120486 +0000 UTC Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.048812 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.048863 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.048879 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.048900 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.048917 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:58Z","lastTransitionTime":"2026-02-17T16:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.151873 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.151952 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.151970 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.151997 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.152015 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:58Z","lastTransitionTime":"2026-02-17T16:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.255373 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.255422 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.255440 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.255463 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.255482 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:58Z","lastTransitionTime":"2026-02-17T16:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.357853 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.357899 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.357917 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.357943 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.357959 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:58Z","lastTransitionTime":"2026-02-17T16:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.460703 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.460738 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.460756 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.460777 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.460793 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:58Z","lastTransitionTime":"2026-02-17T16:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.565288 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.565349 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.565372 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.565400 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.565421 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:58Z","lastTransitionTime":"2026-02-17T16:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.668628 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.668675 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.668691 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.668711 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.668727 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:58Z","lastTransitionTime":"2026-02-17T16:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.771878 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.771935 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.771951 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.771974 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.771991 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:58Z","lastTransitionTime":"2026-02-17T16:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.876034 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.876128 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.876155 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.876187 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.876221 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:58Z","lastTransitionTime":"2026-02-17T16:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.944193 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 16:04:58 crc kubenswrapper[4672]: E0217 16:04:58.944403 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.967474 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 02:59:02.585256604 +0000 UTC
Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.978817 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.978875 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.978893 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.978916 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:58 crc kubenswrapper[4672]: I0217 16:04:58.978938 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:58Z","lastTransitionTime":"2026-02-17T16:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.081954 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.081994 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.082002 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.082015 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.082024 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:59Z","lastTransitionTime":"2026-02-17T16:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.185239 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.185278 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.185291 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.185309 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.185324 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:59Z","lastTransitionTime":"2026-02-17T16:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.288704 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.288774 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.288796 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.288825 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.288847 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:59Z","lastTransitionTime":"2026-02-17T16:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.392877 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.392938 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.392954 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.392975 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.392989 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:59Z","lastTransitionTime":"2026-02-17T16:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.495992 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.496089 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.496157 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.496180 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.496239 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:59Z","lastTransitionTime":"2026-02-17T16:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.598674 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.598736 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.598761 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.598789 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.598810 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:59Z","lastTransitionTime":"2026-02-17T16:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.702765 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.702825 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.702843 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.702867 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.702885 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:59Z","lastTransitionTime":"2026-02-17T16:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.806849 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.807265 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.807578 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.807777 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.807966 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:59Z","lastTransitionTime":"2026-02-17T16:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.911453 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.911889 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.912049 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.912186 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.912324 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:04:59Z","lastTransitionTime":"2026-02-17T16:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.944310 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.944421 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.944552 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9"
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.945469 4672 scope.go:117] "RemoveContainer" containerID="432ab3a5ae33d1f4de114a70bbc405e9c0346cbd9c935aeac9e44d0586f569d1"
Feb 17 16:04:59 crc kubenswrapper[4672]: E0217 16:04:59.945778 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 16:04:59 crc kubenswrapper[4672]: E0217 16:04:59.945786 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-4f9wc_openshift-ovn-kubernetes(98a910a1-b5f0-4f34-9d76-6474c753e8e7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7"
Feb 17 16:04:59 crc kubenswrapper[4672]: E0217 16:04:59.946173 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d"
Feb 17 16:04:59 crc kubenswrapper[4672]: E0217 16:04:59.945945 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 16:04:59 crc kubenswrapper[4672]: I0217 16:04:59.968426 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 18:26:05.985944803 +0000 UTC
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.015768 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.015833 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.015850 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.015874 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.015891 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:05:00Z","lastTransitionTime":"2026-02-17T16:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.119252 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.119339 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.119363 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.119392 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.119410 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:05:00Z","lastTransitionTime":"2026-02-17T16:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.222254 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.222301 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.222365 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.222406 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.222427 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:05:00Z","lastTransitionTime":"2026-02-17T16:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.324842 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.324911 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.324928 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.324952 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.324973 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:05:00Z","lastTransitionTime":"2026-02-17T16:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.428188 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.428253 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.428270 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.428294 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.428311 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:05:00Z","lastTransitionTime":"2026-02-17T16:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.429703 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.429772 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.429785 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.429802 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.429816 4672 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T16:05:00Z","lastTransitionTime":"2026-02-17T16:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.504489 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-5mv4w"]
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.505066 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5mv4w"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.508298 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.508378 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.508387 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.513709 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.524733 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/94fc639b-cd46-42fe-a597-4909e6abe07c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5mv4w\" (UID: \"94fc639b-cd46-42fe-a597-4909e6abe07c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5mv4w"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.524787 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/94fc639b-cd46-42fe-a597-4909e6abe07c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5mv4w\" (UID: \"94fc639b-cd46-42fe-a597-4909e6abe07c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5mv4w"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.524880 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94fc639b-cd46-42fe-a597-4909e6abe07c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5mv4w\" (UID: \"94fc639b-cd46-42fe-a597-4909e6abe07c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5mv4w"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.524974 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/94fc639b-cd46-42fe-a597-4909e6abe07c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5mv4w\" (UID: \"94fc639b-cd46-42fe-a597-4909e6abe07c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5mv4w"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.525020 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/94fc639b-cd46-42fe-a597-4909e6abe07c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5mv4w\" (UID: \"94fc639b-cd46-42fe-a597-4909e6abe07c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5mv4w"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.594621 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qfvh7" podStartSLOduration=83.594589594 podStartE2EDuration="1m23.594589594s" podCreationTimestamp="2026-02-17 16:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:00.57524831 +0000 UTC m=+109.329337072" watchObservedRunningTime="2026-02-17 16:05:00.594589594 +0000 UTC m=+109.348678356"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.625957 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94fc639b-cd46-42fe-a597-4909e6abe07c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5mv4w\" (UID: \"94fc639b-cd46-42fe-a597-4909e6abe07c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5mv4w"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.626023 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/94fc639b-cd46-42fe-a597-4909e6abe07c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5mv4w\" (UID: \"94fc639b-cd46-42fe-a597-4909e6abe07c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5mv4w"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.626069 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/94fc639b-cd46-42fe-a597-4909e6abe07c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5mv4w\" (UID: \"94fc639b-cd46-42fe-a597-4909e6abe07c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5mv4w"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.626142 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/94fc639b-cd46-42fe-a597-4909e6abe07c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5mv4w\" (UID: \"94fc639b-cd46-42fe-a597-4909e6abe07c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5mv4w"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.626197 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/94fc639b-cd46-42fe-a597-4909e6abe07c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5mv4w\" (UID: \"94fc639b-cd46-42fe-a597-4909e6abe07c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5mv4w"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.626787 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/94fc639b-cd46-42fe-a597-4909e6abe07c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5mv4w\" (UID: \"94fc639b-cd46-42fe-a597-4909e6abe07c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5mv4w"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.626891 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/94fc639b-cd46-42fe-a597-4909e6abe07c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5mv4w\" (UID: \"94fc639b-cd46-42fe-a597-4909e6abe07c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5mv4w"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.628065 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/94fc639b-cd46-42fe-a597-4909e6abe07c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5mv4w\" (UID: \"94fc639b-cd46-42fe-a597-4909e6abe07c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5mv4w"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.639990 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94fc639b-cd46-42fe-a597-4909e6abe07c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5mv4w\" (UID: \"94fc639b-cd46-42fe-a597-4909e6abe07c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5mv4w"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.658065 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=21.658038769 podStartE2EDuration="21.658038769s" podCreationTimestamp="2026-02-17 16:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:00.611578 +0000 UTC m=+109.365666782" watchObservedRunningTime="2026-02-17 16:05:00.658038769 +0000 UTC m=+109.412127541"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.658468 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=86.658459191 podStartE2EDuration="1m26.658459191s" podCreationTimestamp="2026-02-17 16:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:00.652688956 +0000 UTC m=+109.406777698" watchObservedRunningTime="2026-02-17 16:05:00.658459191 +0000 UTC m=+109.412547963"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.661574 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/94fc639b-cd46-42fe-a597-4909e6abe07c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5mv4w\" (UID: \"94fc639b-cd46-42fe-a597-4909e6abe07c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5mv4w"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.682406 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=88.682383745 podStartE2EDuration="1m28.682383745s" podCreationTimestamp="2026-02-17 16:03:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:00.67871858 +0000 UTC m=+109.432807322" watchObservedRunningTime="2026-02-17 16:05:00.682383745 +0000 UTC m=+109.436472487"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.714400 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-vst6k" podStartSLOduration=84.71436899 podStartE2EDuration="1m24.71436899s" podCreationTimestamp="2026-02-17 16:03:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:00.713031122 +0000 UTC m=+109.467119874" watchObservedRunningTime="2026-02-17 16:05:00.71436899 +0000 UTC m=+109.468457752"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.732083 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=89.732065116 podStartE2EDuration="1m29.732065116s" podCreationTimestamp="2026-02-17 16:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:00.72867999 +0000 UTC m=+109.482768732" watchObservedRunningTime="2026-02-17 16:05:00.732065116 +0000 UTC m=+109.486153858"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.765294 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podStartSLOduration=84.765262346 podStartE2EDuration="1m24.765262346s" podCreationTimestamp="2026-02-17 16:03:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:00.764975418 +0000 UTC m=+109.519064150" watchObservedRunningTime="2026-02-17 16:05:00.765262346 +0000 UTC m=+109.519351128"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.784502 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-5jjr2" podStartSLOduration=84.784472236 podStartE2EDuration="1m24.784472236s" podCreationTimestamp="2026-02-17 16:03:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:00.783648472 +0000 UTC m=+109.537737214" watchObservedRunningTime="2026-02-17 16:05:00.784472236 +0000 UTC m=+109.538561008"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.832458 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5mv4w"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.885841 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-2g6fq" podStartSLOduration=84.885814175 podStartE2EDuration="1m24.885814175s" podCreationTimestamp="2026-02-17 16:03:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:00.87026734 +0000 UTC m=+109.624356112" watchObservedRunningTime="2026-02-17 16:05:00.885814175 +0000 UTC m=+109.639902937"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.899449 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=61.899425554 podStartE2EDuration="1m1.899425554s" podCreationTimestamp="2026-02-17 16:03:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:00.899222348 +0000 UTC m=+109.653311080" watchObservedRunningTime="2026-02-17 16:05:00.899425554 +0000 UTC m=+109.653514296"
Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.928933 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-n84l8" podStartSLOduration=84.928915808 podStartE2EDuration="1m24.928915808s" podCreationTimestamp="2026-02-17 16:03:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000
UTC" observedRunningTime="2026-02-17 16:05:00.926624602 +0000 UTC m=+109.680713374" watchObservedRunningTime="2026-02-17 16:05:00.928915808 +0000 UTC m=+109.683004550" Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.943838 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:05:00 crc kubenswrapper[4672]: E0217 16:05:00.944009 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.968764 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 08:47:27.416064486 +0000 UTC Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.968812 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 17 16:05:00 crc kubenswrapper[4672]: I0217 16:05:00.975951 4672 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 17 16:05:01 crc kubenswrapper[4672]: I0217 16:05:01.743374 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5mv4w" event={"ID":"94fc639b-cd46-42fe-a597-4909e6abe07c","Type":"ContainerStarted","Data":"6c9616ee4e153e44d508eeeb7edb93f3aa1e9cbfeaa4247fd449a9ecf1e5c93b"} Feb 17 16:05:01 crc kubenswrapper[4672]: I0217 16:05:01.743782 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5mv4w" 
event={"ID":"94fc639b-cd46-42fe-a597-4909e6abe07c","Type":"ContainerStarted","Data":"163ffb1830d5a839d7a2f52fd100d5a5d9e270cc3e51e8f9ecddade7e659b20d"} Feb 17 16:05:01 crc kubenswrapper[4672]: I0217 16:05:01.944545 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:05:01 crc kubenswrapper[4672]: I0217 16:05:01.944587 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:05:01 crc kubenswrapper[4672]: I0217 16:05:01.944814 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:05:01 crc kubenswrapper[4672]: E0217 16:05:01.946550 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d" Feb 17 16:05:01 crc kubenswrapper[4672]: E0217 16:05:01.946630 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:05:01 crc kubenswrapper[4672]: E0217 16:05:01.946697 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:05:02 crc kubenswrapper[4672]: I0217 16:05:02.944571 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:05:02 crc kubenswrapper[4672]: E0217 16:05:02.944825 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:05:03 crc kubenswrapper[4672]: I0217 16:05:03.944134 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:05:03 crc kubenswrapper[4672]: I0217 16:05:03.944259 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:05:03 crc kubenswrapper[4672]: E0217 16:05:03.944316 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:05:03 crc kubenswrapper[4672]: I0217 16:05:03.944482 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:05:03 crc kubenswrapper[4672]: E0217 16:05:03.944574 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d" Feb 17 16:05:03 crc kubenswrapper[4672]: E0217 16:05:03.944725 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:05:04 crc kubenswrapper[4672]: I0217 16:05:04.944762 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:05:04 crc kubenswrapper[4672]: E0217 16:05:04.945184 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:05:05 crc kubenswrapper[4672]: I0217 16:05:05.944943 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:05:05 crc kubenswrapper[4672]: I0217 16:05:05.944998 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:05:05 crc kubenswrapper[4672]: I0217 16:05:05.945014 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:05:05 crc kubenswrapper[4672]: E0217 16:05:05.945192 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:05:05 crc kubenswrapper[4672]: E0217 16:05:05.945253 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d" Feb 17 16:05:05 crc kubenswrapper[4672]: E0217 16:05:05.945384 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:05:06 crc kubenswrapper[4672]: I0217 16:05:06.944698 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:05:06 crc kubenswrapper[4672]: E0217 16:05:06.945225 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:05:07 crc kubenswrapper[4672]: I0217 16:05:07.944582 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:05:07 crc kubenswrapper[4672]: I0217 16:05:07.944629 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:05:07 crc kubenswrapper[4672]: E0217 16:05:07.944816 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d" Feb 17 16:05:07 crc kubenswrapper[4672]: I0217 16:05:07.944865 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:05:07 crc kubenswrapper[4672]: E0217 16:05:07.945004 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:05:07 crc kubenswrapper[4672]: E0217 16:05:07.945204 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:05:08 crc kubenswrapper[4672]: I0217 16:05:08.944580 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:05:08 crc kubenswrapper[4672]: E0217 16:05:08.944770 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:05:09 crc kubenswrapper[4672]: I0217 16:05:09.943921 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:05:09 crc kubenswrapper[4672]: I0217 16:05:09.943961 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:05:09 crc kubenswrapper[4672]: E0217 16:05:09.944138 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d" Feb 17 16:05:09 crc kubenswrapper[4672]: I0217 16:05:09.944192 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:05:09 crc kubenswrapper[4672]: E0217 16:05:09.944352 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:05:09 crc kubenswrapper[4672]: E0217 16:05:09.944557 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:05:10 crc kubenswrapper[4672]: I0217 16:05:10.944748 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:05:10 crc kubenswrapper[4672]: E0217 16:05:10.944882 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:05:10 crc kubenswrapper[4672]: I0217 16:05:10.945588 4672 scope.go:117] "RemoveContainer" containerID="432ab3a5ae33d1f4de114a70bbc405e9c0346cbd9c935aeac9e44d0586f569d1" Feb 17 16:05:10 crc kubenswrapper[4672]: E0217 16:05:10.945766 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-4f9wc_openshift-ovn-kubernetes(98a910a1-b5f0-4f34-9d76-6474c753e8e7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" Feb 17 16:05:11 crc kubenswrapper[4672]: I0217 16:05:11.944328 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:05:11 crc kubenswrapper[4672]: I0217 16:05:11.944363 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:05:11 crc kubenswrapper[4672]: E0217 16:05:11.946768 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d" Feb 17 16:05:11 crc kubenswrapper[4672]: I0217 16:05:11.946791 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:05:11 crc kubenswrapper[4672]: E0217 16:05:11.946880 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:05:11 crc kubenswrapper[4672]: E0217 16:05:11.946968 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:05:11 crc kubenswrapper[4672]: E0217 16:05:11.969565 4672 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 17 16:05:12 crc kubenswrapper[4672]: E0217 16:05:12.105501 4672 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 17 16:05:12 crc kubenswrapper[4672]: I0217 16:05:12.784061 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5jjr2_edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe/kube-multus/1.log" Feb 17 16:05:12 crc kubenswrapper[4672]: I0217 16:05:12.784817 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5jjr2_edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe/kube-multus/0.log" Feb 17 16:05:12 crc kubenswrapper[4672]: I0217 16:05:12.784902 4672 generic.go:334] "Generic (PLEG): container finished" podID="edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe" containerID="f7f95d42a206c5e9b8e4b546034635db87f5912e543fea24cccde60817511eaa" exitCode=1 Feb 17 16:05:12 crc kubenswrapper[4672]: I0217 16:05:12.784954 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5jjr2" event={"ID":"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe","Type":"ContainerDied","Data":"f7f95d42a206c5e9b8e4b546034635db87f5912e543fea24cccde60817511eaa"} Feb 17 16:05:12 crc kubenswrapper[4672]: I0217 16:05:12.785015 4672 scope.go:117] "RemoveContainer" containerID="0c5985f47fa75e948d85d4404b8a2df3ab6b1f73d7b074553dbf4e3894cad73c" Feb 17 16:05:12 crc kubenswrapper[4672]: I0217 16:05:12.785664 4672 scope.go:117] "RemoveContainer" containerID="f7f95d42a206c5e9b8e4b546034635db87f5912e543fea24cccde60817511eaa" Feb 17 16:05:12 crc kubenswrapper[4672]: E0217 16:05:12.785921 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-5jjr2_openshift-multus(edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe)\"" pod="openshift-multus/multus-5jjr2" podUID="edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe" Feb 17 16:05:12 crc kubenswrapper[4672]: I0217 16:05:12.807715 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5mv4w" podStartSLOduration=96.807696975 
podStartE2EDuration="1m36.807696975s" podCreationTimestamp="2026-02-17 16:03:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:01.762266406 +0000 UTC m=+110.516355168" watchObservedRunningTime="2026-02-17 16:05:12.807696975 +0000 UTC m=+121.561785717" Feb 17 16:05:12 crc kubenswrapper[4672]: I0217 16:05:12.944479 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:05:12 crc kubenswrapper[4672]: E0217 16:05:12.944889 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:05:13 crc kubenswrapper[4672]: I0217 16:05:13.792083 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5jjr2_edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe/kube-multus/1.log" Feb 17 16:05:13 crc kubenswrapper[4672]: I0217 16:05:13.944154 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:05:13 crc kubenswrapper[4672]: E0217 16:05:13.944307 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:05:13 crc kubenswrapper[4672]: I0217 16:05:13.944163 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:05:13 crc kubenswrapper[4672]: I0217 16:05:13.944196 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:05:13 crc kubenswrapper[4672]: E0217 16:05:13.944620 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:05:13 crc kubenswrapper[4672]: E0217 16:05:13.944736 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d" Feb 17 16:05:14 crc kubenswrapper[4672]: I0217 16:05:14.944294 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:05:14 crc kubenswrapper[4672]: E0217 16:05:14.944483 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:05:15 crc kubenswrapper[4672]: I0217 16:05:15.944158 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:05:15 crc kubenswrapper[4672]: I0217 16:05:15.944193 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:05:15 crc kubenswrapper[4672]: E0217 16:05:15.944343 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:05:15 crc kubenswrapper[4672]: I0217 16:05:15.944384 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:05:15 crc kubenswrapper[4672]: E0217 16:05:15.944498 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:05:15 crc kubenswrapper[4672]: E0217 16:05:15.944625 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d" Feb 17 16:05:16 crc kubenswrapper[4672]: I0217 16:05:16.944179 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:05:16 crc kubenswrapper[4672]: E0217 16:05:16.944360 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:05:17 crc kubenswrapper[4672]: E0217 16:05:17.107339 4672 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:05:17 crc kubenswrapper[4672]: I0217 16:05:17.944489 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:05:17 crc kubenswrapper[4672]: I0217 16:05:17.944632 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:05:17 crc kubenswrapper[4672]: I0217 16:05:17.944737 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:05:17 crc kubenswrapper[4672]: E0217 16:05:17.945195 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:05:17 crc kubenswrapper[4672]: E0217 16:05:17.945424 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:05:17 crc kubenswrapper[4672]: E0217 16:05:17.945075 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d" Feb 17 16:05:18 crc kubenswrapper[4672]: I0217 16:05:18.944627 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:05:18 crc kubenswrapper[4672]: E0217 16:05:18.944750 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:05:19 crc kubenswrapper[4672]: I0217 16:05:19.944699 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:05:19 crc kubenswrapper[4672]: I0217 16:05:19.944830 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:05:19 crc kubenswrapper[4672]: I0217 16:05:19.944709 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:05:19 crc kubenswrapper[4672]: E0217 16:05:19.944913 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d" Feb 17 16:05:19 crc kubenswrapper[4672]: E0217 16:05:19.945087 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:05:19 crc kubenswrapper[4672]: E0217 16:05:19.945273 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:05:20 crc kubenswrapper[4672]: I0217 16:05:20.944734 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:05:20 crc kubenswrapper[4672]: E0217 16:05:20.945177 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:05:21 crc kubenswrapper[4672]: I0217 16:05:21.944443 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:05:21 crc kubenswrapper[4672]: I0217 16:05:21.944553 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:05:21 crc kubenswrapper[4672]: E0217 16:05:21.945826 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:05:21 crc kubenswrapper[4672]: I0217 16:05:21.945933 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:05:21 crc kubenswrapper[4672]: E0217 16:05:21.945977 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:05:21 crc kubenswrapper[4672]: E0217 16:05:21.946109 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d" Feb 17 16:05:21 crc kubenswrapper[4672]: I0217 16:05:21.947118 4672 scope.go:117] "RemoveContainer" containerID="432ab3a5ae33d1f4de114a70bbc405e9c0346cbd9c935aeac9e44d0586f569d1" Feb 17 16:05:22 crc kubenswrapper[4672]: E0217 16:05:22.109561 4672 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:05:22 crc kubenswrapper[4672]: I0217 16:05:22.832605 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4f9wc_98a910a1-b5f0-4f34-9d76-6474c753e8e7/ovnkube-controller/3.log" Feb 17 16:05:22 crc kubenswrapper[4672]: I0217 16:05:22.835738 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" event={"ID":"98a910a1-b5f0-4f34-9d76-6474c753e8e7","Type":"ContainerStarted","Data":"01204ff3ef7dac68664104a29bdd8064f75c4fe495d66b88961534a94f68e9ae"} Feb 17 16:05:22 crc kubenswrapper[4672]: I0217 16:05:22.836474 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:05:22 crc kubenswrapper[4672]: I0217 16:05:22.879682 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" podStartSLOduration=105.879665757 podStartE2EDuration="1m45.879665757s" podCreationTimestamp="2026-02-17 16:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:22.878166194 +0000 UTC m=+131.632254956" watchObservedRunningTime="2026-02-17 16:05:22.879665757 +0000 UTC m=+131.633754489" Feb 17 16:05:22 crc kubenswrapper[4672]: I0217 16:05:22.916825 4672 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hqdz9"] Feb 17 16:05:22 crc kubenswrapper[4672]: I0217 16:05:22.916993 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:05:22 crc kubenswrapper[4672]: E0217 16:05:22.917126 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d" Feb 17 16:05:22 crc kubenswrapper[4672]: I0217 16:05:22.944879 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:05:22 crc kubenswrapper[4672]: E0217 16:05:22.945066 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:05:23 crc kubenswrapper[4672]: I0217 16:05:23.944726 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:05:23 crc kubenswrapper[4672]: I0217 16:05:23.945212 4672 scope.go:117] "RemoveContainer" containerID="f7f95d42a206c5e9b8e4b546034635db87f5912e543fea24cccde60817511eaa" Feb 17 16:05:23 crc kubenswrapper[4672]: I0217 16:05:23.944806 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:05:23 crc kubenswrapper[4672]: E0217 16:05:23.945387 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:05:23 crc kubenswrapper[4672]: E0217 16:05:23.945507 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:05:24 crc kubenswrapper[4672]: I0217 16:05:24.846429 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5jjr2_edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe/kube-multus/1.log" Feb 17 16:05:24 crc kubenswrapper[4672]: I0217 16:05:24.846549 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5jjr2" event={"ID":"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe","Type":"ContainerStarted","Data":"397bf27fea3d27b5db56ccb8cc9ebd9e8401dd883e3c22d9d2e8f76a4f63c577"} Feb 17 16:05:24 crc kubenswrapper[4672]: I0217 16:05:24.944835 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:05:24 crc kubenswrapper[4672]: E0217 16:05:24.944983 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d" Feb 17 16:05:24 crc kubenswrapper[4672]: I0217 16:05:24.944835 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:05:24 crc kubenswrapper[4672]: E0217 16:05:24.945129 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:05:25 crc kubenswrapper[4672]: I0217 16:05:25.944103 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:05:25 crc kubenswrapper[4672]: I0217 16:05:25.944173 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:05:25 crc kubenswrapper[4672]: E0217 16:05:25.944342 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 16:05:25 crc kubenswrapper[4672]: E0217 16:05:25.944462 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 16:05:26 crc kubenswrapper[4672]: I0217 16:05:26.944408 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:05:26 crc kubenswrapper[4672]: I0217 16:05:26.944481 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:05:26 crc kubenswrapper[4672]: E0217 16:05:26.944950 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 16:05:26 crc kubenswrapper[4672]: E0217 16:05:26.945113 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hqdz9" podUID="712be02c-2ccc-4989-aecb-653745bacb0d" Feb 17 16:05:27 crc kubenswrapper[4672]: I0217 16:05:27.944579 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:05:27 crc kubenswrapper[4672]: I0217 16:05:27.944601 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:05:27 crc kubenswrapper[4672]: I0217 16:05:27.948025 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 17 16:05:27 crc kubenswrapper[4672]: I0217 16:05:27.948745 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 17 16:05:27 crc kubenswrapper[4672]: I0217 16:05:27.948795 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 17 16:05:27 crc kubenswrapper[4672]: I0217 16:05:27.949134 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 17 16:05:28 crc kubenswrapper[4672]: I0217 16:05:28.945898 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:05:28 crc kubenswrapper[4672]: I0217 16:05:28.945966 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:05:28 crc kubenswrapper[4672]: I0217 16:05:28.949279 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 17 16:05:28 crc kubenswrapper[4672]: I0217 16:05:28.949832 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.895178 4672 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.940377 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-kfzvb"] Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.941373 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kfzvb" Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.945691 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.955877 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-cdjsz"] Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.956957 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-cdjsz" Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.960181 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.960957 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.961118 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.961319 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.961454 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.961697 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.961792 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.964596 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7p722"] Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.965275 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.966315 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-7p722" Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.973494 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-bn2m6"] Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.974491 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bn2m6" Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.979750 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h6hxj"] Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.980290 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h6hxj" Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.981378 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.982464 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fds9q"] Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.984025 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fds9q" Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.991119 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.991487 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.991590 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.991783 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.991841 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.991984 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.992052 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.992346 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.992638 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.992974 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 16:05:30 crc 
kubenswrapper[4672]: I0217 16:05:30.993258 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.993456 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.993701 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.994153 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.994636 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.994834 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.991792 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.996318 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.996554 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.996798 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.997008 4672 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.997213 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 17 16:05:30 crc kubenswrapper[4672]: I0217 16:05:30.997409 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwl87"] Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:30.997991 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-wtm9c"] Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.004263 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6grzz"] Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.004992 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6grzz" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:30.997420 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.005994 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.006252 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwl87" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:30.997473 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.008111 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-lzwwl"] Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:30.997551 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:30.997615 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.008868 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lxmvh"] Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.009417 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-b62wz"] Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.009775 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-wtm9c" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.010037 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-b62wz" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.010819 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-lzwwl"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:30.997663 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.011415 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lxmvh"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:30.997706 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:30.997747 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:30.997791 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:30.998144 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.000324 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.031056 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.031291 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zfmgn"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.032268 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-d9vk6"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.036539 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.039115 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.039944 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-vndpv"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.054293 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.054421 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.054501 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.054619 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.056033 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.056177 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.056307 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.056361 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.056611 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.056732 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.056845 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.057711 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zfmgn"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.057752 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.057931 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.058208 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.058344 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8cvrf"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.057711 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-d9vk6"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.058641 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.058747 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8cvrf"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.059075 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4fc91364-276e-4cc3-bf44-3e5dad5ad06e-audit-dir\") pod \"apiserver-7bbb656c7d-kfzvb\" (UID: \"4fc91364-276e-4cc3-bf44-3e5dad5ad06e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kfzvb"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.059121 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4fc91364-276e-4cc3-bf44-3e5dad5ad06e-etcd-client\") pod \"apiserver-7bbb656c7d-kfzvb\" (UID: \"4fc91364-276e-4cc3-bf44-3e5dad5ad06e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kfzvb"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.059143 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fc91364-276e-4cc3-bf44-3e5dad5ad06e-serving-cert\") pod \"apiserver-7bbb656c7d-kfzvb\" (UID: \"4fc91364-276e-4cc3-bf44-3e5dad5ad06e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kfzvb"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.059167 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwdf7\" (UniqueName: \"kubernetes.io/projected/4fc91364-276e-4cc3-bf44-3e5dad5ad06e-kube-api-access-wwdf7\") pod \"apiserver-7bbb656c7d-kfzvb\" (UID: \"4fc91364-276e-4cc3-bf44-3e5dad5ad06e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kfzvb"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.059196 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4fc91364-276e-4cc3-bf44-3e5dad5ad06e-encryption-config\") pod \"apiserver-7bbb656c7d-kfzvb\" (UID: \"4fc91364-276e-4cc3-bf44-3e5dad5ad06e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kfzvb"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.059217 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.059225 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4fc91364-276e-4cc3-bf44-3e5dad5ad06e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-kfzvb\" (UID: \"4fc91364-276e-4cc3-bf44-3e5dad5ad06e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kfzvb"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.059247 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fc91364-276e-4cc3-bf44-3e5dad5ad06e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-kfzvb\" (UID: \"4fc91364-276e-4cc3-bf44-3e5dad5ad06e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kfzvb"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.059265 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4fc91364-276e-4cc3-bf44-3e5dad5ad06e-audit-policies\") pod \"apiserver-7bbb656c7d-kfzvb\" (UID: \"4fc91364-276e-4cc3-bf44-3e5dad5ad06e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kfzvb"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.059085 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-vndpv"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.059468 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.059679 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.060166 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.061202 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.061603 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.061724 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.061807 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.061836 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.061990 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.062073 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.062148 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.066171 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.066291 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.066358 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.066420 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.066484 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.066568 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.066634 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.066720 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.066790 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.066861 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.070988 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.071224 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.071437 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.071579 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.071688 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.072049 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.072253 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.072395 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.072554 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.072675 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.075550 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-px67v"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.075993 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gcffc"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.076222 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lnsj7"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.076550 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.076598 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-px67v"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.076789 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-gcffc"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.077740 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-v9lrm"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.078443 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-v9lrm"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.078488 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-q54wv"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.079146 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q54wv"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.079448 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.081667 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.081836 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.081931 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.082102 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-76xtz"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.082545 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pxnrp"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.082863 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pxnrp"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.083091 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-76xtz"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.086131 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-hgfxd"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.086847 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dwz6v"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.087190 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dwz6v"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.087400 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hgfxd"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.087895 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6ljr8"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.088577 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6ljr8"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.096253 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bbvwk"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.096681 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bbvwk"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.097312 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-mqrdm"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.111663 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.111895 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.112135 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.112489 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.113527 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.113877 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8ggmr"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.114687 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.114953 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.115027 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8ggmr"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.115246 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-mqrdm"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.115729 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.116316 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8wlwl"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.116846 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8wlwl"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.116970 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.118432 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bkxss"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.118786 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bk22j"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.118853 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bkxss"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.119226 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g6m8r"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.119611 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g6m8r"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.119747 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bk22j"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.120102 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-76rxw"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.120443 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-76rxw"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.120971 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-ltvxx"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.121613 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-ltvxx"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.122056 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522400-b79hz"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.122525 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-b79hz"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.123820 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.123857 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f7zb5"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.124488 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-n4nj9"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.124568 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f7zb5"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.125222 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-n4nj9"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.125260 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7p722"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.126203 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h6hxj"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.127133 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-kfzvb"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.128235 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-wtm9c"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.129236 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fds9q"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.129967 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-cdjsz"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.131214 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-7prtt"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.131827 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-7prtt"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.131872 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-b62wz"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.133199 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gcffc"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.134106 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8cvrf"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.135098 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-d9vk6"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.136617 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lnsj7"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.137877 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwl87"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.139593 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-hgfxd"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.140453 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-76xtz"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.141661 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6grzz"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.143262 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pxnrp"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.143591 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.144076 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-mqrdm"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.144846 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lxmvh"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.146020 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-lzwwl"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.147222 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-76rxw"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.148334 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zfmgn"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.149247 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dwz6v"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.150197 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bbvwk"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.151241 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-px67v"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.153598 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6ljr8"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.154704 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g6m8r"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.155974 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8wlwl"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.157576 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-q54wv"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.162354 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxnk2\" (UniqueName: \"kubernetes.io/projected/6119a50b-94a4-4095-b14c-f009fe646312-kube-api-access-mxnk2\") pod \"machine-api-operator-5694c8668f-cdjsz\" (UID: \"6119a50b-94a4-4095-b14c-f009fe646312\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cdjsz"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.162408 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4206b74-7012-47af-9344-253aa7453e86-config\") pod \"controller-manager-879f6c89f-7p722\" (UID: \"a4206b74-7012-47af-9344-253aa7453e86\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7p722"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.162437 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba-node-pullsecrets\") pod \"apiserver-76f77b778f-lzwwl\" (UID: \"eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba\") " pod="openshift-apiserver/apiserver-76f77b778f-lzwwl"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.162483 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba-trusted-ca-bundle\") pod \"apiserver-76f77b778f-lzwwl\" (UID: \"eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba\") " pod="openshift-apiserver/apiserver-76f77b778f-lzwwl"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.162534 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bk22j"]
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.162566 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4fc91364-276e-4cc3-bf44-3e5dad5ad06e-audit-policies\") pod \"apiserver-7bbb656c7d-kfzvb\" (UID: \"4fc91364-276e-4cc3-bf44-3e5dad5ad06e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kfzvb"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.162597 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aae518f7-37fe-4bd1-9c5b-ba5186684ebd-config\") pod \"authentication-operator-69f744f599-wtm9c\" (UID: \"aae518f7-37fe-4bd1-9c5b-ba5186684ebd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wtm9c"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.162619 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume
started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba-audit\") pod \"apiserver-76f77b778f-lzwwl\" (UID: \"eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba\") " pod="openshift-apiserver/apiserver-76f77b778f-lzwwl" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.162652 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/59e82a1f-2c6a-4938-9696-ffe2eac280ce-audit-dir\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.162672 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06bbdf78-c2ef-42f1-8f0d-952f07a4b678-config\") pod \"machine-approver-56656f9798-bn2m6\" (UID: \"06bbdf78-c2ef-42f1-8f0d-952f07a4b678\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bn2m6" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.162697 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks7zk\" (UniqueName: \"kubernetes.io/projected/750ef8f5-44ad-4016-8894-0b2a05430464-kube-api-access-ks7zk\") pod \"console-f9d7485db-d9vk6\" (UID: \"750ef8f5-44ad-4016-8894-0b2a05430464\") " pod="openshift-console/console-f9d7485db-d9vk6" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.162732 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba-config\") pod \"apiserver-76f77b778f-lzwwl\" (UID: \"eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba\") " pod="openshift-apiserver/apiserver-76f77b778f-lzwwl" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.162763 4672 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.162789 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/750ef8f5-44ad-4016-8894-0b2a05430464-console-config\") pod \"console-f9d7485db-d9vk6\" (UID: \"750ef8f5-44ad-4016-8894-0b2a05430464\") " pod="openshift-console/console-f9d7485db-d9vk6" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.162816 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d98488b-d521-4207-a7b8-23b37cb1ef98-service-ca-bundle\") pod \"router-default-5444994796-vndpv\" (UID: \"1d98488b-d521-4207-a7b8-23b37cb1ef98\") " pod="openshift-ingress/router-default-5444994796-vndpv" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.162899 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4fc91364-276e-4cc3-bf44-3e5dad5ad06e-audit-dir\") pod \"apiserver-7bbb656c7d-kfzvb\" (UID: \"4fc91364-276e-4cc3-bf44-3e5dad5ad06e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kfzvb" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.162958 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d45s\" (UniqueName: \"kubernetes.io/projected/73969925-7fe2-4e3a-9ede-d1bd990f7f71-kube-api-access-9d45s\") pod \"ingress-operator-5b745b69d9-8cvrf\" (UID: 
\"73969925-7fe2-4e3a-9ede-d1bd990f7f71\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8cvrf" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.162979 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/506b8374-2f07-427a-bf3b-44b1f6f022b5-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zfmgn\" (UID: \"506b8374-2f07-427a-bf3b-44b1f6f022b5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zfmgn" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.162980 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4fc91364-276e-4cc3-bf44-3e5dad5ad06e-audit-dir\") pod \"apiserver-7bbb656c7d-kfzvb\" (UID: \"4fc91364-276e-4cc3-bf44-3e5dad5ad06e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kfzvb" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.163002 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/59e82a1f-2c6a-4938-9696-ffe2eac280ce-audit-policies\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.163051 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba-serving-cert\") pod \"apiserver-76f77b778f-lzwwl\" (UID: \"eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba\") " pod="openshift-apiserver/apiserver-76f77b778f-lzwwl" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.163084 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/73969925-7fe2-4e3a-9ede-d1bd990f7f71-trusted-ca\") pod \"ingress-operator-5b745b69d9-8cvrf\" (UID: \"73969925-7fe2-4e3a-9ede-d1bd990f7f71\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8cvrf" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.163103 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5948d11-a6da-4f21-a6e8-413a28791775-serving-cert\") pod \"route-controller-manager-6576b87f9c-vwl87\" (UID: \"b5948d11-a6da-4f21-a6e8-413a28791775\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwl87" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.163130 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fc91364-276e-4cc3-bf44-3e5dad5ad06e-serving-cert\") pod \"apiserver-7bbb656c7d-kfzvb\" (UID: \"4fc91364-276e-4cc3-bf44-3e5dad5ad06e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kfzvb" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.163151 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4206b74-7012-47af-9344-253aa7453e86-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-7p722\" (UID: \"a4206b74-7012-47af-9344-253aa7453e86\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7p722" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.163175 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q" Feb 17 
16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.163197 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4d95f63b-d4f4-4da3-a741-c69b49b9233c-metrics-tls\") pod \"dns-operator-744455d44c-b62wz\" (UID: \"4d95f63b-d4f4-4da3-a741-c69b49b9233c\") " pod="openshift-dns-operator/dns-operator-744455d44c-b62wz" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.163216 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aae518f7-37fe-4bd1-9c5b-ba5186684ebd-serving-cert\") pod \"authentication-operator-69f744f599-wtm9c\" (UID: \"aae518f7-37fe-4bd1-9c5b-ba5186684ebd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wtm9c" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.163236 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.163255 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw54j\" (UniqueName: \"kubernetes.io/projected/655735f2-25f3-4cf3-8b40-a35184576e33-kube-api-access-xw54j\") pod \"openshift-apiserver-operator-796bbdcf4f-h6hxj\" (UID: \"655735f2-25f3-4cf3-8b40-a35184576e33\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h6hxj" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.163275 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btntj\" (UniqueName: 
\"kubernetes.io/projected/b5948d11-a6da-4f21-a6e8-413a28791775-kube-api-access-btntj\") pod \"route-controller-manager-6576b87f9c-vwl87\" (UID: \"b5948d11-a6da-4f21-a6e8-413a28791775\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwl87" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.163290 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/750ef8f5-44ad-4016-8894-0b2a05430464-service-ca\") pod \"console-f9d7485db-d9vk6\" (UID: \"750ef8f5-44ad-4016-8894-0b2a05430464\") " pod="openshift-console/console-f9d7485db-d9vk6" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.163307 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/750ef8f5-44ad-4016-8894-0b2a05430464-oauth-serving-cert\") pod \"console-f9d7485db-d9vk6\" (UID: \"750ef8f5-44ad-4016-8894-0b2a05430464\") " pod="openshift-console/console-f9d7485db-d9vk6" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.163326 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.163345 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73969925-7fe2-4e3a-9ede-d1bd990f7f71-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8cvrf\" (UID: \"73969925-7fe2-4e3a-9ede-d1bd990f7f71\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8cvrf" Feb 17 16:05:31 crc 
kubenswrapper[4672]: I0217 16:05:31.163376 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d98488b-d521-4207-a7b8-23b37cb1ef98-metrics-certs\") pod \"router-default-5444994796-vndpv\" (UID: \"1d98488b-d521-4207-a7b8-23b37cb1ef98\") " pod="openshift-ingress/router-default-5444994796-vndpv" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.163408 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6119a50b-94a4-4095-b14c-f009fe646312-images\") pod \"machine-api-operator-5694c8668f-cdjsz\" (UID: \"6119a50b-94a4-4095-b14c-f009fe646312\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cdjsz" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.163427 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.163446 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5e8f5285-7002-4472-be6a-a21731ccaf67-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-lxmvh\" (UID: \"5e8f5285-7002-4472-be6a-a21731ccaf67\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lxmvh" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.163464 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/43a5e5af-ba41-4a32-9893-1c17a54e7024-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6grzz\" (UID: \"43a5e5af-ba41-4a32-9893-1c17a54e7024\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6grzz" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.163490 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6119a50b-94a4-4095-b14c-f009fe646312-config\") pod \"machine-api-operator-5694c8668f-cdjsz\" (UID: \"6119a50b-94a4-4095-b14c-f009fe646312\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cdjsz" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.163525 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba-image-import-ca\") pod \"apiserver-76f77b778f-lzwwl\" (UID: \"eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba\") " pod="openshift-apiserver/apiserver-76f77b778f-lzwwl" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.163542 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/06bbdf78-c2ef-42f1-8f0d-952f07a4b678-machine-approver-tls\") pod \"machine-approver-56656f9798-bn2m6\" (UID: \"06bbdf78-c2ef-42f1-8f0d-952f07a4b678\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bn2m6" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.163557 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/1d98488b-d521-4207-a7b8-23b37cb1ef98-default-certificate\") pod \"router-default-5444994796-vndpv\" (UID: \"1d98488b-d521-4207-a7b8-23b37cb1ef98\") " pod="openshift-ingress/router-default-5444994796-vndpv" Feb 17 16:05:31 
crc kubenswrapper[4672]: I0217 16:05:31.163578 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/655735f2-25f3-4cf3-8b40-a35184576e33-config\") pod \"openshift-apiserver-operator-796bbdcf4f-h6hxj\" (UID: \"655735f2-25f3-4cf3-8b40-a35184576e33\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h6hxj" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.163597 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzg4b\" (UniqueName: \"kubernetes.io/projected/4d95f63b-d4f4-4da3-a741-c69b49b9233c-kube-api-access-dzg4b\") pod \"dns-operator-744455d44c-b62wz\" (UID: \"4d95f63b-d4f4-4da3-a741-c69b49b9233c\") " pod="openshift-dns-operator/dns-operator-744455d44c-b62wz" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.163621 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/506b8374-2f07-427a-bf3b-44b1f6f022b5-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zfmgn\" (UID: \"506b8374-2f07-427a-bf3b-44b1f6f022b5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zfmgn" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.163643 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtvnk\" (UniqueName: \"kubernetes.io/projected/59e82a1f-2c6a-4938-9696-ffe2eac280ce-kube-api-access-wtvnk\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.163664 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/4fc91364-276e-4cc3-bf44-3e5dad5ad06e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-kfzvb\" (UID: \"4fc91364-276e-4cc3-bf44-3e5dad5ad06e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kfzvb" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.163374 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4fc91364-276e-4cc3-bf44-3e5dad5ad06e-audit-policies\") pod \"apiserver-7bbb656c7d-kfzvb\" (UID: \"4fc91364-276e-4cc3-bf44-3e5dad5ad06e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kfzvb" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.163719 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.163704 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fc91364-276e-4cc3-bf44-3e5dad5ad06e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-kfzvb\" (UID: \"4fc91364-276e-4cc3-bf44-3e5dad5ad06e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kfzvb" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.163980 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfhg6\" (UniqueName: \"kubernetes.io/projected/506b8374-2f07-427a-bf3b-44b1f6f022b5-kube-api-access-rfhg6\") pod \"openshift-controller-manager-operator-756b6f6bc6-zfmgn\" (UID: \"506b8374-2f07-427a-bf3b-44b1f6f022b5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zfmgn" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.164004 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.164037 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aae518f7-37fe-4bd1-9c5b-ba5186684ebd-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-wtm9c\" (UID: \"aae518f7-37fe-4bd1-9c5b-ba5186684ebd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wtm9c" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.164227 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcrk9\" (UniqueName: \"kubernetes.io/projected/aae518f7-37fe-4bd1-9c5b-ba5186684ebd-kube-api-access-rcrk9\") pod \"authentication-operator-69f744f599-wtm9c\" (UID: \"aae518f7-37fe-4bd1-9c5b-ba5186684ebd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wtm9c" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.164285 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.164313 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba-audit-dir\") pod \"apiserver-76f77b778f-lzwwl\" (UID: \"eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba\") 
" pod="openshift-apiserver/apiserver-76f77b778f-lzwwl" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.164345 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqsxt\" (UniqueName: \"kubernetes.io/projected/eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba-kube-api-access-kqsxt\") pod \"apiserver-76f77b778f-lzwwl\" (UID: \"eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba\") " pod="openshift-apiserver/apiserver-76f77b778f-lzwwl" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.164364 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/750ef8f5-44ad-4016-8894-0b2a05430464-console-serving-cert\") pod \"console-f9d7485db-d9vk6\" (UID: \"750ef8f5-44ad-4016-8894-0b2a05430464\") " pod="openshift-console/console-f9d7485db-d9vk6" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.164472 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba-etcd-serving-ca\") pod \"apiserver-76f77b778f-lzwwl\" (UID: \"eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba\") " pod="openshift-apiserver/apiserver-76f77b778f-lzwwl" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.164559 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4206b74-7012-47af-9344-253aa7453e86-serving-cert\") pod \"controller-manager-879f6c89f-7p722\" (UID: \"a4206b74-7012-47af-9344-253aa7453e86\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7p722" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.164597 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/06bbdf78-c2ef-42f1-8f0d-952f07a4b678-auth-proxy-config\") pod \"machine-approver-56656f9798-bn2m6\" (UID: \"06bbdf78-c2ef-42f1-8f0d-952f07a4b678\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bn2m6" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.164648 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktcrs\" (UniqueName: \"kubernetes.io/projected/a4206b74-7012-47af-9344-253aa7453e86-kube-api-access-ktcrs\") pod \"controller-manager-879f6c89f-7p722\" (UID: \"a4206b74-7012-47af-9344-253aa7453e86\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7p722" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.164680 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba-encryption-config\") pod \"apiserver-76f77b778f-lzwwl\" (UID: \"eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba\") " pod="openshift-apiserver/apiserver-76f77b778f-lzwwl" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.164707 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5948d11-a6da-4f21-a6e8-413a28791775-config\") pod \"route-controller-manager-6576b87f9c-vwl87\" (UID: \"b5948d11-a6da-4f21-a6e8-413a28791775\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwl87" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.164734 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/1d98488b-d521-4207-a7b8-23b37cb1ef98-stats-auth\") pod \"router-default-5444994796-vndpv\" (UID: \"1d98488b-d521-4207-a7b8-23b37cb1ef98\") " pod="openshift-ingress/router-default-5444994796-vndpv" Feb 17 16:05:31 crc 
kubenswrapper[4672]: I0217 16:05:31.164758 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6mh9\" (UniqueName: \"kubernetes.io/projected/06bbdf78-c2ef-42f1-8f0d-952f07a4b678-kube-api-access-g6mh9\") pod \"machine-approver-56656f9798-bn2m6\" (UID: \"06bbdf78-c2ef-42f1-8f0d-952f07a4b678\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bn2m6" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.164809 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvj6w\" (UniqueName: \"kubernetes.io/projected/43a5e5af-ba41-4a32-9893-1c17a54e7024-kube-api-access-rvj6w\") pod \"openshift-config-operator-7777fb866f-6grzz\" (UID: \"43a5e5af-ba41-4a32-9893-1c17a54e7024\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6grzz" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.164838 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.164867 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.164900 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/43a5e5af-ba41-4a32-9893-1c17a54e7024-serving-cert\") pod \"openshift-config-operator-7777fb866f-6grzz\" (UID: \"43a5e5af-ba41-4a32-9893-1c17a54e7024\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6grzz" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.164936 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4206b74-7012-47af-9344-253aa7453e86-client-ca\") pod \"controller-manager-879f6c89f-7p722\" (UID: \"a4206b74-7012-47af-9344-253aa7453e86\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7p722" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.164988 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4fc91364-276e-4cc3-bf44-3e5dad5ad06e-etcd-client\") pod \"apiserver-7bbb656c7d-kfzvb\" (UID: \"4fc91364-276e-4cc3-bf44-3e5dad5ad06e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kfzvb" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.165031 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnxzj\" (UniqueName: \"kubernetes.io/projected/1d98488b-d521-4207-a7b8-23b37cb1ef98-kube-api-access-lnxzj\") pod \"router-default-5444994796-vndpv\" (UID: \"1d98488b-d521-4207-a7b8-23b37cb1ef98\") " pod="openshift-ingress/router-default-5444994796-vndpv" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.165048 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba-etcd-client\") pod \"apiserver-76f77b778f-lzwwl\" (UID: \"eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba\") " pod="openshift-apiserver/apiserver-76f77b778f-lzwwl" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.165049 4672 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4fc91364-276e-4cc3-bf44-3e5dad5ad06e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-kfzvb\" (UID: \"4fc91364-276e-4cc3-bf44-3e5dad5ad06e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kfzvb" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.165071 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/73969925-7fe2-4e3a-9ede-d1bd990f7f71-metrics-tls\") pod \"ingress-operator-5b745b69d9-8cvrf\" (UID: \"73969925-7fe2-4e3a-9ede-d1bd990f7f71\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8cvrf" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.165094 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwdf7\" (UniqueName: \"kubernetes.io/projected/4fc91364-276e-4cc3-bf44-3e5dad5ad06e-kube-api-access-wwdf7\") pod \"apiserver-7bbb656c7d-kfzvb\" (UID: \"4fc91364-276e-4cc3-bf44-3e5dad5ad06e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kfzvb" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.165126 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6119a50b-94a4-4095-b14c-f009fe646312-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-cdjsz\" (UID: \"6119a50b-94a4-4095-b14c-f009fe646312\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cdjsz" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.165146 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: 
\"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.165163 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/750ef8f5-44ad-4016-8894-0b2a05430464-console-oauth-config\") pod \"console-f9d7485db-d9vk6\" (UID: \"750ef8f5-44ad-4016-8894-0b2a05430464\") " pod="openshift-console/console-f9d7485db-d9vk6" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.165275 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5948d11-a6da-4f21-a6e8-413a28791775-client-ca\") pod \"route-controller-manager-6576b87f9c-vwl87\" (UID: \"b5948d11-a6da-4f21-a6e8-413a28791775\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwl87" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.165308 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aae518f7-37fe-4bd1-9c5b-ba5186684ebd-service-ca-bundle\") pod \"authentication-operator-69f744f599-wtm9c\" (UID: \"aae518f7-37fe-4bd1-9c5b-ba5186684ebd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wtm9c" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.165330 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb857\" (UniqueName: \"kubernetes.io/projected/5e8f5285-7002-4472-be6a-a21731ccaf67-kube-api-access-bb857\") pod \"cluster-samples-operator-665b6dd947-lxmvh\" (UID: \"5e8f5285-7002-4472-be6a-a21731ccaf67\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lxmvh" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.165360 4672 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4fc91364-276e-4cc3-bf44-3e5dad5ad06e-encryption-config\") pod \"apiserver-7bbb656c7d-kfzvb\" (UID: \"4fc91364-276e-4cc3-bf44-3e5dad5ad06e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kfzvb" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.165395 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.165414 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/750ef8f5-44ad-4016-8894-0b2a05430464-trusted-ca-bundle\") pod \"console-f9d7485db-d9vk6\" (UID: \"750ef8f5-44ad-4016-8894-0b2a05430464\") " pod="openshift-console/console-f9d7485db-d9vk6" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.165499 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/655735f2-25f3-4cf3-8b40-a35184576e33-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-h6hxj\" (UID: \"655735f2-25f3-4cf3-8b40-a35184576e33\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h6hxj" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.172593 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fc91364-276e-4cc3-bf44-3e5dad5ad06e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-kfzvb\" (UID: \"4fc91364-276e-4cc3-bf44-3e5dad5ad06e\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kfzvb" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.173196 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4fc91364-276e-4cc3-bf44-3e5dad5ad06e-encryption-config\") pod \"apiserver-7bbb656c7d-kfzvb\" (UID: \"4fc91364-276e-4cc3-bf44-3e5dad5ad06e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kfzvb" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.173300 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fc91364-276e-4cc3-bf44-3e5dad5ad06e-serving-cert\") pod \"apiserver-7bbb656c7d-kfzvb\" (UID: \"4fc91364-276e-4cc3-bf44-3e5dad5ad06e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kfzvb" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.178633 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bkxss"] Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.179161 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4fc91364-276e-4cc3-bf44-3e5dad5ad06e-etcd-client\") pod \"apiserver-7bbb656c7d-kfzvb\" (UID: \"4fc91364-276e-4cc3-bf44-3e5dad5ad06e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kfzvb" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.181526 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522400-b79hz"] Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.182820 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-v9lrm"] Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.184160 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 17 16:05:31 crc kubenswrapper[4672]: 
I0217 16:05:31.186204 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-s2n6p"] Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.187435 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-s2n6p" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.187690 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-zgwq2"] Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.189299 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-n4nj9"] Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.189541 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-zgwq2" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.189896 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8ggmr"] Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.190784 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f7zb5"] Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.192192 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-s2n6p"] Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.193380 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-ltvxx"] Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.194771 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-zgwq2"] Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.196197 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-pp6hc"] Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.196742 4672 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pp6hc" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.197519 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-pp6hc"] Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.231067 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.243538 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.264669 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.266404 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba-image-import-ca\") pod \"apiserver-76f77b778f-lzwwl\" (UID: \"eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba\") " pod="openshift-apiserver/apiserver-76f77b778f-lzwwl" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.266435 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/06bbdf78-c2ef-42f1-8f0d-952f07a4b678-machine-approver-tls\") pod \"machine-approver-56656f9798-bn2m6\" (UID: \"06bbdf78-c2ef-42f1-8f0d-952f07a4b678\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bn2m6" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.266458 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/1d98488b-d521-4207-a7b8-23b37cb1ef98-default-certificate\") pod \"router-default-5444994796-vndpv\" (UID: 
\"1d98488b-d521-4207-a7b8-23b37cb1ef98\") " pod="openshift-ingress/router-default-5444994796-vndpv" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.266480 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/655735f2-25f3-4cf3-8b40-a35184576e33-config\") pod \"openshift-apiserver-operator-796bbdcf4f-h6hxj\" (UID: \"655735f2-25f3-4cf3-8b40-a35184576e33\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h6hxj" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.266500 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzg4b\" (UniqueName: \"kubernetes.io/projected/4d95f63b-d4f4-4da3-a741-c69b49b9233c-kube-api-access-dzg4b\") pod \"dns-operator-744455d44c-b62wz\" (UID: \"4d95f63b-d4f4-4da3-a741-c69b49b9233c\") " pod="openshift-dns-operator/dns-operator-744455d44c-b62wz" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.266531 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtvnk\" (UniqueName: \"kubernetes.io/projected/59e82a1f-2c6a-4938-9696-ffe2eac280ce-kube-api-access-wtvnk\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.266552 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/506b8374-2f07-427a-bf3b-44b1f6f022b5-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zfmgn\" (UID: \"506b8374-2f07-427a-bf3b-44b1f6f022b5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zfmgn" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.266573 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfhg6\" 
(UniqueName: \"kubernetes.io/projected/506b8374-2f07-427a-bf3b-44b1f6f022b5-kube-api-access-rfhg6\") pod \"openshift-controller-manager-operator-756b6f6bc6-zfmgn\" (UID: \"506b8374-2f07-427a-bf3b-44b1f6f022b5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zfmgn" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.266762 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.266795 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aae518f7-37fe-4bd1-9c5b-ba5186684ebd-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-wtm9c\" (UID: \"aae518f7-37fe-4bd1-9c5b-ba5186684ebd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wtm9c" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.266849 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.266876 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba-audit-dir\") pod \"apiserver-76f77b778f-lzwwl\" (UID: \"eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba\") " 
pod="openshift-apiserver/apiserver-76f77b778f-lzwwl" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.266971 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqsxt\" (UniqueName: \"kubernetes.io/projected/eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba-kube-api-access-kqsxt\") pod \"apiserver-76f77b778f-lzwwl\" (UID: \"eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba\") " pod="openshift-apiserver/apiserver-76f77b778f-lzwwl" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.266997 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcrk9\" (UniqueName: \"kubernetes.io/projected/aae518f7-37fe-4bd1-9c5b-ba5186684ebd-kube-api-access-rcrk9\") pod \"authentication-operator-69f744f599-wtm9c\" (UID: \"aae518f7-37fe-4bd1-9c5b-ba5186684ebd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wtm9c" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.267024 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/750ef8f5-44ad-4016-8894-0b2a05430464-console-serving-cert\") pod \"console-f9d7485db-d9vk6\" (UID: \"750ef8f5-44ad-4016-8894-0b2a05430464\") " pod="openshift-console/console-f9d7485db-d9vk6" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.267041 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba-etcd-serving-ca\") pod \"apiserver-76f77b778f-lzwwl\" (UID: \"eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba\") " pod="openshift-apiserver/apiserver-76f77b778f-lzwwl" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.267063 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c83e6ac1-9d88-475f-b293-e3accaf7b812-serving-cert\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-6ljr8\" (UID: \"c83e6ac1-9d88-475f-b293-e3accaf7b812\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6ljr8" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.267837 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/798d1803-87c5-4e9e-a29e-660f313c283c-srv-cert\") pod \"catalog-operator-68c6474976-bbvwk\" (UID: \"798d1803-87c5-4e9e-a29e-660f313c283c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bbvwk" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.267864 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aae518f7-37fe-4bd1-9c5b-ba5186684ebd-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-wtm9c\" (UID: \"aae518f7-37fe-4bd1-9c5b-ba5186684ebd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wtm9c" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.267788 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.267905 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4206b74-7012-47af-9344-253aa7453e86-serving-cert\") pod \"controller-manager-879f6c89f-7p722\" (UID: \"a4206b74-7012-47af-9344-253aa7453e86\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7p722" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.267265 4672 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba-audit-dir\") pod \"apiserver-76f77b778f-lzwwl\" (UID: \"eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba\") " pod="openshift-apiserver/apiserver-76f77b778f-lzwwl" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.267650 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/655735f2-25f3-4cf3-8b40-a35184576e33-config\") pod \"openshift-apiserver-operator-796bbdcf4f-h6hxj\" (UID: \"655735f2-25f3-4cf3-8b40-a35184576e33\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h6hxj" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.267927 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/06bbdf78-c2ef-42f1-8f0d-952f07a4b678-auth-proxy-config\") pod \"machine-approver-56656f9798-bn2m6\" (UID: \"06bbdf78-c2ef-42f1-8f0d-952f07a4b678\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bn2m6" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.267972 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba-encryption-config\") pod \"apiserver-76f77b778f-lzwwl\" (UID: \"eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba\") " pod="openshift-apiserver/apiserver-76f77b778f-lzwwl" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.267990 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5948d11-a6da-4f21-a6e8-413a28791775-config\") pod \"route-controller-manager-6576b87f9c-vwl87\" (UID: \"b5948d11-a6da-4f21-a6e8-413a28791775\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwl87" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 
16:05:31.268006 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/1d98488b-d521-4207-a7b8-23b37cb1ef98-stats-auth\") pod \"router-default-5444994796-vndpv\" (UID: \"1d98488b-d521-4207-a7b8-23b37cb1ef98\") " pod="openshift-ingress/router-default-5444994796-vndpv" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268026 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktcrs\" (UniqueName: \"kubernetes.io/projected/a4206b74-7012-47af-9344-253aa7453e86-kube-api-access-ktcrs\") pod \"controller-manager-879f6c89f-7p722\" (UID: \"a4206b74-7012-47af-9344-253aa7453e86\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7p722" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268045 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6mh9\" (UniqueName: \"kubernetes.io/projected/06bbdf78-c2ef-42f1-8f0d-952f07a4b678-kube-api-access-g6mh9\") pod \"machine-approver-56656f9798-bn2m6\" (UID: \"06bbdf78-c2ef-42f1-8f0d-952f07a4b678\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bn2m6" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268027 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba-etcd-serving-ca\") pod \"apiserver-76f77b778f-lzwwl\" (UID: \"eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba\") " pod="openshift-apiserver/apiserver-76f77b778f-lzwwl" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268062 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvj6w\" (UniqueName: \"kubernetes.io/projected/43a5e5af-ba41-4a32-9893-1c17a54e7024-kube-api-access-rvj6w\") pod \"openshift-config-operator-7777fb866f-6grzz\" (UID: \"43a5e5af-ba41-4a32-9893-1c17a54e7024\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-6grzz" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268078 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43a5e5af-ba41-4a32-9893-1c17a54e7024-serving-cert\") pod \"openshift-config-operator-7777fb866f-6grzz\" (UID: \"43a5e5af-ba41-4a32-9893-1c17a54e7024\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6grzz" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268095 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268110 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268130 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4206b74-7012-47af-9344-253aa7453e86-client-ca\") pod \"controller-manager-879f6c89f-7p722\" (UID: \"a4206b74-7012-47af-9344-253aa7453e86\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7p722" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268157 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnxzj\" (UniqueName: 
\"kubernetes.io/projected/1d98488b-d521-4207-a7b8-23b37cb1ef98-kube-api-access-lnxzj\") pod \"router-default-5444994796-vndpv\" (UID: \"1d98488b-d521-4207-a7b8-23b37cb1ef98\") " pod="openshift-ingress/router-default-5444994796-vndpv" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268174 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba-etcd-client\") pod \"apiserver-76f77b778f-lzwwl\" (UID: \"eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba\") " pod="openshift-apiserver/apiserver-76f77b778f-lzwwl" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268192 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/73969925-7fe2-4e3a-9ede-d1bd990f7f71-metrics-tls\") pod \"ingress-operator-5b745b69d9-8cvrf\" (UID: \"73969925-7fe2-4e3a-9ede-d1bd990f7f71\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8cvrf" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268210 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268230 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/750ef8f5-44ad-4016-8894-0b2a05430464-console-oauth-config\") pod \"console-f9d7485db-d9vk6\" (UID: \"750ef8f5-44ad-4016-8894-0b2a05430464\") " pod="openshift-console/console-f9d7485db-d9vk6" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268257 4672 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6119a50b-94a4-4095-b14c-f009fe646312-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-cdjsz\" (UID: \"6119a50b-94a4-4095-b14c-f009fe646312\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cdjsz" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268275 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aae518f7-37fe-4bd1-9c5b-ba5186684ebd-service-ca-bundle\") pod \"authentication-operator-69f744f599-wtm9c\" (UID: \"aae518f7-37fe-4bd1-9c5b-ba5186684ebd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wtm9c" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268294 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb857\" (UniqueName: \"kubernetes.io/projected/5e8f5285-7002-4472-be6a-a21731ccaf67-kube-api-access-bb857\") pod \"cluster-samples-operator-665b6dd947-lxmvh\" (UID: \"5e8f5285-7002-4472-be6a-a21731ccaf67\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lxmvh" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268295 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba-image-import-ca\") pod \"apiserver-76f77b778f-lzwwl\" (UID: \"eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba\") " pod="openshift-apiserver/apiserver-76f77b778f-lzwwl" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268316 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f4bd667f-f40c-402e-96f4-6978225fc1ed-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-f7zb5\" (UID: \"f4bd667f-f40c-402e-96f4-6978225fc1ed\") 
" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f7zb5" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268337 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f4sr\" (UniqueName: \"kubernetes.io/projected/798d1803-87c5-4e9e-a29e-660f313c283c-kube-api-access-6f4sr\") pod \"catalog-operator-68c6474976-bbvwk\" (UID: \"798d1803-87c5-4e9e-a29e-660f313c283c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bbvwk" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268355 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5948d11-a6da-4f21-a6e8-413a28791775-client-ca\") pod \"route-controller-manager-6576b87f9c-vwl87\" (UID: \"b5948d11-a6da-4f21-a6e8-413a28791775\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwl87" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268370 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268387 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/750ef8f5-44ad-4016-8894-0b2a05430464-trusted-ca-bundle\") pod \"console-f9d7485db-d9vk6\" (UID: \"750ef8f5-44ad-4016-8894-0b2a05430464\") " pod="openshift-console/console-f9d7485db-d9vk6" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268406 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/655735f2-25f3-4cf3-8b40-a35184576e33-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-h6hxj\" (UID: \"655735f2-25f3-4cf3-8b40-a35184576e33\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h6hxj"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268426 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxnk2\" (UniqueName: \"kubernetes.io/projected/6119a50b-94a4-4095-b14c-f009fe646312-kube-api-access-mxnk2\") pod \"machine-api-operator-5694c8668f-cdjsz\" (UID: \"6119a50b-94a4-4095-b14c-f009fe646312\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cdjsz"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268445 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wxpr\" (UniqueName: \"kubernetes.io/projected/f4bd667f-f40c-402e-96f4-6978225fc1ed-kube-api-access-7wxpr\") pod \"package-server-manager-789f6589d5-f7zb5\" (UID: \"f4bd667f-f40c-402e-96f4-6978225fc1ed\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f7zb5"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268467 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba-trusted-ca-bundle\") pod \"apiserver-76f77b778f-lzwwl\" (UID: \"eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba\") " pod="openshift-apiserver/apiserver-76f77b778f-lzwwl"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268489 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4206b74-7012-47af-9344-253aa7453e86-config\") pod \"controller-manager-879f6c89f-7p722\" (UID: \"a4206b74-7012-47af-9344-253aa7453e86\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7p722"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268504 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba-node-pullsecrets\") pod \"apiserver-76f77b778f-lzwwl\" (UID: \"eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba\") " pod="openshift-apiserver/apiserver-76f77b778f-lzwwl"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268544 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aae518f7-37fe-4bd1-9c5b-ba5186684ebd-config\") pod \"authentication-operator-69f744f599-wtm9c\" (UID: \"aae518f7-37fe-4bd1-9c5b-ba5186684ebd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wtm9c"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268564 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba-audit\") pod \"apiserver-76f77b778f-lzwwl\" (UID: \"eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba\") " pod="openshift-apiserver/apiserver-76f77b778f-lzwwl"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268585 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06bbdf78-c2ef-42f1-8f0d-952f07a4b678-config\") pod \"machine-approver-56656f9798-bn2m6\" (UID: \"06bbdf78-c2ef-42f1-8f0d-952f07a4b678\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bn2m6"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268610 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/59e82a1f-2c6a-4938-9696-ffe2eac280ce-audit-dir\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268628 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba-config\") pod \"apiserver-76f77b778f-lzwwl\" (UID: \"eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba\") " pod="openshift-apiserver/apiserver-76f77b778f-lzwwl"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268644 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks7zk\" (UniqueName: \"kubernetes.io/projected/750ef8f5-44ad-4016-8894-0b2a05430464-kube-api-access-ks7zk\") pod \"console-f9d7485db-d9vk6\" (UID: \"750ef8f5-44ad-4016-8894-0b2a05430464\") " pod="openshift-console/console-f9d7485db-d9vk6"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268668 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268685 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/750ef8f5-44ad-4016-8894-0b2a05430464-console-config\") pod \"console-f9d7485db-d9vk6\" (UID: \"750ef8f5-44ad-4016-8894-0b2a05430464\") " pod="openshift-console/console-f9d7485db-d9vk6"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268702 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d98488b-d521-4207-a7b8-23b37cb1ef98-service-ca-bundle\") pod \"router-default-5444994796-vndpv\" (UID: \"1d98488b-d521-4207-a7b8-23b37cb1ef98\") " pod="openshift-ingress/router-default-5444994796-vndpv"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268728 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/798d1803-87c5-4e9e-a29e-660f313c283c-profile-collector-cert\") pod \"catalog-operator-68c6474976-bbvwk\" (UID: \"798d1803-87c5-4e9e-a29e-660f313c283c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bbvwk"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268747 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d45s\" (UniqueName: \"kubernetes.io/projected/73969925-7fe2-4e3a-9ede-d1bd990f7f71-kube-api-access-9d45s\") pod \"ingress-operator-5b745b69d9-8cvrf\" (UID: \"73969925-7fe2-4e3a-9ede-d1bd990f7f71\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8cvrf"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268770 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/506b8374-2f07-427a-bf3b-44b1f6f022b5-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zfmgn\" (UID: \"506b8374-2f07-427a-bf3b-44b1f6f022b5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zfmgn"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268789 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/59e82a1f-2c6a-4938-9696-ffe2eac280ce-audit-policies\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268806 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba-serving-cert\") pod \"apiserver-76f77b778f-lzwwl\" (UID: \"eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba\") " pod="openshift-apiserver/apiserver-76f77b778f-lzwwl"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268822 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73969925-7fe2-4e3a-9ede-d1bd990f7f71-trusted-ca\") pod \"ingress-operator-5b745b69d9-8cvrf\" (UID: \"73969925-7fe2-4e3a-9ede-d1bd990f7f71\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8cvrf"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268841 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4206b74-7012-47af-9344-253aa7453e86-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-7p722\" (UID: \"a4206b74-7012-47af-9344-253aa7453e86\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7p722"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268858 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268859 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/06bbdf78-c2ef-42f1-8f0d-952f07a4b678-auth-proxy-config\") pod \"machine-approver-56656f9798-bn2m6\" (UID: \"06bbdf78-c2ef-42f1-8f0d-952f07a4b678\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bn2m6"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268874 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4d95f63b-d4f4-4da3-a741-c69b49b9233c-metrics-tls\") pod \"dns-operator-744455d44c-b62wz\" (UID: \"4d95f63b-d4f4-4da3-a741-c69b49b9233c\") " pod="openshift-dns-operator/dns-operator-744455d44c-b62wz"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268902 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5948d11-a6da-4f21-a6e8-413a28791775-serving-cert\") pod \"route-controller-manager-6576b87f9c-vwl87\" (UID: \"b5948d11-a6da-4f21-a6e8-413a28791775\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwl87"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268927 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aae518f7-37fe-4bd1-9c5b-ba5186684ebd-serving-cert\") pod \"authentication-operator-69f744f599-wtm9c\" (UID: \"aae518f7-37fe-4bd1-9c5b-ba5186684ebd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wtm9c"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268963 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btntj\" (UniqueName: \"kubernetes.io/projected/b5948d11-a6da-4f21-a6e8-413a28791775-kube-api-access-btntj\") pod \"route-controller-manager-6576b87f9c-vwl87\" (UID: \"b5948d11-a6da-4f21-a6e8-413a28791775\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwl87"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.268988 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/750ef8f5-44ad-4016-8894-0b2a05430464-service-ca\") pod \"console-f9d7485db-d9vk6\" (UID: \"750ef8f5-44ad-4016-8894-0b2a05430464\") " pod="openshift-console/console-f9d7485db-d9vk6"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.269011 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/750ef8f5-44ad-4016-8894-0b2a05430464-oauth-serving-cert\") pod \"console-f9d7485db-d9vk6\" (UID: \"750ef8f5-44ad-4016-8894-0b2a05430464\") " pod="openshift-console/console-f9d7485db-d9vk6"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.269038 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c83e6ac1-9d88-475f-b293-e3accaf7b812-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6ljr8\" (UID: \"c83e6ac1-9d88-475f-b293-e3accaf7b812\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6ljr8"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.269061 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.269080 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw54j\" (UniqueName: \"kubernetes.io/projected/655735f2-25f3-4cf3-8b40-a35184576e33-kube-api-access-xw54j\") pod \"openshift-apiserver-operator-796bbdcf4f-h6hxj\" (UID: \"655735f2-25f3-4cf3-8b40-a35184576e33\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h6hxj"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.269101 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73969925-7fe2-4e3a-9ede-d1bd990f7f71-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8cvrf\" (UID: \"73969925-7fe2-4e3a-9ede-d1bd990f7f71\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8cvrf"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.269119 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d98488b-d521-4207-a7b8-23b37cb1ef98-metrics-certs\") pod \"router-default-5444994796-vndpv\" (UID: \"1d98488b-d521-4207-a7b8-23b37cb1ef98\") " pod="openshift-ingress/router-default-5444994796-vndpv"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.269138 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.269155 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.269175 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5e8f5285-7002-4472-be6a-a21731ccaf67-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-lxmvh\" (UID: \"5e8f5285-7002-4472-be6a-a21731ccaf67\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lxmvh"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.269192 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c83e6ac1-9d88-475f-b293-e3accaf7b812-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6ljr8\" (UID: \"c83e6ac1-9d88-475f-b293-e3accaf7b812\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6ljr8"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.269220 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6119a50b-94a4-4095-b14c-f009fe646312-images\") pod \"machine-api-operator-5694c8668f-cdjsz\" (UID: \"6119a50b-94a4-4095-b14c-f009fe646312\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cdjsz"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.269239 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6119a50b-94a4-4095-b14c-f009fe646312-config\") pod \"machine-api-operator-5694c8668f-cdjsz\" (UID: \"6119a50b-94a4-4095-b14c-f009fe646312\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cdjsz"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.269256 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/43a5e5af-ba41-4a32-9893-1c17a54e7024-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6grzz\" (UID: \"43a5e5af-ba41-4a32-9893-1c17a54e7024\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6grzz"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.269588 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/43a5e5af-ba41-4a32-9893-1c17a54e7024-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6grzz\" (UID: \"43a5e5af-ba41-4a32-9893-1c17a54e7024\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6grzz"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.269877 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.269924 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/506b8374-2f07-427a-bf3b-44b1f6f022b5-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zfmgn\" (UID: \"506b8374-2f07-427a-bf3b-44b1f6f022b5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zfmgn"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.271150 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d98488b-d521-4207-a7b8-23b37cb1ef98-service-ca-bundle\") pod \"router-default-5444994796-vndpv\" (UID: \"1d98488b-d521-4207-a7b8-23b37cb1ef98\") " pod="openshift-ingress/router-default-5444994796-vndpv"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.271244 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba-trusted-ca-bundle\") pod \"apiserver-76f77b778f-lzwwl\" (UID: \"eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba\") " pod="openshift-apiserver/apiserver-76f77b778f-lzwwl"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.271730 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/750ef8f5-44ad-4016-8894-0b2a05430464-service-ca\") pod \"console-f9d7485db-d9vk6\" (UID: \"750ef8f5-44ad-4016-8894-0b2a05430464\") " pod="openshift-console/console-f9d7485db-d9vk6"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.271796 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/59e82a1f-2c6a-4938-9696-ffe2eac280ce-audit-dir\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.271942 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/750ef8f5-44ad-4016-8894-0b2a05430464-console-config\") pod \"console-f9d7485db-d9vk6\" (UID: \"750ef8f5-44ad-4016-8894-0b2a05430464\") " pod="openshift-console/console-f9d7485db-d9vk6"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.272468 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/06bbdf78-c2ef-42f1-8f0d-952f07a4b678-machine-approver-tls\") pod \"machine-approver-56656f9798-bn2m6\" (UID: \"06bbdf78-c2ef-42f1-8f0d-952f07a4b678\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bn2m6"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.272578 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4206b74-7012-47af-9344-253aa7453e86-client-ca\") pod \"controller-manager-879f6c89f-7p722\" (UID: \"a4206b74-7012-47af-9344-253aa7453e86\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7p722"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.272928 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/506b8374-2f07-427a-bf3b-44b1f6f022b5-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zfmgn\" (UID: \"506b8374-2f07-427a-bf3b-44b1f6f022b5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zfmgn"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.273045 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06bbdf78-c2ef-42f1-8f0d-952f07a4b678-config\") pod \"machine-approver-56656f9798-bn2m6\" (UID: \"06bbdf78-c2ef-42f1-8f0d-952f07a4b678\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bn2m6"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.273127 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba-config\") pod \"apiserver-76f77b778f-lzwwl\" (UID: \"eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba\") " pod="openshift-apiserver/apiserver-76f77b778f-lzwwl"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.273144 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6119a50b-94a4-4095-b14c-f009fe646312-images\") pod \"machine-api-operator-5694c8668f-cdjsz\" (UID: \"6119a50b-94a4-4095-b14c-f009fe646312\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cdjsz"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.273273 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/750ef8f5-44ad-4016-8894-0b2a05430464-oauth-serving-cert\") pod \"console-f9d7485db-d9vk6\" (UID: \"750ef8f5-44ad-4016-8894-0b2a05430464\") " pod="openshift-console/console-f9d7485db-d9vk6"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.273376 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/750ef8f5-44ad-4016-8894-0b2a05430464-console-serving-cert\") pod \"console-f9d7485db-d9vk6\" (UID: \"750ef8f5-44ad-4016-8894-0b2a05430464\") " pod="openshift-console/console-f9d7485db-d9vk6"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.273389 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6119a50b-94a4-4095-b14c-f009fe646312-config\") pod \"machine-api-operator-5694c8668f-cdjsz\" (UID: \"6119a50b-94a4-4095-b14c-f009fe646312\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cdjsz"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.274025 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d98488b-d521-4207-a7b8-23b37cb1ef98-metrics-certs\") pod \"router-default-5444994796-vndpv\" (UID: \"1d98488b-d521-4207-a7b8-23b37cb1ef98\") " pod="openshift-ingress/router-default-5444994796-vndpv"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.274035 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4206b74-7012-47af-9344-253aa7453e86-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-7p722\" (UID: \"a4206b74-7012-47af-9344-253aa7453e86\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7p722"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.274061 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4206b74-7012-47af-9344-253aa7453e86-config\") pod \"controller-manager-879f6c89f-7p722\" (UID: \"a4206b74-7012-47af-9344-253aa7453e86\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7p722"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.274088 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba-node-pullsecrets\") pod \"apiserver-76f77b778f-lzwwl\" (UID: \"eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba\") " pod="openshift-apiserver/apiserver-76f77b778f-lzwwl"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.274598 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.274879 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73969925-7fe2-4e3a-9ede-d1bd990f7f71-trusted-ca\") pod \"ingress-operator-5b745b69d9-8cvrf\" (UID: \"73969925-7fe2-4e3a-9ede-d1bd990f7f71\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8cvrf"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.274906 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.275246 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5948d11-a6da-4f21-a6e8-413a28791775-client-ca\") pod \"route-controller-manager-6576b87f9c-vwl87\" (UID: \"b5948d11-a6da-4f21-a6e8-413a28791775\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwl87"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.275413 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba-audit\") pod \"apiserver-76f77b778f-lzwwl\" (UID: \"eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba\") " pod="openshift-apiserver/apiserver-76f77b778f-lzwwl"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.275961 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aae518f7-37fe-4bd1-9c5b-ba5186684ebd-config\") pod \"authentication-operator-69f744f599-wtm9c\" (UID: \"aae518f7-37fe-4bd1-9c5b-ba5186684ebd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wtm9c"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.276075 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4d95f63b-d4f4-4da3-a741-c69b49b9233c-metrics-tls\") pod \"dns-operator-744455d44c-b62wz\" (UID: \"4d95f63b-d4f4-4da3-a741-c69b49b9233c\") " pod="openshift-dns-operator/dns-operator-744455d44c-b62wz"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.276373 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aae518f7-37fe-4bd1-9c5b-ba5186684ebd-serving-cert\") pod \"authentication-operator-69f744f599-wtm9c\" (UID: \"aae518f7-37fe-4bd1-9c5b-ba5186684ebd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wtm9c"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.276440 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5948d11-a6da-4f21-a6e8-413a28791775-serving-cert\") pod \"route-controller-manager-6576b87f9c-vwl87\" (UID: \"b5948d11-a6da-4f21-a6e8-413a28791775\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwl87"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.276795 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aae518f7-37fe-4bd1-9c5b-ba5186684ebd-service-ca-bundle\") pod \"authentication-operator-69f744f599-wtm9c\" (UID: \"aae518f7-37fe-4bd1-9c5b-ba5186684ebd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wtm9c"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.276945 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.277226 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/73969925-7fe2-4e3a-9ede-d1bd990f7f71-metrics-tls\") pod \"ingress-operator-5b745b69d9-8cvrf\" (UID: \"73969925-7fe2-4e3a-9ede-d1bd990f7f71\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8cvrf"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.277367 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/750ef8f5-44ad-4016-8894-0b2a05430464-trusted-ca-bundle\") pod \"console-f9d7485db-d9vk6\" (UID: \"750ef8f5-44ad-4016-8894-0b2a05430464\") " pod="openshift-console/console-f9d7485db-d9vk6"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.277669 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6119a50b-94a4-4095-b14c-f009fe646312-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-cdjsz\" (UID: \"6119a50b-94a4-4095-b14c-f009fe646312\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cdjsz"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.277818 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/655735f2-25f3-4cf3-8b40-a35184576e33-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-h6hxj\" (UID: \"655735f2-25f3-4cf3-8b40-a35184576e33\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h6hxj"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.277917 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/59e82a1f-2c6a-4938-9696-ffe2eac280ce-audit-policies\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.277961 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5e8f5285-7002-4472-be6a-a21731ccaf67-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-lxmvh\" (UID: \"5e8f5285-7002-4472-be6a-a21731ccaf67\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lxmvh"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.278335 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.278466 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/1d98488b-d521-4207-a7b8-23b37cb1ef98-stats-auth\") pod \"router-default-5444994796-vndpv\" (UID: \"1d98488b-d521-4207-a7b8-23b37cb1ef98\") " pod="openshift-ingress/router-default-5444994796-vndpv"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.278612 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.279018 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/750ef8f5-44ad-4016-8894-0b2a05430464-console-oauth-config\") pod \"console-f9d7485db-d9vk6\" (UID: \"750ef8f5-44ad-4016-8894-0b2a05430464\") " pod="openshift-console/console-f9d7485db-d9vk6"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.279448 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba-serving-cert\") pod \"apiserver-76f77b778f-lzwwl\" (UID: \"eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba\") " pod="openshift-apiserver/apiserver-76f77b778f-lzwwl"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.280121 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.280655 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5948d11-a6da-4f21-a6e8-413a28791775-config\") pod \"route-controller-manager-6576b87f9c-vwl87\" (UID: \"b5948d11-a6da-4f21-a6e8-413a28791775\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwl87"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.281315 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4206b74-7012-47af-9344-253aa7453e86-serving-cert\") pod \"controller-manager-879f6c89f-7p722\" (UID: \"a4206b74-7012-47af-9344-253aa7453e86\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7p722"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.281442 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.281535 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/1d98488b-d521-4207-a7b8-23b37cb1ef98-default-certificate\") pod \"router-default-5444994796-vndpv\" (UID: \"1d98488b-d521-4207-a7b8-23b37cb1ef98\") " pod="openshift-ingress/router-default-5444994796-vndpv"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.281874 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba-encryption-config\") pod \"apiserver-76f77b778f-lzwwl\" (UID: \"eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba\") " pod="openshift-apiserver/apiserver-76f77b778f-lzwwl"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.281970 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba-etcd-client\") pod \"apiserver-76f77b778f-lzwwl\" (UID: \"eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba\") " pod="openshift-apiserver/apiserver-76f77b778f-lzwwl"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.282727 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.283911 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43a5e5af-ba41-4a32-9893-1c17a54e7024-serving-cert\") pod \"openshift-config-operator-7777fb866f-6grzz\" (UID: \"43a5e5af-ba41-4a32-9893-1c17a54e7024\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6grzz"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.284460 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.285234 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.304701 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.323036 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.343811 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.364043 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.370239 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c83e6ac1-9d88-475f-b293-e3accaf7b812-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6ljr8\" (UID: \"c83e6ac1-9d88-475f-b293-e3accaf7b812\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6ljr8"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.370315 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c83e6ac1-9d88-475f-b293-e3accaf7b812-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6ljr8\" (UID: \"c83e6ac1-9d88-475f-b293-e3accaf7b812\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6ljr8"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.370412 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c83e6ac1-9d88-475f-b293-e3accaf7b812-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6ljr8\" (UID: \"c83e6ac1-9d88-475f-b293-e3accaf7b812\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6ljr8"
Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.370450 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for
volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/798d1803-87c5-4e9e-a29e-660f313c283c-srv-cert\") pod \"catalog-operator-68c6474976-bbvwk\" (UID: \"798d1803-87c5-4e9e-a29e-660f313c283c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bbvwk" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.370582 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f4sr\" (UniqueName: \"kubernetes.io/projected/798d1803-87c5-4e9e-a29e-660f313c283c-kube-api-access-6f4sr\") pod \"catalog-operator-68c6474976-bbvwk\" (UID: \"798d1803-87c5-4e9e-a29e-660f313c283c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bbvwk" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.370626 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f4bd667f-f40c-402e-96f4-6978225fc1ed-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-f7zb5\" (UID: \"f4bd667f-f40c-402e-96f4-6978225fc1ed\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f7zb5" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.370676 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wxpr\" (UniqueName: \"kubernetes.io/projected/f4bd667f-f40c-402e-96f4-6978225fc1ed-kube-api-access-7wxpr\") pod \"package-server-manager-789f6589d5-f7zb5\" (UID: \"f4bd667f-f40c-402e-96f4-6978225fc1ed\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f7zb5" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.370760 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/798d1803-87c5-4e9e-a29e-660f313c283c-profile-collector-cert\") pod \"catalog-operator-68c6474976-bbvwk\" (UID: 
\"798d1803-87c5-4e9e-a29e-660f313c283c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bbvwk" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.385056 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.414463 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.424271 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.448786 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.465128 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.484246 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.504505 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.524557 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.546833 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.564594 4672 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.584857 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.604286 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.624381 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.645175 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.664779 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.684677 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.704288 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.725066 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.744036 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.765012 4672 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.784960 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.803790 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.824857 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.844158 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.864615 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.884799 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.904768 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.918325 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c83e6ac1-9d88-475f-b293-e3accaf7b812-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6ljr8\" (UID: \"c83e6ac1-9d88-475f-b293-e3accaf7b812\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6ljr8" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.925184 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.932363 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c83e6ac1-9d88-475f-b293-e3accaf7b812-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6ljr8\" (UID: \"c83e6ac1-9d88-475f-b293-e3accaf7b812\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6ljr8" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.944426 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.965406 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.986104 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 17 16:05:31 crc kubenswrapper[4672]: I0217 16:05:31.996897 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/798d1803-87c5-4e9e-a29e-660f313c283c-srv-cert\") pod \"catalog-operator-68c6474976-bbvwk\" (UID: \"798d1803-87c5-4e9e-a29e-660f313c283c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bbvwk" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.004198 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.016753 
4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/798d1803-87c5-4e9e-a29e-660f313c283c-profile-collector-cert\") pod \"catalog-operator-68c6474976-bbvwk\" (UID: \"798d1803-87c5-4e9e-a29e-660f313c283c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bbvwk" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.025067 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.044326 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.065877 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.085966 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.104879 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.122159 4672 request.go:700] Waited for 1.006372947s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-client&limit=500&resourceVersion=0 Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.126577 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.142445 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.145454 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.164623 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.185370 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.204773 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.224977 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.244351 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.265407 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.284806 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.306229 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.325293 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 
16:05:32.345277 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.365227 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 17 16:05:32 crc kubenswrapper[4672]: E0217 16:05:32.371425 4672 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 17 16:05:32 crc kubenswrapper[4672]: E0217 16:05:32.371500 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f4bd667f-f40c-402e-96f4-6978225fc1ed-package-server-manager-serving-cert podName:f4bd667f-f40c-402e-96f4-6978225fc1ed nodeName:}" failed. No retries permitted until 2026-02-17 16:05:32.871475777 +0000 UTC m=+141.625564519 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/f4bd667f-f40c-402e-96f4-6978225fc1ed-package-server-manager-serving-cert") pod "package-server-manager-789f6589d5-f7zb5" (UID: "f4bd667f-f40c-402e-96f4-6978225fc1ed") : failed to sync secret cache: timed out waiting for the condition Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.395385 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.405123 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.424758 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.444311 4672 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.465291 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.485924 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.505270 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.524911 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.544479 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.564745 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.584621 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.605809 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.625162 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.644889 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 17 16:05:32 crc kubenswrapper[4672]: 
I0217 16:05:32.666192 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.684834 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.705243 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.724436 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.744813 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.765801 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.825074 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.831643 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwdf7\" (UniqueName: \"kubernetes.io/projected/4fc91364-276e-4cc3-bf44-3e5dad5ad06e-kube-api-access-wwdf7\") pod \"apiserver-7bbb656c7d-kfzvb\" (UID: \"4fc91364-276e-4cc3-bf44-3e5dad5ad06e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kfzvb" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.843894 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.864417 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 17 
16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.884359 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.897833 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f4bd667f-f40c-402e-96f4-6978225fc1ed-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-f7zb5\" (UID: \"f4bd667f-f40c-402e-96f4-6978225fc1ed\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f7zb5" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.901916 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f4bd667f-f40c-402e-96f4-6978225fc1ed-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-f7zb5\" (UID: \"f4bd667f-f40c-402e-96f4-6978225fc1ed\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f7zb5" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.905063 4672 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.923853 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.945339 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.964587 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 17 16:05:32 crc kubenswrapper[4672]: I0217 16:05:32.984607 4672 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-canary"/"canary-serving-cert" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.004735 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.074289 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtvnk\" (UniqueName: \"kubernetes.io/projected/59e82a1f-2c6a-4938-9696-ffe2eac280ce-kube-api-access-wtvnk\") pod \"oauth-openshift-558db77b4-fds9q\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") " pod="openshift-authentication/oauth-openshift-558db77b4-fds9q" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.076855 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kfzvb" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.094716 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzg4b\" (UniqueName: \"kubernetes.io/projected/4d95f63b-d4f4-4da3-a741-c69b49b9233c-kube-api-access-dzg4b\") pod \"dns-operator-744455d44c-b62wz\" (UID: \"4d95f63b-d4f4-4da3-a741-c69b49b9233c\") " pod="openshift-dns-operator/dns-operator-744455d44c-b62wz" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.116916 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfhg6\" (UniqueName: \"kubernetes.io/projected/506b8374-2f07-427a-bf3b-44b1f6f022b5-kube-api-access-rfhg6\") pod \"openshift-controller-manager-operator-756b6f6bc6-zfmgn\" (UID: \"506b8374-2f07-427a-bf3b-44b1f6f022b5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zfmgn" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.123073 4672 request.go:700] Waited for 1.855349395s due to client-side throttling, not priority and fairness, request: 
POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/serviceaccounts/authentication-operator/token Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.138696 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqsxt\" (UniqueName: \"kubernetes.io/projected/eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba-kube-api-access-kqsxt\") pod \"apiserver-76f77b778f-lzwwl\" (UID: \"eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba\") " pod="openshift-apiserver/apiserver-76f77b778f-lzwwl" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.153552 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcrk9\" (UniqueName: \"kubernetes.io/projected/aae518f7-37fe-4bd1-9c5b-ba5186684ebd-kube-api-access-rcrk9\") pod \"authentication-operator-69f744f599-wtm9c\" (UID: \"aae518f7-37fe-4bd1-9c5b-ba5186684ebd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wtm9c" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.172713 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktcrs\" (UniqueName: \"kubernetes.io/projected/a4206b74-7012-47af-9344-253aa7453e86-kube-api-access-ktcrs\") pod \"controller-manager-879f6c89f-7p722\" (UID: \"a4206b74-7012-47af-9344-253aa7453e86\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7p722" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.187200 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fds9q" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.194045 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6mh9\" (UniqueName: \"kubernetes.io/projected/06bbdf78-c2ef-42f1-8f0d-952f07a4b678-kube-api-access-g6mh9\") pod \"machine-approver-56656f9798-bn2m6\" (UID: \"06bbdf78-c2ef-42f1-8f0d-952f07a4b678\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bn2m6" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.221681 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvj6w\" (UniqueName: \"kubernetes.io/projected/43a5e5af-ba41-4a32-9893-1c17a54e7024-kube-api-access-rvj6w\") pod \"openshift-config-operator-7777fb866f-6grzz\" (UID: \"43a5e5af-ba41-4a32-9893-1c17a54e7024\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6grzz" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.236769 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw54j\" (UniqueName: \"kubernetes.io/projected/655735f2-25f3-4cf3-8b40-a35184576e33-kube-api-access-xw54j\") pod \"openshift-apiserver-operator-796bbdcf4f-h6hxj\" (UID: \"655735f2-25f3-4cf3-8b40-a35184576e33\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h6hxj" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.241802 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-b62wz" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.242990 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-lzwwl" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.246192 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73969925-7fe2-4e3a-9ede-d1bd990f7f71-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8cvrf\" (UID: \"73969925-7fe2-4e3a-9ede-d1bd990f7f71\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8cvrf" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.259422 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-wtm9c" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.274725 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zfmgn" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.283282 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btntj\" (UniqueName: \"kubernetes.io/projected/b5948d11-a6da-4f21-a6e8-413a28791775-kube-api-access-btntj\") pod \"route-controller-manager-6576b87f9c-vwl87\" (UID: \"b5948d11-a6da-4f21-a6e8-413a28791775\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwl87" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.299544 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnxzj\" (UniqueName: \"kubernetes.io/projected/1d98488b-d521-4207-a7b8-23b37cb1ef98-kube-api-access-lnxzj\") pod \"router-default-5444994796-vndpv\" (UID: \"1d98488b-d521-4207-a7b8-23b37cb1ef98\") " pod="openshift-ingress/router-default-5444994796-vndpv" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.304184 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks7zk\" (UniqueName: 
\"kubernetes.io/projected/750ef8f5-44ad-4016-8894-0b2a05430464-kube-api-access-ks7zk\") pod \"console-f9d7485db-d9vk6\" (UID: \"750ef8f5-44ad-4016-8894-0b2a05430464\") " pod="openshift-console/console-f9d7485db-d9vk6" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.313867 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-vndpv" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.319875 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb857\" (UniqueName: \"kubernetes.io/projected/5e8f5285-7002-4472-be6a-a21731ccaf67-kube-api-access-bb857\") pod \"cluster-samples-operator-665b6dd947-lxmvh\" (UID: \"5e8f5285-7002-4472-be6a-a21731ccaf67\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lxmvh" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.347730 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d45s\" (UniqueName: \"kubernetes.io/projected/73969925-7fe2-4e3a-9ede-d1bd990f7f71-kube-api-access-9d45s\") pod \"ingress-operator-5b745b69d9-8cvrf\" (UID: \"73969925-7fe2-4e3a-9ede-d1bd990f7f71\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8cvrf" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.369184 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxnk2\" (UniqueName: \"kubernetes.io/projected/6119a50b-94a4-4095-b14c-f009fe646312-kube-api-access-mxnk2\") pod \"machine-api-operator-5694c8668f-cdjsz\" (UID: \"6119a50b-94a4-4095-b14c-f009fe646312\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cdjsz" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.384090 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c83e6ac1-9d88-475f-b293-e3accaf7b812-kube-api-access\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-6ljr8\" (UID: \"c83e6ac1-9d88-475f-b293-e3accaf7b812\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6ljr8" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.402092 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f4sr\" (UniqueName: \"kubernetes.io/projected/798d1803-87c5-4e9e-a29e-660f313c283c-kube-api-access-6f4sr\") pod \"catalog-operator-68c6474976-bbvwk\" (UID: \"798d1803-87c5-4e9e-a29e-660f313c283c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bbvwk" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.414809 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-cdjsz" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.421294 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6ljr8" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.427277 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wxpr\" (UniqueName: \"kubernetes.io/projected/f4bd667f-f40c-402e-96f4-6978225fc1ed-kube-api-access-7wxpr\") pod \"package-server-manager-789f6589d5-f7zb5\" (UID: \"f4bd667f-f40c-402e-96f4-6978225fc1ed\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f7zb5" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.427536 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bbvwk" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.439714 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-7p722" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.462243 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bn2m6" Feb 17 16:05:33 crc kubenswrapper[4672]: W0217 16:05:33.479667 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06bbdf78_c2ef_42f1_8f0d_952f07a4b678.slice/crio-8afa06902ec64351d3d517e7558e563f1d18b3f20a4708221722c25d677498dc WatchSource:0}: Error finding container 8afa06902ec64351d3d517e7558e563f1d18b3f20a4708221722c25d677498dc: Status 404 returned error can't find the container with id 8afa06902ec64351d3d517e7558e563f1d18b3f20a4708221722c25d677498dc Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.494876 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h6hxj" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.504548 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c71d687-c381-4e1d-8a4a-ae72f4b00f9f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-pxnrp\" (UID: \"7c71d687-c381-4e1d-8a4a-ae72f4b00f9f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pxnrp" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.504580 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9318de0-6c45-4506-a667-b8e7180f7584-config\") pod \"kube-controller-manager-operator-78b949d7b-dwz6v\" (UID: \"e9318de0-6c45-4506-a667-b8e7180f7584\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dwz6v" Feb 17 
16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.504609 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe511f9f-bc6a-4e27-9837-703d6b981fb7-serving-cert\") pod \"etcd-operator-b45778765-mqrdm\" (UID: \"fe511f9f-bc6a-4e27-9837-703d6b981fb7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mqrdm" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.504635 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zv29\" (UniqueName: \"kubernetes.io/projected/9c333aa9-e463-465a-957f-34e571dc6741-kube-api-access-8zv29\") pod \"service-ca-operator-777779d784-76rxw\" (UID: \"9c333aa9-e463-465a-957f-34e571dc6741\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-76rxw" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.504652 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx4q8\" (UniqueName: \"kubernetes.io/projected/fa50bf9f-5749-43e8-9ddb-7f9be1c6d8ce-kube-api-access-fx4q8\") pod \"migrator-59844c95c7-hgfxd\" (UID: \"fa50bf9f-5749-43e8-9ddb-7f9be1c6d8ce\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hgfxd" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.504667 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsv79\" (UniqueName: \"kubernetes.io/projected/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-kube-api-access-gsv79\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.504686 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/fe511f9f-bc6a-4e27-9837-703d6b981fb7-etcd-service-ca\") pod \"etcd-operator-b45778765-mqrdm\" (UID: \"fe511f9f-bc6a-4e27-9837-703d6b981fb7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mqrdm" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.508582 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8f99610f-8623-43fe-a352-8bb6ced6a41c-proxy-tls\") pod \"machine-config-operator-74547568cd-q54wv\" (UID: \"8f99610f-8623-43fe-a352-8bb6ced6a41c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q54wv" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.508614 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2180d9e0-3678-4bdd-84aa-0dba230aa4e3-config-volume\") pod \"collect-profiles-29522400-b79hz\" (UID: \"2180d9e0-3678-4bdd-84aa-0dba230aa4e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-b79hz" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.508641 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8f99610f-8623-43fe-a352-8bb6ced6a41c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-q54wv\" (UID: \"8f99610f-8623-43fe-a352-8bb6ced6a41c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q54wv" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.508693 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6vt2\" (UniqueName: \"kubernetes.io/projected/fa3f4bc7-5a08-4820-bba2-12b682296098-kube-api-access-b6vt2\") pod \"cluster-image-registry-operator-dc59b4c8b-px67v\" (UID: \"fa3f4bc7-5a08-4820-bba2-12b682296098\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-px67v" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.508719 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpj2t\" (UniqueName: \"kubernetes.io/projected/eab5a9da-bb14-4f97-9c54-eaa7972a047d-kube-api-access-rpj2t\") pod \"kube-storage-version-migrator-operator-b67b599dd-76xtz\" (UID: \"eab5a9da-bb14-4f97-9c54-eaa7972a047d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-76xtz" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.508736 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51a45e2f-3728-46d0-b04c-cdec82ed7d58-config\") pod \"console-operator-58897d9998-gcffc\" (UID: \"51a45e2f-3728-46d0-b04c-cdec82ed7d58\") " pod="openshift-console-operator/console-operator-58897d9998-gcffc" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.508750 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ee33c3da-fd0b-45a5-8337-b600c2b6b11e-srv-cert\") pod \"olm-operator-6b444d44fb-bkxss\" (UID: \"ee33c3da-fd0b-45a5-8337-b600c2b6b11e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bkxss" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.508774 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-bound-sa-token\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.508790 4672 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c333aa9-e463-465a-957f-34e571dc6741-serving-cert\") pod \"service-ca-operator-777779d784-76rxw\" (UID: \"9c333aa9-e463-465a-957f-34e571dc6741\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-76rxw" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.508804 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9318de0-6c45-4506-a667-b8e7180f7584-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-dwz6v\" (UID: \"e9318de0-6c45-4506-a667-b8e7180f7584\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dwz6v" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.508828 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2180d9e0-3678-4bdd-84aa-0dba230aa4e3-secret-volume\") pod \"collect-profiles-29522400-b79hz\" (UID: \"2180d9e0-3678-4bdd-84aa-0dba230aa4e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-b79hz" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.508846 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtwwh\" (UniqueName: \"kubernetes.io/projected/685f4abd-5760-4f83-b975-0986b69d4cc3-kube-api-access-qtwwh\") pod \"machine-config-server-7prtt\" (UID: \"685f4abd-5760-4f83-b975-0986b69d4cc3\") " pod="openshift-machine-config-operator/machine-config-server-7prtt" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.508869 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2ed3c87a-d599-4e91-92ce-377ddef564da-marketplace-operator-metrics\") 
pod \"marketplace-operator-79b997595-bk22j\" (UID: \"2ed3c87a-d599-4e91-92ce-377ddef564da\") " pod="openshift-marketplace/marketplace-operator-79b997595-bk22j" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.508883 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/51a45e2f-3728-46d0-b04c-cdec82ed7d58-trusted-ca\") pod \"console-operator-58897d9998-gcffc\" (UID: \"51a45e2f-3728-46d0-b04c-cdec82ed7d58\") " pod="openshift-console-operator/console-operator-58897d9998-gcffc" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.508923 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5l7j\" (UniqueName: \"kubernetes.io/projected/908f0c62-5b97-4c11-8b5d-6454f36295f6-kube-api-access-f5l7j\") pod \"downloads-7954f5f757-v9lrm\" (UID: \"908f0c62-5b97-4c11-8b5d-6454f36295f6\") " pod="openshift-console/downloads-7954f5f757-v9lrm" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.508937 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eab5a9da-bb14-4f97-9c54-eaa7972a047d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-76xtz\" (UID: \"eab5a9da-bb14-4f97-9c54-eaa7972a047d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-76xtz" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.508965 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jmm8\" (UniqueName: \"kubernetes.io/projected/00f1ee53-d03e-47ad-bf0e-d04589199cb5-kube-api-access-8jmm8\") pod \"multus-admission-controller-857f4d67dd-ltvxx\" (UID: \"00f1ee53-d03e-47ad-bf0e-d04589199cb5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ltvxx" Feb 17 16:05:33 crc 
kubenswrapper[4672]: I0217 16:05:33.509011 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa3f4bc7-5a08-4820-bba2-12b682296098-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-px67v\" (UID: \"fa3f4bc7-5a08-4820-bba2-12b682296098\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-px67v" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.509058 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c333aa9-e463-465a-957f-34e571dc6741-config\") pod \"service-ca-operator-777779d784-76rxw\" (UID: \"9c333aa9-e463-465a-957f-34e571dc6741\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-76rxw" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.509084 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8f99610f-8623-43fe-a352-8bb6ced6a41c-images\") pod \"machine-config-operator-74547568cd-q54wv\" (UID: \"8f99610f-8623-43fe-a352-8bb6ced6a41c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q54wv" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.509110 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wljng\" (UniqueName: \"kubernetes.io/projected/8f99610f-8623-43fe-a352-8bb6ced6a41c-kube-api-access-wljng\") pod \"machine-config-operator-74547568cd-q54wv\" (UID: \"8f99610f-8623-43fe-a352-8bb6ced6a41c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q54wv" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.509125 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvbf8\" (UniqueName: 
\"kubernetes.io/projected/51a45e2f-3728-46d0-b04c-cdec82ed7d58-kube-api-access-xvbf8\") pod \"console-operator-58897d9998-gcffc\" (UID: \"51a45e2f-3728-46d0-b04c-cdec82ed7d58\") " pod="openshift-console-operator/console-operator-58897d9998-gcffc" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.509140 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2v25\" (UniqueName: \"kubernetes.io/projected/ee33c3da-fd0b-45a5-8337-b600c2b6b11e-kube-api-access-h2v25\") pod \"olm-operator-6b444d44fb-bkxss\" (UID: \"ee33c3da-fd0b-45a5-8337-b600c2b6b11e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bkxss" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.509154 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrrfw\" (UniqueName: \"kubernetes.io/projected/fe511f9f-bc6a-4e27-9837-703d6b981fb7-kube-api-access-hrrfw\") pod \"etcd-operator-b45778765-mqrdm\" (UID: \"fe511f9f-bc6a-4e27-9837-703d6b981fb7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mqrdm" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.509171 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fe511f9f-bc6a-4e27-9837-703d6b981fb7-etcd-ca\") pod \"etcd-operator-b45778765-mqrdm\" (UID: \"fe511f9f-bc6a-4e27-9837-703d6b981fb7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mqrdm" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.509187 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/740b08b8-8626-4158-813f-1c10e317f517-tmpfs\") pod \"packageserver-d55dfcdfc-8wlwl\" (UID: \"740b08b8-8626-4158-813f-1c10e317f517\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8wlwl" Feb 17 16:05:33 crc 
kubenswrapper[4672]: I0217 16:05:33.509203 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.509216 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpbdr\" (UniqueName: \"kubernetes.io/projected/2ed3c87a-d599-4e91-92ce-377ddef564da-kube-api-access-qpbdr\") pod \"marketplace-operator-79b997595-bk22j\" (UID: \"2ed3c87a-d599-4e91-92ce-377ddef564da\") " pod="openshift-marketplace/marketplace-operator-79b997595-bk22j" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.509231 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/685f4abd-5760-4f83-b975-0986b69d4cc3-node-bootstrap-token\") pod \"machine-config-server-7prtt\" (UID: \"685f4abd-5760-4f83-b975-0986b69d4cc3\") " pod="openshift-machine-config-operator/machine-config-server-7prtt" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.509245 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe511f9f-bc6a-4e27-9837-703d6b981fb7-config\") pod \"etcd-operator-b45778765-mqrdm\" (UID: \"fe511f9f-bc6a-4e27-9837-703d6b981fb7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mqrdm" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.509261 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/00f1ee53-d03e-47ad-bf0e-d04589199cb5-webhook-certs\") pod 
\"multus-admission-controller-857f4d67dd-ltvxx\" (UID: \"00f1ee53-d03e-47ad-bf0e-d04589199cb5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ltvxx" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.509276 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c71d687-c381-4e1d-8a4a-ae72f4b00f9f-config\") pod \"kube-apiserver-operator-766d6c64bb-pxnrp\" (UID: \"7c71d687-c381-4e1d-8a4a-ae72f4b00f9f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pxnrp" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.509290 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ee33c3da-fd0b-45a5-8337-b600c2b6b11e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-bkxss\" (UID: \"ee33c3da-fd0b-45a5-8337-b600c2b6b11e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bkxss" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.509313 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/740b08b8-8626-4158-813f-1c10e317f517-webhook-cert\") pod \"packageserver-d55dfcdfc-8wlwl\" (UID: \"740b08b8-8626-4158-813f-1c10e317f517\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8wlwl" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.509363 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/4aec7350-9b5f-44c1-9a39-24a95a286233-signing-key\") pod \"service-ca-9c57cc56f-n4nj9\" (UID: \"4aec7350-9b5f-44c1-9a39-24a95a286233\") " pod="openshift-service-ca/service-ca-9c57cc56f-n4nj9" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.509378 4672 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eab5a9da-bb14-4f97-9c54-eaa7972a047d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-76xtz\" (UID: \"eab5a9da-bb14-4f97-9c54-eaa7972a047d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-76xtz" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.509392 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51a45e2f-3728-46d0-b04c-cdec82ed7d58-serving-cert\") pod \"console-operator-58897d9998-gcffc\" (UID: \"51a45e2f-3728-46d0-b04c-cdec82ed7d58\") " pod="openshift-console-operator/console-operator-58897d9998-gcffc" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.510443 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjchv\" (UniqueName: \"kubernetes.io/projected/740b08b8-8626-4158-813f-1c10e317f517-kube-api-access-zjchv\") pod \"packageserver-d55dfcdfc-8wlwl\" (UID: \"740b08b8-8626-4158-813f-1c10e317f517\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8wlwl" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.510464 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x7qd\" (UniqueName: \"kubernetes.io/projected/bbc8108a-8a2d-4f9a-af8f-335f0bf8ff6d-kube-api-access-8x7qd\") pod \"machine-config-controller-84d6567774-8ggmr\" (UID: \"bbc8108a-8a2d-4f9a-af8f-335f0bf8ff6d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8ggmr" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.510543 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/e0bef061-3829-41ea-926f-058de4404865-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-g6m8r\" (UID: \"e0bef061-3829-41ea-926f-058de4404865\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g6m8r" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.510559 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2ed3c87a-d599-4e91-92ce-377ddef564da-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bk22j\" (UID: \"2ed3c87a-d599-4e91-92ce-377ddef564da\") " pod="openshift-marketplace/marketplace-operator-79b997595-bk22j" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.510575 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnmdd\" (UniqueName: \"kubernetes.io/projected/2180d9e0-3678-4bdd-84aa-0dba230aa4e3-kube-api-access-dnmdd\") pod \"collect-profiles-29522400-b79hz\" (UID: \"2180d9e0-3678-4bdd-84aa-0dba230aa4e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-b79hz" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.510602 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c84xd\" (UniqueName: \"kubernetes.io/projected/e0bef061-3829-41ea-926f-058de4404865-kube-api-access-c84xd\") pod \"control-plane-machine-set-operator-78cbb6b69f-g6m8r\" (UID: \"e0bef061-3829-41ea-926f-058de4404865\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g6m8r" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.510616 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9318de0-6c45-4506-a667-b8e7180f7584-kube-api-access\") pod 
\"kube-controller-manager-operator-78b949d7b-dwz6v\" (UID: \"e9318de0-6c45-4506-a667-b8e7180f7584\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dwz6v" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.510634 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/fa3f4bc7-5a08-4820-bba2-12b682296098-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-px67v\" (UID: \"fa3f4bc7-5a08-4820-bba2-12b682296098\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-px67v" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.510661 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-registry-tls\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.510684 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/685f4abd-5760-4f83-b975-0986b69d4cc3-certs\") pod \"machine-config-server-7prtt\" (UID: \"685f4abd-5760-4f83-b975-0986b69d4cc3\") " pod="openshift-machine-config-operator/machine-config-server-7prtt" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.510710 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" Feb 17 16:05:33 crc kubenswrapper[4672]: 
I0217 16:05:33.510725 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/740b08b8-8626-4158-813f-1c10e317f517-apiservice-cert\") pod \"packageserver-d55dfcdfc-8wlwl\" (UID: \"740b08b8-8626-4158-813f-1c10e317f517\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8wlwl" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.510753 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c71d687-c381-4e1d-8a4a-ae72f4b00f9f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-pxnrp\" (UID: \"7c71d687-c381-4e1d-8a4a-ae72f4b00f9f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pxnrp" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.510771 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bbc8108a-8a2d-4f9a-af8f-335f0bf8ff6d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-8ggmr\" (UID: \"bbc8108a-8a2d-4f9a-af8f-335f0bf8ff6d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8ggmr" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.511064 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-trusted-ca\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.511085 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/bbc8108a-8a2d-4f9a-af8f-335f0bf8ff6d-proxy-tls\") pod \"machine-config-controller-84d6567774-8ggmr\" (UID: \"bbc8108a-8a2d-4f9a-af8f-335f0bf8ff6d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8ggmr"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.511102 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fe511f9f-bc6a-4e27-9837-703d6b981fb7-etcd-client\") pod \"etcd-operator-b45778765-mqrdm\" (UID: \"fe511f9f-bc6a-4e27-9837-703d6b981fb7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mqrdm"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.511128 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fa3f4bc7-5a08-4820-bba2-12b682296098-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-px67v\" (UID: \"fa3f4bc7-5a08-4820-bba2-12b682296098\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-px67v"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.511165 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-registry-certificates\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.511180 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/4aec7350-9b5f-44c1-9a39-24a95a286233-signing-cabundle\") pod \"service-ca-9c57cc56f-n4nj9\" (UID: \"4aec7350-9b5f-44c1-9a39-24a95a286233\") " pod="openshift-service-ca/service-ca-9c57cc56f-n4nj9"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.511195 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch5ph\" (UniqueName: \"kubernetes.io/projected/4aec7350-9b5f-44c1-9a39-24a95a286233-kube-api-access-ch5ph\") pod \"service-ca-9c57cc56f-n4nj9\" (UID: \"4aec7350-9b5f-44c1-9a39-24a95a286233\") " pod="openshift-service-ca/service-ca-9c57cc56f-n4nj9"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.511226 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7"
Feb 17 16:05:33 crc kubenswrapper[4672]: E0217 16:05:33.514141 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:34.014129426 +0000 UTC m=+142.768218148 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.515288 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f7zb5"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.516792 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6grzz"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.528480 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwl87"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.551091 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lxmvh"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.560180 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-kfzvb"]
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.600609 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-d9vk6"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.605847 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8cvrf"
Feb 17 16:05:33 crc kubenswrapper[4672]: W0217 16:05:33.608581 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fc91364_276e_4cc3_bf44_3e5dad5ad06e.slice/crio-948d09e255b48423335b5578bca724a215ab54287ce46f26aca6474522462b0d WatchSource:0}: Error finding container 948d09e255b48423335b5578bca724a215ab54287ce46f26aca6474522462b0d: Status 404 returned error can't find the container with id 948d09e255b48423335b5578bca724a215ab54287ce46f26aca6474522462b0d
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.612464 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 16:05:33 crc kubenswrapper[4672]: E0217 16:05:33.612666 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:34.112642519 +0000 UTC m=+142.866731251 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.612716 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/29ca355c-84f8-434c-a892-d0d3c6c78c00-csi-data-dir\") pod \"csi-hostpathplugin-zgwq2\" (UID: \"29ca355c-84f8-434c-a892-d0d3c6c78c00\") " pod="hostpath-provisioner/csi-hostpathplugin-zgwq2"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.612784 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c71d687-c381-4e1d-8a4a-ae72f4b00f9f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-pxnrp\" (UID: \"7c71d687-c381-4e1d-8a4a-ae72f4b00f9f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pxnrp"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.612800 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9318de0-6c45-4506-a667-b8e7180f7584-config\") pod \"kube-controller-manager-operator-78b949d7b-dwz6v\" (UID: \"e9318de0-6c45-4506-a667-b8e7180f7584\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dwz6v"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.612824 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe511f9f-bc6a-4e27-9837-703d6b981fb7-serving-cert\") pod \"etcd-operator-b45778765-mqrdm\" (UID: \"fe511f9f-bc6a-4e27-9837-703d6b981fb7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mqrdm"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.612841 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/29ca355c-84f8-434c-a892-d0d3c6c78c00-socket-dir\") pod \"csi-hostpathplugin-zgwq2\" (UID: \"29ca355c-84f8-434c-a892-d0d3c6c78c00\") " pod="hostpath-provisioner/csi-hostpathplugin-zgwq2"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.612866 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zv29\" (UniqueName: \"kubernetes.io/projected/9c333aa9-e463-465a-957f-34e571dc6741-kube-api-access-8zv29\") pod \"service-ca-operator-777779d784-76rxw\" (UID: \"9c333aa9-e463-465a-957f-34e571dc6741\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-76rxw"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.612882 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx4q8\" (UniqueName: \"kubernetes.io/projected/fa50bf9f-5749-43e8-9ddb-7f9be1c6d8ce-kube-api-access-fx4q8\") pod \"migrator-59844c95c7-hgfxd\" (UID: \"fa50bf9f-5749-43e8-9ddb-7f9be1c6d8ce\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hgfxd"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.612904 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsv79\" (UniqueName: \"kubernetes.io/projected/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-kube-api-access-gsv79\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.612920 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fe511f9f-bc6a-4e27-9837-703d6b981fb7-etcd-service-ca\") pod \"etcd-operator-b45778765-mqrdm\" (UID: \"fe511f9f-bc6a-4e27-9837-703d6b981fb7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mqrdm"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.612952 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/29ca355c-84f8-434c-a892-d0d3c6c78c00-registration-dir\") pod \"csi-hostpathplugin-zgwq2\" (UID: \"29ca355c-84f8-434c-a892-d0d3c6c78c00\") " pod="hostpath-provisioner/csi-hostpathplugin-zgwq2"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.612987 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8f99610f-8623-43fe-a352-8bb6ced6a41c-proxy-tls\") pod \"machine-config-operator-74547568cd-q54wv\" (UID: \"8f99610f-8623-43fe-a352-8bb6ced6a41c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q54wv"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613002 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2180d9e0-3678-4bdd-84aa-0dba230aa4e3-config-volume\") pod \"collect-profiles-29522400-b79hz\" (UID: \"2180d9e0-3678-4bdd-84aa-0dba230aa4e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-b79hz"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613027 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8f99610f-8623-43fe-a352-8bb6ced6a41c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-q54wv\" (UID: \"8f99610f-8623-43fe-a352-8bb6ced6a41c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q54wv"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613068 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6vt2\" (UniqueName: \"kubernetes.io/projected/fa3f4bc7-5a08-4820-bba2-12b682296098-kube-api-access-b6vt2\") pod \"cluster-image-registry-operator-dc59b4c8b-px67v\" (UID: \"fa3f4bc7-5a08-4820-bba2-12b682296098\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-px67v"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613089 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpj2t\" (UniqueName: \"kubernetes.io/projected/eab5a9da-bb14-4f97-9c54-eaa7972a047d-kube-api-access-rpj2t\") pod \"kube-storage-version-migrator-operator-b67b599dd-76xtz\" (UID: \"eab5a9da-bb14-4f97-9c54-eaa7972a047d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-76xtz"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613107 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51a45e2f-3728-46d0-b04c-cdec82ed7d58-config\") pod \"console-operator-58897d9998-gcffc\" (UID: \"51a45e2f-3728-46d0-b04c-cdec82ed7d58\") " pod="openshift-console-operator/console-operator-58897d9998-gcffc"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613122 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ee33c3da-fd0b-45a5-8337-b600c2b6b11e-srv-cert\") pod \"olm-operator-6b444d44fb-bkxss\" (UID: \"ee33c3da-fd0b-45a5-8337-b600c2b6b11e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bkxss"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613137 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-bound-sa-token\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613154 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c333aa9-e463-465a-957f-34e571dc6741-serving-cert\") pod \"service-ca-operator-777779d784-76rxw\" (UID: \"9c333aa9-e463-465a-957f-34e571dc6741\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-76rxw"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613169 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9318de0-6c45-4506-a667-b8e7180f7584-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-dwz6v\" (UID: \"e9318de0-6c45-4506-a667-b8e7180f7584\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dwz6v"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613197 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2180d9e0-3678-4bdd-84aa-0dba230aa4e3-secret-volume\") pod \"collect-profiles-29522400-b79hz\" (UID: \"2180d9e0-3678-4bdd-84aa-0dba230aa4e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-b79hz"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613214 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvtgt\" (UniqueName: \"kubernetes.io/projected/0f0958c0-4b04-4d8e-9bbb-8dd838c9b966-kube-api-access-vvtgt\") pod \"dns-default-s2n6p\" (UID: \"0f0958c0-4b04-4d8e-9bbb-8dd838c9b966\") " pod="openshift-dns/dns-default-s2n6p"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613243 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtwwh\" (UniqueName: \"kubernetes.io/projected/685f4abd-5760-4f83-b975-0986b69d4cc3-kube-api-access-qtwwh\") pod \"machine-config-server-7prtt\" (UID: \"685f4abd-5760-4f83-b975-0986b69d4cc3\") " pod="openshift-machine-config-operator/machine-config-server-7prtt"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613260 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2ed3c87a-d599-4e91-92ce-377ddef564da-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bk22j\" (UID: \"2ed3c87a-d599-4e91-92ce-377ddef564da\") " pod="openshift-marketplace/marketplace-operator-79b997595-bk22j"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613276 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/51a45e2f-3728-46d0-b04c-cdec82ed7d58-trusted-ca\") pod \"console-operator-58897d9998-gcffc\" (UID: \"51a45e2f-3728-46d0-b04c-cdec82ed7d58\") " pod="openshift-console-operator/console-operator-58897d9998-gcffc"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613297 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5l7j\" (UniqueName: \"kubernetes.io/projected/908f0c62-5b97-4c11-8b5d-6454f36295f6-kube-api-access-f5l7j\") pod \"downloads-7954f5f757-v9lrm\" (UID: \"908f0c62-5b97-4c11-8b5d-6454f36295f6\") " pod="openshift-console/downloads-7954f5f757-v9lrm"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613311 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eab5a9da-bb14-4f97-9c54-eaa7972a047d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-76xtz\" (UID: \"eab5a9da-bb14-4f97-9c54-eaa7972a047d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-76xtz"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613326 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jmm8\" (UniqueName: \"kubernetes.io/projected/00f1ee53-d03e-47ad-bf0e-d04589199cb5-kube-api-access-8jmm8\") pod \"multus-admission-controller-857f4d67dd-ltvxx\" (UID: \"00f1ee53-d03e-47ad-bf0e-d04589199cb5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ltvxx"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613372 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa3f4bc7-5a08-4820-bba2-12b682296098-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-px67v\" (UID: \"fa3f4bc7-5a08-4820-bba2-12b682296098\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-px67v"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613391 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c333aa9-e463-465a-957f-34e571dc6741-config\") pod \"service-ca-operator-777779d784-76rxw\" (UID: \"9c333aa9-e463-465a-957f-34e571dc6741\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-76rxw"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613409 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8f99610f-8623-43fe-a352-8bb6ced6a41c-images\") pod \"machine-config-operator-74547568cd-q54wv\" (UID: \"8f99610f-8623-43fe-a352-8bb6ced6a41c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q54wv"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613425 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0f0958c0-4b04-4d8e-9bbb-8dd838c9b966-metrics-tls\") pod \"dns-default-s2n6p\" (UID: \"0f0958c0-4b04-4d8e-9bbb-8dd838c9b966\") " pod="openshift-dns/dns-default-s2n6p"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613455 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wljng\" (UniqueName: \"kubernetes.io/projected/8f99610f-8623-43fe-a352-8bb6ced6a41c-kube-api-access-wljng\") pod \"machine-config-operator-74547568cd-q54wv\" (UID: \"8f99610f-8623-43fe-a352-8bb6ced6a41c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q54wv"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613470 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvbf8\" (UniqueName: \"kubernetes.io/projected/51a45e2f-3728-46d0-b04c-cdec82ed7d58-kube-api-access-xvbf8\") pod \"console-operator-58897d9998-gcffc\" (UID: \"51a45e2f-3728-46d0-b04c-cdec82ed7d58\") " pod="openshift-console-operator/console-operator-58897d9998-gcffc"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613485 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2v25\" (UniqueName: \"kubernetes.io/projected/ee33c3da-fd0b-45a5-8337-b600c2b6b11e-kube-api-access-h2v25\") pod \"olm-operator-6b444d44fb-bkxss\" (UID: \"ee33c3da-fd0b-45a5-8337-b600c2b6b11e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bkxss"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613522 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrrfw\" (UniqueName: \"kubernetes.io/projected/fe511f9f-bc6a-4e27-9837-703d6b981fb7-kube-api-access-hrrfw\") pod \"etcd-operator-b45778765-mqrdm\" (UID: \"fe511f9f-bc6a-4e27-9837-703d6b981fb7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mqrdm"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613563 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fe511f9f-bc6a-4e27-9837-703d6b981fb7-etcd-ca\") pod \"etcd-operator-b45778765-mqrdm\" (UID: \"fe511f9f-bc6a-4e27-9837-703d6b981fb7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mqrdm"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613584 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/29ca355c-84f8-434c-a892-d0d3c6c78c00-plugins-dir\") pod \"csi-hostpathplugin-zgwq2\" (UID: \"29ca355c-84f8-434c-a892-d0d3c6c78c00\") " pod="hostpath-provisioner/csi-hostpathplugin-zgwq2"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613607 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/740b08b8-8626-4158-813f-1c10e317f517-tmpfs\") pod \"packageserver-d55dfcdfc-8wlwl\" (UID: \"740b08b8-8626-4158-813f-1c10e317f517\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8wlwl"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613628 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613649 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpbdr\" (UniqueName: \"kubernetes.io/projected/2ed3c87a-d599-4e91-92ce-377ddef564da-kube-api-access-qpbdr\") pod \"marketplace-operator-79b997595-bk22j\" (UID: \"2ed3c87a-d599-4e91-92ce-377ddef564da\") " pod="openshift-marketplace/marketplace-operator-79b997595-bk22j"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613675 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/685f4abd-5760-4f83-b975-0986b69d4cc3-node-bootstrap-token\") pod \"machine-config-server-7prtt\" (UID: \"685f4abd-5760-4f83-b975-0986b69d4cc3\") " pod="openshift-machine-config-operator/machine-config-server-7prtt"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613689 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe511f9f-bc6a-4e27-9837-703d6b981fb7-config\") pod \"etcd-operator-b45778765-mqrdm\" (UID: \"fe511f9f-bc6a-4e27-9837-703d6b981fb7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mqrdm"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613705 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c71d687-c381-4e1d-8a4a-ae72f4b00f9f-config\") pod \"kube-apiserver-operator-766d6c64bb-pxnrp\" (UID: \"7c71d687-c381-4e1d-8a4a-ae72f4b00f9f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pxnrp"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613720 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ee33c3da-fd0b-45a5-8337-b600c2b6b11e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-bkxss\" (UID: \"ee33c3da-fd0b-45a5-8337-b600c2b6b11e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bkxss"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613746 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/00f1ee53-d03e-47ad-bf0e-d04589199cb5-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-ltvxx\" (UID: \"00f1ee53-d03e-47ad-bf0e-d04589199cb5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ltvxx"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613772 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/740b08b8-8626-4158-813f-1c10e317f517-webhook-cert\") pod \"packageserver-d55dfcdfc-8wlwl\" (UID: \"740b08b8-8626-4158-813f-1c10e317f517\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8wlwl"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613839 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/4aec7350-9b5f-44c1-9a39-24a95a286233-signing-key\") pod \"service-ca-9c57cc56f-n4nj9\" (UID: \"4aec7350-9b5f-44c1-9a39-24a95a286233\") " pod="openshift-service-ca/service-ca-9c57cc56f-n4nj9"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613876 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eab5a9da-bb14-4f97-9c54-eaa7972a047d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-76xtz\" (UID: \"eab5a9da-bb14-4f97-9c54-eaa7972a047d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-76xtz"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613894 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51a45e2f-3728-46d0-b04c-cdec82ed7d58-serving-cert\") pod \"console-operator-58897d9998-gcffc\" (UID: \"51a45e2f-3728-46d0-b04c-cdec82ed7d58\") " pod="openshift-console-operator/console-operator-58897d9998-gcffc"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613912 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x7qd\" (UniqueName: \"kubernetes.io/projected/bbc8108a-8a2d-4f9a-af8f-335f0bf8ff6d-kube-api-access-8x7qd\") pod \"machine-config-controller-84d6567774-8ggmr\" (UID: \"bbc8108a-8a2d-4f9a-af8f-335f0bf8ff6d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8ggmr"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613930 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjchv\" (UniqueName: \"kubernetes.io/projected/740b08b8-8626-4158-813f-1c10e317f517-kube-api-access-zjchv\") pod \"packageserver-d55dfcdfc-8wlwl\" (UID: \"740b08b8-8626-4158-813f-1c10e317f517\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8wlwl"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613973 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2ed3c87a-d599-4e91-92ce-377ddef564da-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bk22j\" (UID: \"2ed3c87a-d599-4e91-92ce-377ddef564da\") " pod="openshift-marketplace/marketplace-operator-79b997595-bk22j"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.613988 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnmdd\" (UniqueName: \"kubernetes.io/projected/2180d9e0-3678-4bdd-84aa-0dba230aa4e3-kube-api-access-dnmdd\") pod \"collect-profiles-29522400-b79hz\" (UID: \"2180d9e0-3678-4bdd-84aa-0dba230aa4e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-b79hz"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.614006 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e0bef061-3829-41ea-926f-058de4404865-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-g6m8r\" (UID: \"e0bef061-3829-41ea-926f-058de4404865\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g6m8r"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.614025 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f0958c0-4b04-4d8e-9bbb-8dd838c9b966-config-volume\") pod \"dns-default-s2n6p\" (UID: \"0f0958c0-4b04-4d8e-9bbb-8dd838c9b966\") " pod="openshift-dns/dns-default-s2n6p"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.614043 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c84xd\" (UniqueName: \"kubernetes.io/projected/e0bef061-3829-41ea-926f-058de4404865-kube-api-access-c84xd\") pod \"control-plane-machine-set-operator-78cbb6b69f-g6m8r\" (UID: \"e0bef061-3829-41ea-926f-058de4404865\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g6m8r"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.614060 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9318de0-6c45-4506-a667-b8e7180f7584-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-dwz6v\" (UID: \"e9318de0-6c45-4506-a667-b8e7180f7584\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dwz6v"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.614079 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/fa3f4bc7-5a08-4820-bba2-12b682296098-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-px67v\" (UID: \"fa3f4bc7-5a08-4820-bba2-12b682296098\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-px67v"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.614093 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/685f4abd-5760-4f83-b975-0986b69d4cc3-certs\") pod \"machine-config-server-7prtt\" (UID: \"685f4abd-5760-4f83-b975-0986b69d4cc3\") " pod="openshift-machine-config-operator/machine-config-server-7prtt"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.614109 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/94dc47ef-d37b-46be-8696-378acb500013-cert\") pod \"ingress-canary-pp6hc\" (UID: \"94dc47ef-d37b-46be-8696-378acb500013\") " pod="openshift-ingress-canary/ingress-canary-pp6hc"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.614124 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/29ca355c-84f8-434c-a892-d0d3c6c78c00-mountpoint-dir\") pod \"csi-hostpathplugin-zgwq2\" (UID: \"29ca355c-84f8-434c-a892-d0d3c6c78c00\") " pod="hostpath-provisioner/csi-hostpathplugin-zgwq2"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.614141 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-registry-tls\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.614160 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.614176 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/740b08b8-8626-4158-813f-1c10e317f517-apiservice-cert\") pod \"packageserver-d55dfcdfc-8wlwl\" (UID: \"740b08b8-8626-4158-813f-1c10e317f517\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8wlwl"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.614206 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c71d687-c381-4e1d-8a4a-ae72f4b00f9f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-pxnrp\" (UID: \"7c71d687-c381-4e1d-8a4a-ae72f4b00f9f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pxnrp"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.614221 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bbc8108a-8a2d-4f9a-af8f-335f0bf8ff6d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-8ggmr\" (UID: \"bbc8108a-8a2d-4f9a-af8f-335f0bf8ff6d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8ggmr"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.614245 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4p4f\" (UniqueName: \"kubernetes.io/projected/29ca355c-84f8-434c-a892-d0d3c6c78c00-kube-api-access-x4p4f\") pod \"csi-hostpathplugin-zgwq2\" (UID: \"29ca355c-84f8-434c-a892-d0d3c6c78c00\") " pod="hostpath-provisioner/csi-hostpathplugin-zgwq2"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.614274 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-trusted-ca\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.614290 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bbc8108a-8a2d-4f9a-af8f-335f0bf8ff6d-proxy-tls\") pod \"machine-config-controller-84d6567774-8ggmr\" (UID: \"bbc8108a-8a2d-4f9a-af8f-335f0bf8ff6d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8ggmr"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.614307 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fe511f9f-bc6a-4e27-9837-703d6b981fb7-etcd-client\") pod \"etcd-operator-b45778765-mqrdm\" (UID: \"fe511f9f-bc6a-4e27-9837-703d6b981fb7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mqrdm"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.614323 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fa3f4bc7-5a08-4820-bba2-12b682296098-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-px67v\" (UID: \"fa3f4bc7-5a08-4820-bba2-12b682296098\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-px67v"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.614338 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wbz7\" (UniqueName: \"kubernetes.io/projected/94dc47ef-d37b-46be-8696-378acb500013-kube-api-access-9wbz7\") pod \"ingress-canary-pp6hc\" (UID: \"94dc47ef-d37b-46be-8696-378acb500013\") " pod="openshift-ingress-canary/ingress-canary-pp6hc"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.614354 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch5ph\" (UniqueName:
\"kubernetes.io/projected/4aec7350-9b5f-44c1-9a39-24a95a286233-kube-api-access-ch5ph\") pod \"service-ca-9c57cc56f-n4nj9\" (UID: \"4aec7350-9b5f-44c1-9a39-24a95a286233\") " pod="openshift-service-ca/service-ca-9c57cc56f-n4nj9" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.614372 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-registry-certificates\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.614387 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/4aec7350-9b5f-44c1-9a39-24a95a286233-signing-cabundle\") pod \"service-ca-9c57cc56f-n4nj9\" (UID: \"4aec7350-9b5f-44c1-9a39-24a95a286233\") " pod="openshift-service-ca/service-ca-9c57cc56f-n4nj9" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.614416 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" Feb 17 16:05:33 crc kubenswrapper[4672]: E0217 16:05:33.614689 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:34.114682633 +0000 UTC m=+142.868771365 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.616345 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9318de0-6c45-4506-a667-b8e7180f7584-config\") pod \"kube-controller-manager-operator-78b949d7b-dwz6v\" (UID: \"e9318de0-6c45-4506-a667-b8e7180f7584\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dwz6v" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.621724 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe511f9f-bc6a-4e27-9837-703d6b981fb7-config\") pod \"etcd-operator-b45778765-mqrdm\" (UID: \"fe511f9f-bc6a-4e27-9837-703d6b981fb7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mqrdm" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.621540 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fe511f9f-bc6a-4e27-9837-703d6b981fb7-etcd-service-ca\") pod \"etcd-operator-b45778765-mqrdm\" (UID: \"fe511f9f-bc6a-4e27-9837-703d6b981fb7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mqrdm" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.623327 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51a45e2f-3728-46d0-b04c-cdec82ed7d58-config\") pod \"console-operator-58897d9998-gcffc\" (UID: \"51a45e2f-3728-46d0-b04c-cdec82ed7d58\") " 
pod="openshift-console-operator/console-operator-58897d9998-gcffc" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.623966 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe511f9f-bc6a-4e27-9837-703d6b981fb7-serving-cert\") pod \"etcd-operator-b45778765-mqrdm\" (UID: \"fe511f9f-bc6a-4e27-9837-703d6b981fb7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mqrdm" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.625048 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2180d9e0-3678-4bdd-84aa-0dba230aa4e3-config-volume\") pod \"collect-profiles-29522400-b79hz\" (UID: \"2180d9e0-3678-4bdd-84aa-0dba230aa4e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-b79hz" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.625644 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/4aec7350-9b5f-44c1-9a39-24a95a286233-signing-cabundle\") pod \"service-ca-9c57cc56f-n4nj9\" (UID: \"4aec7350-9b5f-44c1-9a39-24a95a286233\") " pod="openshift-service-ca/service-ca-9c57cc56f-n4nj9" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.627316 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bbc8108a-8a2d-4f9a-af8f-335f0bf8ff6d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-8ggmr\" (UID: \"bbc8108a-8a2d-4f9a-af8f-335f0bf8ff6d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8ggmr" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.630121 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-trusted-ca\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: 
\"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.630906 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ee33c3da-fd0b-45a5-8337-b600c2b6b11e-srv-cert\") pod \"olm-operator-6b444d44fb-bkxss\" (UID: \"ee33c3da-fd0b-45a5-8337-b600c2b6b11e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bkxss" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.631099 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/4aec7350-9b5f-44c1-9a39-24a95a286233-signing-key\") pod \"service-ca-9c57cc56f-n4nj9\" (UID: \"4aec7350-9b5f-44c1-9a39-24a95a286233\") " pod="openshift-service-ca/service-ca-9c57cc56f-n4nj9" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.631324 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c71d687-c381-4e1d-8a4a-ae72f4b00f9f-config\") pod \"kube-apiserver-operator-766d6c64bb-pxnrp\" (UID: \"7c71d687-c381-4e1d-8a4a-ae72f4b00f9f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pxnrp" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.631983 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bbc8108a-8a2d-4f9a-af8f-335f0bf8ff6d-proxy-tls\") pod \"machine-config-controller-84d6567774-8ggmr\" (UID: \"bbc8108a-8a2d-4f9a-af8f-335f0bf8ff6d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8ggmr" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.632098 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-ca-trust-extracted\") pod 
\"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.632353 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/740b08b8-8626-4158-813f-1c10e317f517-tmpfs\") pod \"packageserver-d55dfcdfc-8wlwl\" (UID: \"740b08b8-8626-4158-813f-1c10e317f517\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8wlwl" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.632445 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8f99610f-8623-43fe-a352-8bb6ced6a41c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-q54wv\" (UID: \"8f99610f-8623-43fe-a352-8bb6ced6a41c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q54wv" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.633239 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fe511f9f-bc6a-4e27-9837-703d6b981fb7-etcd-ca\") pod \"etcd-operator-b45778765-mqrdm\" (UID: \"fe511f9f-bc6a-4e27-9837-703d6b981fb7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mqrdm" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.633349 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c333aa9-e463-465a-957f-34e571dc6741-config\") pod \"service-ca-operator-777779d784-76rxw\" (UID: \"9c333aa9-e463-465a-957f-34e571dc6741\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-76rxw" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.633653 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fe511f9f-bc6a-4e27-9837-703d6b981fb7-etcd-client\") 
pod \"etcd-operator-b45778765-mqrdm\" (UID: \"fe511f9f-bc6a-4e27-9837-703d6b981fb7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mqrdm" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.634410 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8f99610f-8623-43fe-a352-8bb6ced6a41c-images\") pod \"machine-config-operator-74547568cd-q54wv\" (UID: \"8f99610f-8623-43fe-a352-8bb6ced6a41c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q54wv" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.633261 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eab5a9da-bb14-4f97-9c54-eaa7972a047d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-76xtz\" (UID: \"eab5a9da-bb14-4f97-9c54-eaa7972a047d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-76xtz" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.634783 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-registry-certificates\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.634860 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/00f1ee53-d03e-47ad-bf0e-d04589199cb5-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-ltvxx\" (UID: \"00f1ee53-d03e-47ad-bf0e-d04589199cb5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ltvxx" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.640137 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/fa3f4bc7-5a08-4820-bba2-12b682296098-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-px67v\" (UID: \"fa3f4bc7-5a08-4820-bba2-12b682296098\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-px67v" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.641130 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-registry-tls\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.641807 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2ed3c87a-d599-4e91-92ce-377ddef564da-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bk22j\" (UID: \"2ed3c87a-d599-4e91-92ce-377ddef564da\") " pod="openshift-marketplace/marketplace-operator-79b997595-bk22j" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.642036 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c71d687-c381-4e1d-8a4a-ae72f4b00f9f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-pxnrp\" (UID: \"7c71d687-c381-4e1d-8a4a-ae72f4b00f9f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pxnrp" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.642663 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2180d9e0-3678-4bdd-84aa-0dba230aa4e3-secret-volume\") pod \"collect-profiles-29522400-b79hz\" (UID: \"2180d9e0-3678-4bdd-84aa-0dba230aa4e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-b79hz" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.642704 4672 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/685f4abd-5760-4f83-b975-0986b69d4cc3-certs\") pod \"machine-config-server-7prtt\" (UID: \"685f4abd-5760-4f83-b975-0986b69d4cc3\") " pod="openshift-machine-config-operator/machine-config-server-7prtt" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.643129 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51a45e2f-3728-46d0-b04c-cdec82ed7d58-serving-cert\") pod \"console-operator-58897d9998-gcffc\" (UID: \"51a45e2f-3728-46d0-b04c-cdec82ed7d58\") " pod="openshift-console-operator/console-operator-58897d9998-gcffc" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.643755 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/685f4abd-5760-4f83-b975-0986b69d4cc3-node-bootstrap-token\") pod \"machine-config-server-7prtt\" (UID: \"685f4abd-5760-4f83-b975-0986b69d4cc3\") " pod="openshift-machine-config-operator/machine-config-server-7prtt" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.644403 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/740b08b8-8626-4158-813f-1c10e317f517-apiservice-cert\") pod \"packageserver-d55dfcdfc-8wlwl\" (UID: \"740b08b8-8626-4158-813f-1c10e317f517\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8wlwl" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.644883 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8f99610f-8623-43fe-a352-8bb6ced6a41c-proxy-tls\") pod \"machine-config-operator-74547568cd-q54wv\" (UID: \"8f99610f-8623-43fe-a352-8bb6ced6a41c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q54wv" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 
16:05:33.645718 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/51a45e2f-3728-46d0-b04c-cdec82ed7d58-trusted-ca\") pod \"console-operator-58897d9998-gcffc\" (UID: \"51a45e2f-3728-46d0-b04c-cdec82ed7d58\") " pod="openshift-console-operator/console-operator-58897d9998-gcffc" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.648011 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ee33c3da-fd0b-45a5-8337-b600c2b6b11e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-bkxss\" (UID: \"ee33c3da-fd0b-45a5-8337-b600c2b6b11e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bkxss" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.649480 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/740b08b8-8626-4158-813f-1c10e317f517-webhook-cert\") pod \"packageserver-d55dfcdfc-8wlwl\" (UID: \"740b08b8-8626-4158-813f-1c10e317f517\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8wlwl" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.653619 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e0bef061-3829-41ea-926f-058de4404865-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-g6m8r\" (UID: \"e0bef061-3829-41ea-926f-058de4404865\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g6m8r" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.654954 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/fa3f4bc7-5a08-4820-bba2-12b682296098-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-px67v\" (UID: 
\"fa3f4bc7-5a08-4820-bba2-12b682296098\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-px67v" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.657046 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.665123 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eab5a9da-bb14-4f97-9c54-eaa7972a047d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-76xtz\" (UID: \"eab5a9da-bb14-4f97-9c54-eaa7972a047d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-76xtz" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.669854 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c333aa9-e463-465a-957f-34e571dc6741-serving-cert\") pod \"service-ca-operator-777779d784-76rxw\" (UID: \"9c333aa9-e463-465a-957f-34e571dc6741\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-76rxw" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.669910 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2ed3c87a-d599-4e91-92ce-377ddef564da-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bk22j\" (UID: \"2ed3c87a-d599-4e91-92ce-377ddef564da\") " pod="openshift-marketplace/marketplace-operator-79b997595-bk22j" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.674816 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c71d687-c381-4e1d-8a4a-ae72f4b00f9f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-pxnrp\" (UID: \"7c71d687-c381-4e1d-8a4a-ae72f4b00f9f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pxnrp" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.679503 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9318de0-6c45-4506-a667-b8e7180f7584-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-dwz6v\" (UID: \"e9318de0-6c45-4506-a667-b8e7180f7584\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dwz6v" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.690444 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zv29\" (UniqueName: \"kubernetes.io/projected/9c333aa9-e463-465a-957f-34e571dc6741-kube-api-access-8zv29\") pod \"service-ca-operator-777779d784-76rxw\" (UID: \"9c333aa9-e463-465a-957f-34e571dc6741\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-76rxw" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.702688 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fds9q"] Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.713237 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx4q8\" (UniqueName: \"kubernetes.io/projected/fa50bf9f-5749-43e8-9ddb-7f9be1c6d8ce-kube-api-access-fx4q8\") pod \"migrator-59844c95c7-hgfxd\" (UID: \"fa50bf9f-5749-43e8-9ddb-7f9be1c6d8ce\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hgfxd" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.713584 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hgfxd" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.715097 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.715387 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f0958c0-4b04-4d8e-9bbb-8dd838c9b966-config-volume\") pod \"dns-default-s2n6p\" (UID: \"0f0958c0-4b04-4d8e-9bbb-8dd838c9b966\") " pod="openshift-dns/dns-default-s2n6p" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.715482 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/94dc47ef-d37b-46be-8696-378acb500013-cert\") pod \"ingress-canary-pp6hc\" (UID: \"94dc47ef-d37b-46be-8696-378acb500013\") " pod="openshift-ingress-canary/ingress-canary-pp6hc" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.715572 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/29ca355c-84f8-434c-a892-d0d3c6c78c00-mountpoint-dir\") pod \"csi-hostpathplugin-zgwq2\" (UID: \"29ca355c-84f8-434c-a892-d0d3c6c78c00\") " pod="hostpath-provisioner/csi-hostpathplugin-zgwq2" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.715658 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4p4f\" (UniqueName: \"kubernetes.io/projected/29ca355c-84f8-434c-a892-d0d3c6c78c00-kube-api-access-x4p4f\") pod \"csi-hostpathplugin-zgwq2\" (UID: \"29ca355c-84f8-434c-a892-d0d3c6c78c00\") " 
pod="hostpath-provisioner/csi-hostpathplugin-zgwq2" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.715732 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wbz7\" (UniqueName: \"kubernetes.io/projected/94dc47ef-d37b-46be-8696-378acb500013-kube-api-access-9wbz7\") pod \"ingress-canary-pp6hc\" (UID: \"94dc47ef-d37b-46be-8696-378acb500013\") " pod="openshift-ingress-canary/ingress-canary-pp6hc" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.715872 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/29ca355c-84f8-434c-a892-d0d3c6c78c00-csi-data-dir\") pod \"csi-hostpathplugin-zgwq2\" (UID: \"29ca355c-84f8-434c-a892-d0d3c6c78c00\") " pod="hostpath-provisioner/csi-hostpathplugin-zgwq2" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.715983 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/29ca355c-84f8-434c-a892-d0d3c6c78c00-socket-dir\") pod \"csi-hostpathplugin-zgwq2\" (UID: \"29ca355c-84f8-434c-a892-d0d3c6c78c00\") " pod="hostpath-provisioner/csi-hostpathplugin-zgwq2" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.716127 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/29ca355c-84f8-434c-a892-d0d3c6c78c00-registration-dir\") pod \"csi-hostpathplugin-zgwq2\" (UID: \"29ca355c-84f8-434c-a892-d0d3c6c78c00\") " pod="hostpath-provisioner/csi-hostpathplugin-zgwq2" Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.716225 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtgt\" (UniqueName: \"kubernetes.io/projected/0f0958c0-4b04-4d8e-9bbb-8dd838c9b966-kube-api-access-vvtgt\") pod \"dns-default-s2n6p\" (UID: \"0f0958c0-4b04-4d8e-9bbb-8dd838c9b966\") " 
pod="openshift-dns/dns-default-s2n6p"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.716324 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0f0958c0-4b04-4d8e-9bbb-8dd838c9b966-metrics-tls\") pod \"dns-default-s2n6p\" (UID: \"0f0958c0-4b04-4d8e-9bbb-8dd838c9b966\") " pod="openshift-dns/dns-default-s2n6p"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.716416 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/29ca355c-84f8-434c-a892-d0d3c6c78c00-plugins-dir\") pod \"csi-hostpathplugin-zgwq2\" (UID: \"29ca355c-84f8-434c-a892-d0d3c6c78c00\") " pod="hostpath-provisioner/csi-hostpathplugin-zgwq2"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.718002 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/29ca355c-84f8-434c-a892-d0d3c6c78c00-plugins-dir\") pod \"csi-hostpathplugin-zgwq2\" (UID: \"29ca355c-84f8-434c-a892-d0d3c6c78c00\") " pod="hostpath-provisioner/csi-hostpathplugin-zgwq2"
Feb 17 16:05:33 crc kubenswrapper[4672]: E0217 16:05:33.718147 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:34.218134448 +0000 UTC m=+142.972223170 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.718576 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/29ca355c-84f8-434c-a892-d0d3c6c78c00-csi-data-dir\") pod \"csi-hostpathplugin-zgwq2\" (UID: \"29ca355c-84f8-434c-a892-d0d3c6c78c00\") " pod="hostpath-provisioner/csi-hostpathplugin-zgwq2"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.719076 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/29ca355c-84f8-434c-a892-d0d3c6c78c00-mountpoint-dir\") pod \"csi-hostpathplugin-zgwq2\" (UID: \"29ca355c-84f8-434c-a892-d0d3c6c78c00\") " pod="hostpath-provisioner/csi-hostpathplugin-zgwq2"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.719489 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f0958c0-4b04-4d8e-9bbb-8dd838c9b966-config-volume\") pod \"dns-default-s2n6p\" (UID: \"0f0958c0-4b04-4d8e-9bbb-8dd838c9b966\") " pod="openshift-dns/dns-default-s2n6p"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.719739 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/29ca355c-84f8-434c-a892-d0d3c6c78c00-socket-dir\") pod \"csi-hostpathplugin-zgwq2\" (UID: \"29ca355c-84f8-434c-a892-d0d3c6c78c00\") " pod="hostpath-provisioner/csi-hostpathplugin-zgwq2"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.720501 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/29ca355c-84f8-434c-a892-d0d3c6c78c00-registration-dir\") pod \"csi-hostpathplugin-zgwq2\" (UID: \"29ca355c-84f8-434c-a892-d0d3c6c78c00\") " pod="hostpath-provisioner/csi-hostpathplugin-zgwq2"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.735656 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0f0958c0-4b04-4d8e-9bbb-8dd838c9b966-metrics-tls\") pod \"dns-default-s2n6p\" (UID: \"0f0958c0-4b04-4d8e-9bbb-8dd838c9b966\") " pod="openshift-dns/dns-default-s2n6p"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.738338 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsv79\" (UniqueName: \"kubernetes.io/projected/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-kube-api-access-gsv79\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.739252 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/94dc47ef-d37b-46be-8696-378acb500013-cert\") pod \"ingress-canary-pp6hc\" (UID: \"94dc47ef-d37b-46be-8696-378acb500013\") " pod="openshift-ingress-canary/ingress-canary-pp6hc"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.756021 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x7qd\" (UniqueName: \"kubernetes.io/projected/bbc8108a-8a2d-4f9a-af8f-335f0bf8ff6d-kube-api-access-8x7qd\") pod \"machine-config-controller-84d6567774-8ggmr\" (UID: \"bbc8108a-8a2d-4f9a-af8f-335f0bf8ff6d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8ggmr"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.757965 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-wtm9c"]
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.760266 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-b62wz"]
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.769056 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-lzwwl"]
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.769255 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6vt2\" (UniqueName: \"kubernetes.io/projected/fa3f4bc7-5a08-4820-bba2-12b682296098-kube-api-access-b6vt2\") pod \"cluster-image-registry-operator-dc59b4c8b-px67v\" (UID: \"fa3f4bc7-5a08-4820-bba2-12b682296098\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-px67v"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.781369 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-76rxw"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.782384 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnmdd\" (UniqueName: \"kubernetes.io/projected/2180d9e0-3678-4bdd-84aa-0dba230aa4e3-kube-api-access-dnmdd\") pod \"collect-profiles-29522400-b79hz\" (UID: \"2180d9e0-3678-4bdd-84aa-0dba230aa4e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-b79hz"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.799630 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fa3f4bc7-5a08-4820-bba2-12b682296098-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-px67v\" (UID: \"fa3f4bc7-5a08-4820-bba2-12b682296098\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-px67v"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.805013 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-b79hz"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.821016 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zfmgn"]
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.822689 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7"
Feb 17 16:05:33 crc kubenswrapper[4672]: E0217 16:05:33.823128 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:34.323115782 +0000 UTC m=+143.077204514 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.825735 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c84xd\" (UniqueName: \"kubernetes.io/projected/e0bef061-3829-41ea-926f-058de4404865-kube-api-access-c84xd\") pod \"control-plane-machine-set-operator-78cbb6b69f-g6m8r\" (UID: \"e0bef061-3829-41ea-926f-058de4404865\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g6m8r"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.841794 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jmm8\" (UniqueName: \"kubernetes.io/projected/00f1ee53-d03e-47ad-bf0e-d04589199cb5-kube-api-access-8jmm8\") pod \"multus-admission-controller-857f4d67dd-ltvxx\" (UID: \"00f1ee53-d03e-47ad-bf0e-d04589199cb5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ltvxx"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.868374 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9318de0-6c45-4506-a667-b8e7180f7584-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-dwz6v\" (UID: \"e9318de0-6c45-4506-a667-b8e7180f7584\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dwz6v"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.891684 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch5ph\" (UniqueName: \"kubernetes.io/projected/4aec7350-9b5f-44c1-9a39-24a95a286233-kube-api-access-ch5ph\") pod \"service-ca-9c57cc56f-n4nj9\" (UID: \"4aec7350-9b5f-44c1-9a39-24a95a286233\") " pod="openshift-service-ca/service-ca-9c57cc56f-n4nj9"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.893557 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bn2m6" event={"ID":"06bbdf78-c2ef-42f1-8f0d-952f07a4b678","Type":"ContainerStarted","Data":"8afa06902ec64351d3d517e7558e563f1d18b3f20a4708221722c25d677498dc"}
Feb 17 16:05:33 crc kubenswrapper[4672]: W0217 16:05:33.894926 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod506b8374_2f07_427a_bf3b_44b1f6f022b5.slice/crio-534b4fe49c26c5c7bc62632b1bf1de2e0c0ee11dbca8b5e29ee15f6db0b754f1 WatchSource:0}: Error finding container 534b4fe49c26c5c7bc62632b1bf1de2e0c0ee11dbca8b5e29ee15f6db0b754f1: Status 404 returned error can't find the container with id 534b4fe49c26c5c7bc62632b1bf1de2e0c0ee11dbca8b5e29ee15f6db0b754f1
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.895565 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-vndpv" event={"ID":"1d98488b-d521-4207-a7b8-23b37cb1ef98","Type":"ContainerStarted","Data":"8b40b1fa0f7d85dfd435d28dd621b27fe6b237ca0815e8efed051739abb8a113"}
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.895598 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-vndpv" event={"ID":"1d98488b-d521-4207-a7b8-23b37cb1ef98","Type":"ContainerStarted","Data":"c637b680b66277da22a8d4ffaad738fca2df75d1bd10d79f3cf2327eb4216ab5"}
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.897052 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-lzwwl" event={"ID":"eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba","Type":"ContainerStarted","Data":"7177f91687a6f8da846a998434c042552ff5a4ab45f8d52af9d4a5a00a9da7be"}
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.898612 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpbdr\" (UniqueName: \"kubernetes.io/projected/2ed3c87a-d599-4e91-92ce-377ddef564da-kube-api-access-qpbdr\") pod \"marketplace-operator-79b997595-bk22j\" (UID: \"2ed3c87a-d599-4e91-92ce-377ddef564da\") " pod="openshift-marketplace/marketplace-operator-79b997595-bk22j"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.899739 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-wtm9c" event={"ID":"aae518f7-37fe-4bd1-9c5b-ba5186684ebd","Type":"ContainerStarted","Data":"ca2cd0181688151c4f2b5e74b32b0d15717e61a2f6375f50069f34f7d766d909"}
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.907160 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-b62wz" event={"ID":"4d95f63b-d4f4-4da3-a741-c69b49b9233c","Type":"ContainerStarted","Data":"2d5c08d05029afd3fafc691eca81dec4ca9fd6f614601784041998c721d1fdfd"}
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.910256 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fds9q" event={"ID":"59e82a1f-2c6a-4938-9696-ffe2eac280ce","Type":"ContainerStarted","Data":"218653ffd6e6f1e67a7dbd7e3a9ac0ccf75b109821e487b67c07515f389610ad"}
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.922394 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kfzvb" event={"ID":"4fc91364-276e-4cc3-bf44-3e5dad5ad06e","Type":"ContainerStarted","Data":"948d09e255b48423335b5578bca724a215ab54287ce46f26aca6474522462b0d"}
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.927667 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-cdjsz"]
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.929966 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpj2t\" (UniqueName: \"kubernetes.io/projected/eab5a9da-bb14-4f97-9c54-eaa7972a047d-kube-api-access-rpj2t\") pod \"kube-storage-version-migrator-operator-b67b599dd-76xtz\" (UID: \"eab5a9da-bb14-4f97-9c54-eaa7972a047d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-76xtz"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.930004 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 16:05:33 crc kubenswrapper[4672]: E0217 16:05:33.931408 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:34.431383984 +0000 UTC m=+143.185472716 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.934421 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7"
Feb 17 16:05:33 crc kubenswrapper[4672]: E0217 16:05:33.934925 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:34.434909677 +0000 UTC m=+143.188998409 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.938777 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-px67v"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.945961 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrrfw\" (UniqueName: \"kubernetes.io/projected/fe511f9f-bc6a-4e27-9837-703d6b981fb7-kube-api-access-hrrfw\") pod \"etcd-operator-b45778765-mqrdm\" (UID: \"fe511f9f-bc6a-4e27-9837-703d6b981fb7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mqrdm"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.959942 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pxnrp"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.963207 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-bound-sa-token\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.964098 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6ljr8"]
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.964210 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-76xtz"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.966346 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bbvwk"]
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.969303 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dwz6v"
Feb 17 16:05:33 crc kubenswrapper[4672]: I0217 16:05:33.980449 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2v25\" (UniqueName: \"kubernetes.io/projected/ee33c3da-fd0b-45a5-8337-b600c2b6b11e-kube-api-access-h2v25\") pod \"olm-operator-6b444d44fb-bkxss\" (UID: \"ee33c3da-fd0b-45a5-8337-b600c2b6b11e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bkxss"
Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.006632 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wljng\" (UniqueName: \"kubernetes.io/projected/8f99610f-8623-43fe-a352-8bb6ced6a41c-kube-api-access-wljng\") pod \"machine-config-operator-74547568cd-q54wv\" (UID: \"8f99610f-8623-43fe-a352-8bb6ced6a41c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q54wv"
Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.022960 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvbf8\" (UniqueName: \"kubernetes.io/projected/51a45e2f-3728-46d0-b04c-cdec82ed7d58-kube-api-access-xvbf8\") pod \"console-operator-58897d9998-gcffc\" (UID: \"51a45e2f-3728-46d0-b04c-cdec82ed7d58\") " pod="openshift-console-operator/console-operator-58897d9998-gcffc"
Feb 17 16:05:34 crc kubenswrapper[4672]: W0217 16:05:34.035022 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc83e6ac1_9d88_475f_b293_e3accaf7b812.slice/crio-0acea6c162d6407fe7bef1f2f444d6914fa072edb5efbcf91f32f8dd0b813db4 WatchSource:0}: Error finding container 0acea6c162d6407fe7bef1f2f444d6914fa072edb5efbcf91f32f8dd0b813db4: Status 404 returned error can't find the container with id 0acea6c162d6407fe7bef1f2f444d6914fa072edb5efbcf91f32f8dd0b813db4
Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.035196 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 16:05:34 crc kubenswrapper[4672]: E0217 16:05:34.035353 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:34.535327241 +0000 UTC m=+143.289415973 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.035395 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7"
Feb 17 16:05:34 crc kubenswrapper[4672]: E0217 16:05:34.036208 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:34.536199594 +0000 UTC m=+143.290288326 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.037375 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8ggmr"
Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.046649 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-mqrdm"
Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.051465 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7p722"]
Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.060936 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bkxss"
Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.061365 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5l7j\" (UniqueName: \"kubernetes.io/projected/908f0c62-5b97-4c11-8b5d-6454f36295f6-kube-api-access-f5l7j\") pod \"downloads-7954f5f757-v9lrm\" (UID: \"908f0c62-5b97-4c11-8b5d-6454f36295f6\") " pod="openshift-console/downloads-7954f5f757-v9lrm"
Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.068637 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g6m8r"
Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.074570 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bk22j"
Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.080326 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjchv\" (UniqueName: \"kubernetes.io/projected/740b08b8-8626-4158-813f-1c10e317f517-kube-api-access-zjchv\") pod \"packageserver-d55dfcdfc-8wlwl\" (UID: \"740b08b8-8626-4158-813f-1c10e317f517\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8wlwl"
Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.091544 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-ltvxx"
Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.104716 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtwwh\" (UniqueName: \"kubernetes.io/projected/685f4abd-5760-4f83-b975-0986b69d4cc3-kube-api-access-qtwwh\") pod \"machine-config-server-7prtt\" (UID: \"685f4abd-5760-4f83-b975-0986b69d4cc3\") " pod="openshift-machine-config-operator/machine-config-server-7prtt"
Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.111367 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f7zb5"]
Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.113069 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-hgfxd"]
Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.118327 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-n4nj9"
Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.118541 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lxmvh"]
Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.121857 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h6hxj"]
Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.127404 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-7prtt"
Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.127867 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvtgt\" (UniqueName: \"kubernetes.io/projected/0f0958c0-4b04-4d8e-9bbb-8dd838c9b966-kube-api-access-vvtgt\") pod \"dns-default-s2n6p\" (UID: \"0f0958c0-4b04-4d8e-9bbb-8dd838c9b966\") " pod="openshift-dns/dns-default-s2n6p"
Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.132734 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-s2n6p"
Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.137980 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4p4f\" (UniqueName: \"kubernetes.io/projected/29ca355c-84f8-434c-a892-d0d3c6c78c00-kube-api-access-x4p4f\") pod \"csi-hostpathplugin-zgwq2\" (UID: \"29ca355c-84f8-434c-a892-d0d3c6c78c00\") " pod="hostpath-provisioner/csi-hostpathplugin-zgwq2"
Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.140072 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 16:05:34 crc kubenswrapper[4672]: E0217 16:05:34.140260 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:34.640232993 +0000 UTC m=+143.394321745 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.140434 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7"
Feb 17 16:05:34 crc kubenswrapper[4672]: E0217 16:05:34.140767 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:34.640752407 +0000 UTC m=+143.394841139 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.162767 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-zgwq2"
Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.168672 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wbz7\" (UniqueName: \"kubernetes.io/projected/94dc47ef-d37b-46be-8696-378acb500013-kube-api-access-9wbz7\") pod \"ingress-canary-pp6hc\" (UID: \"94dc47ef-d37b-46be-8696-378acb500013\") " pod="openshift-ingress-canary/ingress-canary-pp6hc"
Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.171571 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pp6hc"
Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.226980 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-gcffc"
Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.234750 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-76rxw"]
Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.241381 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwl87"]
Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.242079 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 16:05:34 crc kubenswrapper[4672]: E0217 16:05:34.242774 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:34.742742893 +0000 UTC m=+143.496831625 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.242890 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-v9lrm"
Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.252884 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q54wv"
Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.255031 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-d9vk6"]
Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.255786 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8cvrf"]
Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.259991 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522400-b79hz"]
Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.284802 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6grzz"]
Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.315453 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-vndpv"
Feb 17 16:05:34 crc kubenswrapper[4672]: W0217 16:05:34.317218 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa50bf9f_5749_43e8_9ddb_7f9be1c6d8ce.slice/crio-4c3c019bb7a81a56388ff701d55c0753282b2f1608fa767d92ea8e8241112b2b WatchSource:0}: Error finding container 4c3c019bb7a81a56388ff701d55c0753282b2f1608fa767d92ea8e8241112b2b: Status 404 returned error can't find the container with id 4c3c019bb7a81a56388ff701d55c0753282b2f1608fa767d92ea8e8241112b2b
Feb 17 16:05:34 crc kubenswrapper[4672]: W0217 16:05:34.317710 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73969925_7fe2_4e3a_9ede_d1bd990f7f71.slice/crio-dec181a4fbe9e703333630feed72d3b8bac1226e20ff11631e8e2d320f2464d8 WatchSource:0}: Error finding container dec181a4fbe9e703333630feed72d3b8bac1226e20ff11631e8e2d320f2464d8: Status 404 returned error can't find the container with id dec181a4fbe9e703333630feed72d3b8bac1226e20ff11631e8e2d320f2464d8
Feb 17 16:05:34 crc kubenswrapper[4672]: W0217 16:05:34.318281 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c333aa9_e463_465a_957f_34e571dc6741.slice/crio-c83f3f3609182bef8e97b091a120155b1b4e02b3f9c59947f3b6870e6e254f92 WatchSource:0}: Error finding container c83f3f3609182bef8e97b091a120155b1b4e02b3f9c59947f3b6870e6e254f92: Status 404 returned error can't find the container with id c83f3f3609182bef8e97b091a120155b1b4e02b3f9c59947f3b6870e6e254f92
Feb 17 16:05:34 crc kubenswrapper[4672]: W0217 16:05:34.332667 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod655735f2_25f3_4cf3_8b40_a35184576e33.slice/crio-7f4ae965c3e6353b8f4f4ab929a3df89b4048aeaf1627471944363eac98614a4 WatchSource:0}: Error finding container 7f4ae965c3e6353b8f4f4ab929a3df89b4048aeaf1627471944363eac98614a4: Status 404 returned error can't find the container with id 
7f4ae965c3e6353b8f4f4ab929a3df89b4048aeaf1627471944363eac98614a4 Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.343184 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" Feb 17 16:05:34 crc kubenswrapper[4672]: E0217 16:05:34.343582 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:34.843561967 +0000 UTC m=+143.597650699 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:34 crc kubenswrapper[4672]: W0217 16:05:34.365239 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod750ef8f5_44ad_4016_8894_0b2a05430464.slice/crio-9a4862b47fceaa04e03c34548b09f49397cebf31862f79f3151db667cc4a2860 WatchSource:0}: Error finding container 9a4862b47fceaa04e03c34548b09f49397cebf31862f79f3151db667cc4a2860: Status 404 returned error can't find the container with id 9a4862b47fceaa04e03c34548b09f49397cebf31862f79f3151db667cc4a2860 Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.366452 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8wlwl" Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.424393 4672 patch_prober.go:28] interesting pod/router-default-5444994796-vndpv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 16:05:34 crc kubenswrapper[4672]: [-]has-synced failed: reason withheld Feb 17 16:05:34 crc kubenswrapper[4672]: [+]process-running ok Feb 17 16:05:34 crc kubenswrapper[4672]: healthz check failed Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.424455 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vndpv" podUID="1d98488b-d521-4207-a7b8-23b37cb1ef98" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.452016 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:34 crc kubenswrapper[4672]: E0217 16:05:34.452389 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:34.952372863 +0000 UTC m=+143.706461595 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.560647 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" Feb 17 16:05:34 crc kubenswrapper[4672]: E0217 16:05:34.562675 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:35.062656688 +0000 UTC m=+143.816745420 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.664288 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:34 crc kubenswrapper[4672]: E0217 16:05:34.665131 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:35.165097755 +0000 UTC m=+143.919186497 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.665618 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" Feb 17 16:05:34 crc kubenswrapper[4672]: E0217 16:05:34.674465 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:35.174444672 +0000 UTC m=+143.928533404 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.724395 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pxnrp"] Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.739100 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dwz6v"] Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.743662 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-px67v"] Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.779088 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:34 crc kubenswrapper[4672]: E0217 16:05:34.779889 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:35.279861838 +0000 UTC m=+144.033950560 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.870692 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-vndpv" podStartSLOduration=117.870673599 podStartE2EDuration="1m57.870673599s" podCreationTimestamp="2026-02-17 16:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:34.865005699 +0000 UTC m=+143.619094431" watchObservedRunningTime="2026-02-17 16:05:34.870673599 +0000 UTC m=+143.624762321" Feb 17 16:05:34 crc kubenswrapper[4672]: W0217 16:05:34.873884 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa3f4bc7_5a08_4820_bba2_12b682296098.slice/crio-65fc1a0a2ec5b005b2c467dd69450265c258ff9f6805a74f745dae5052c310e4 WatchSource:0}: Error finding container 65fc1a0a2ec5b005b2c467dd69450265c258ff9f6805a74f745dae5052c310e4: Status 404 returned error can't find the container with id 65fc1a0a2ec5b005b2c467dd69450265c258ff9f6805a74f745dae5052c310e4 Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.881590 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" Feb 17 16:05:34 crc kubenswrapper[4672]: E0217 16:05:34.881889 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:35.381876115 +0000 UTC m=+144.135964847 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.885190 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-mqrdm"] Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.887477 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8ggmr"] Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.888084 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bkxss"] Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.903738 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-76xtz"] Feb 17 16:05:34 crc kubenswrapper[4672]: W0217 16:05:34.924858 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9318de0_6c45_4506_a667_b8e7180f7584.slice/crio-1e3f1f925690abdb81d9cc47e77b0b9515a9213ee0b2fb9ab4da6f3559a2f707 
WatchSource:0}: Error finding container 1e3f1f925690abdb81d9cc47e77b0b9515a9213ee0b2fb9ab4da6f3559a2f707: Status 404 returned error can't find the container with id 1e3f1f925690abdb81d9cc47e77b0b9515a9213ee0b2fb9ab4da6f3559a2f707 Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.950410 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-px67v" event={"ID":"fa3f4bc7-5a08-4820-bba2-12b682296098","Type":"ContainerStarted","Data":"65fc1a0a2ec5b005b2c467dd69450265c258ff9f6805a74f745dae5052c310e4"} Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.959652 4672 generic.go:334] "Generic (PLEG): container finished" podID="4fc91364-276e-4cc3-bf44-3e5dad5ad06e" containerID="e12259e608198e5cadbd7cb866e9bf31112e351c25315f25795b2bbbf675b1d4" exitCode=0 Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.960189 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kfzvb" event={"ID":"4fc91364-276e-4cc3-bf44-3e5dad5ad06e","Type":"ContainerDied","Data":"e12259e608198e5cadbd7cb866e9bf31112e351c25315f25795b2bbbf675b1d4"} Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.966257 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-d9vk6" event={"ID":"750ef8f5-44ad-4016-8894-0b2a05430464","Type":"ContainerStarted","Data":"9a4862b47fceaa04e03c34548b09f49397cebf31862f79f3151db667cc4a2860"} Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.970229 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lxmvh" event={"ID":"5e8f5285-7002-4472-be6a-a21731ccaf67","Type":"ContainerStarted","Data":"6a54805802c862b3b5544b5e493e3471e45ab2e10b6a1b69193e571819177197"} Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.980772 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-558db77b4-fds9q" event={"ID":"59e82a1f-2c6a-4938-9696-ffe2eac280ce","Type":"ContainerStarted","Data":"8469d573b653a8806c853a1173e0645c56ba099dcf83caa84671c857c933e1b9"} Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.980836 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-fds9q" Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.982621 4672 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-fds9q container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" start-of-body= Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.982680 4672 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-fds9q" podUID="59e82a1f-2c6a-4938-9696-ffe2eac280ce" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.984693 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-b79hz" event={"ID":"2180d9e0-3678-4bdd-84aa-0dba230aa4e3","Type":"ContainerStarted","Data":"790ce8f166ef5660b0f517ea8dadfa2f7e564e77bbd2cff180a2f9601c4b47c0"} Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.986423 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:34 crc kubenswrapper[4672]: E0217 16:05:34.986552 4672 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:35.48652582 +0000 UTC m=+144.240614552 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.987279 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwl87" event={"ID":"b5948d11-a6da-4f21-a6e8-413a28791775","Type":"ContainerStarted","Data":"14f171c17ff095150410f07896dbbab02401ba4f56a61df6a84e1d912b0c519f"} Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.988747 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6ljr8" event={"ID":"c83e6ac1-9d88-475f-b293-e3accaf7b812","Type":"ContainerStarted","Data":"0acea6c162d6407fe7bef1f2f444d6914fa072edb5efbcf91f32f8dd0b813db4"} Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.989758 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bbvwk" event={"ID":"798d1803-87c5-4e9e-a29e-660f313c283c","Type":"ContainerStarted","Data":"41f35dd1f046b752114643546d8ed389517a0ed26248f82fb3824a98ed618177"} Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.993238 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6grzz" 
event={"ID":"43a5e5af-ba41-4a32-9893-1c17a54e7024","Type":"ContainerStarted","Data":"2855380c48ff181a61a85a72a1428594b17b0480ebf8943f179d93e5d2d06a16"} Feb 17 16:05:34 crc kubenswrapper[4672]: I0217 16:05:34.993451 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" Feb 17 16:05:34 crc kubenswrapper[4672]: E0217 16:05:34.993991 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:35.493976027 +0000 UTC m=+144.248064759 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.010102 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8cvrf" event={"ID":"73969925-7fe2-4e3a-9ede-d1bd990f7f71","Type":"ContainerStarted","Data":"dec181a4fbe9e703333630feed72d3b8bac1226e20ff11631e8e2d320f2464d8"} Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.032413 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h6hxj" 
event={"ID":"655735f2-25f3-4cf3-8b40-a35184576e33","Type":"ContainerStarted","Data":"7f4ae965c3e6353b8f4f4ab929a3df89b4048aeaf1627471944363eac98614a4"} Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.037191 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zfmgn" event={"ID":"506b8374-2f07-427a-bf3b-44b1f6f022b5","Type":"ContainerStarted","Data":"fee692b8a5b23e4f35db3b395e39395ae3e4b8ae7638d829f15b59df056bdf3c"} Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.037249 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zfmgn" event={"ID":"506b8374-2f07-427a-bf3b-44b1f6f022b5","Type":"ContainerStarted","Data":"534b4fe49c26c5c7bc62632b1bf1de2e0c0ee11dbca8b5e29ee15f6db0b754f1"} Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.041661 4672 generic.go:334] "Generic (PLEG): container finished" podID="eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba" containerID="84285634d4abf7e2e8cc4daeef1f9fdd5d437adc39968cd8dc928824e197ef65" exitCode=0 Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.041713 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-lzwwl" event={"ID":"eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba","Type":"ContainerDied","Data":"84285634d4abf7e2e8cc4daeef1f9fdd5d437adc39968cd8dc928824e197ef65"} Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.097576 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:35 crc kubenswrapper[4672]: E0217 16:05:35.098869 4672 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:35.598841719 +0000 UTC m=+144.352930451 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.109194 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f7zb5" event={"ID":"f4bd667f-f40c-402e-96f4-6978225fc1ed","Type":"ContainerStarted","Data":"c955e38245621db6b03d715efc4b3ce0fd0994a86943c277f00c5655690d1151"} Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.109693 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f7zb5" event={"ID":"f4bd667f-f40c-402e-96f4-6978225fc1ed","Type":"ContainerStarted","Data":"9a402eb7090fcef93f3515e5234eb1b3fe3bd20526f8bb46304c5e12a6e84c29"} Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.118583 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8wlwl"] Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.137983 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-cdjsz" event={"ID":"6119a50b-94a4-4095-b14c-f009fe646312","Type":"ContainerStarted","Data":"9553fbb6826ed5bbf9b8163914ecedf07799a0150351f6b64ff8bb56f3dda132"} Feb 17 16:05:35 crc 
kubenswrapper[4672]: I0217 16:05:35.138043 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-cdjsz" event={"ID":"6119a50b-94a4-4095-b14c-f009fe646312","Type":"ContainerStarted","Data":"a395188c6aefd938ef8d6cf111a6e80b6baa12a30d26c6be51cac8a8d79d3999"} Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.153992 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-b62wz" event={"ID":"4d95f63b-d4f4-4da3-a741-c69b49b9233c","Type":"ContainerStarted","Data":"39f4108eb2a65ecbd019930ee8b48f92b6d0d4425304c8f3c8140b13ca742b7c"} Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.163333 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-q54wv"] Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.200298 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" Feb 17 16:05:35 crc kubenswrapper[4672]: E0217 16:05:35.200856 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:35.700837915 +0000 UTC m=+144.454926647 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.245959 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-ltvxx"]
Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.251615 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bn2m6" event={"ID":"06bbdf78-c2ef-42f1-8f0d-952f07a4b678","Type":"ContainerStarted","Data":"27c5125e27c3593368ee21ec3d3ba9407e99e7b79ae2c0e0b0c0918738ed4cbe"}
Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.251654 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bn2m6" event={"ID":"06bbdf78-c2ef-42f1-8f0d-952f07a4b678","Type":"ContainerStarted","Data":"05b81895c7db3ba4c68b7cb579e4d4c721112955be46d921297c9551c0509ab4"}
Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.253613 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-7prtt" event={"ID":"685f4abd-5760-4f83-b975-0986b69d4cc3","Type":"ContainerStarted","Data":"ca04dc1efa3c7de6040cde94b687bbadc92f05f951c6be53d00aab617ed3da22"}
Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.254468 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-7p722" event={"ID":"a4206b74-7012-47af-9344-253aa7453e86","Type":"ContainerStarted","Data":"38e3bd524fa50a0739db41a24c64da2e08ae1208ee7aa8e3a2268a2e8c4dd26f"}
Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.254496 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-7p722" event={"ID":"a4206b74-7012-47af-9344-253aa7453e86","Type":"ContainerStarted","Data":"e6b9445b10a044e906b01f8747508b7938737b5ad3a644ecf26493bce4898974"}
Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.255257 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-7p722"
Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.256026 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-76rxw" event={"ID":"9c333aa9-e463-465a-957f-34e571dc6741","Type":"ContainerStarted","Data":"84467dffe6cf4826ec23c2e879822c7007a81248530e05bdd8a3667d782d176d"}
Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.256051 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-76rxw" event={"ID":"9c333aa9-e463-465a-957f-34e571dc6741","Type":"ContainerStarted","Data":"c83f3f3609182bef8e97b091a120155b1b4e02b3f9c59947f3b6870e6e254f92"}
Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.258246 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-pp6hc"]
Feb 17 16:05:35 crc kubenswrapper[4672]: W0217 16:05:35.259717 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod740b08b8_8626_4158_813f_1c10e317f517.slice/crio-b631a911e909199d296a62bf95a24488bc4bb26376fe884d9034fb4bccde277f WatchSource:0}: Error finding container b631a911e909199d296a62bf95a24488bc4bb26376fe884d9034fb4bccde277f: Status 404 returned error can't find the container with id b631a911e909199d296a62bf95a24488bc4bb26376fe884d9034fb4bccde277f
Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.260101 4672 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-7p722 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.260142 4672 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-7p722" podUID="a4206b74-7012-47af-9344-253aa7453e86" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused"
Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.262804 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g6m8r"]
Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.267101 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-wtm9c" event={"ID":"aae518f7-37fe-4bd1-9c5b-ba5186684ebd","Type":"ContainerStarted","Data":"5b5f5d09bbeab6105ce12b0aba5cf787fdf8d6560d79e044fc0d466961d1e9bd"}
Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.268910 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bk22j"]
Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.276443 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pxnrp" event={"ID":"7c71d687-c381-4e1d-8a4a-ae72f4b00f9f","Type":"ContainerStarted","Data":"e8c80ca5a37193602b6a1bc69a0de9ceeeb33d087d3ce0eb99dde160c9f5dcf1"}
Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.279453 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hgfxd" event={"ID":"fa50bf9f-5749-43e8-9ddb-7f9be1c6d8ce","Type":"ContainerStarted","Data":"4c3c019bb7a81a56388ff701d55c0753282b2f1608fa767d92ea8e8241112b2b"}
Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.287188 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-s2n6p"]
Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.289944 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-zgwq2"]
Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.291589 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gcffc"]
Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.295383 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-n4nj9"]
Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.301104 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 16:05:35 crc kubenswrapper[4672]: E0217 16:05:35.301235 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:35.801202897 +0000 UTC m=+144.555291619 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:35 crc kubenswrapper[4672]: E0217 16:05:35.311462 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:35.811424987 +0000 UTC m=+144.565513719 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.311182 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7"
Feb 17 16:05:35 crc kubenswrapper[4672]: W0217 16:05:35.319232 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f0958c0_4b04_4d8e_9bbb_8dd838c9b966.slice/crio-5636a85e6f06234a67addda8ae6c40fc33d33e095662f2eda160f9977b795600 WatchSource:0}: Error finding container 5636a85e6f06234a67addda8ae6c40fc33d33e095662f2eda160f9977b795600: Status 404 returned error can't find the container with id 5636a85e6f06234a67addda8ae6c40fc33d33e095662f2eda160f9977b795600
Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.324454 4672 patch_prober.go:28] interesting pod/router-default-5444994796-vndpv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 16:05:35 crc kubenswrapper[4672]: [-]has-synced failed: reason withheld
Feb 17 16:05:35 crc kubenswrapper[4672]: [+]process-running ok
Feb 17 16:05:35 crc kubenswrapper[4672]: healthz check failed
Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.324753 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vndpv" podUID="1d98488b-d521-4207-a7b8-23b37cb1ef98" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.349228 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-v9lrm"]
Feb 17 16:05:35 crc kubenswrapper[4672]: W0217 16:05:35.393898 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod908f0c62_5b97_4c11_8b5d_6454f36295f6.slice/crio-2ecd696fb640c9529ff7850f255330e42582d1b49f93f05246c3ce94239fcbd4 WatchSource:0}: Error finding container 2ecd696fb640c9529ff7850f255330e42582d1b49f93f05246c3ce94239fcbd4: Status 404 returned error can't find the container with id 2ecd696fb640c9529ff7850f255330e42582d1b49f93f05246c3ce94239fcbd4
Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.415018 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 16:05:35 crc kubenswrapper[4672]: E0217 16:05:35.415170 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:35.915127868 +0000 UTC m=+144.669216610 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.415599 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7"
Feb 17 16:05:35 crc kubenswrapper[4672]: E0217 16:05:35.416093 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:35.916069863 +0000 UTC m=+144.670158595 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.517545 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 16:05:35 crc kubenswrapper[4672]: E0217 16:05:35.517680 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:36.017650758 +0000 UTC m=+144.771739490 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.518003 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7"
Feb 17 16:05:35 crc kubenswrapper[4672]: E0217 16:05:35.519247 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:36.019223259 +0000 UTC m=+144.773311991 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.618746 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 16:05:35 crc kubenswrapper[4672]: E0217 16:05:35.618939 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:36.118916324 +0000 UTC m=+144.873005056 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.619136 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7"
Feb 17 16:05:35 crc kubenswrapper[4672]: E0217 16:05:35.619482 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:36.119470719 +0000 UTC m=+144.873559451 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.719929 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 16:05:35 crc kubenswrapper[4672]: E0217 16:05:35.720197 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:36.22016379 +0000 UTC m=+144.974252562 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.720486 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7"
Feb 17 16:05:35 crc kubenswrapper[4672]: E0217 16:05:35.720892 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:36.220878669 +0000 UTC m=+144.974967411 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.821138 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 16:05:35 crc kubenswrapper[4672]: E0217 16:05:35.821665 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:36.321649802 +0000 UTC m=+145.075738534 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.925473 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7"
Feb 17 16:05:35 crc kubenswrapper[4672]: E0217 16:05:35.925927 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:36.425914088 +0000 UTC m=+145.180002820 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.944643 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-wtm9c" podStartSLOduration=119.944625063 podStartE2EDuration="1m59.944625063s" podCreationTimestamp="2026-02-17 16:03:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:35.943052121 +0000 UTC m=+144.697140853" watchObservedRunningTime="2026-02-17 16:05:35.944625063 +0000 UTC m=+144.698713795"
Feb 17 16:05:35 crc kubenswrapper[4672]: I0217 16:05:35.947095 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-7p722" podStartSLOduration=118.947069857 podStartE2EDuration="1m58.947069857s" podCreationTimestamp="2026-02-17 16:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:35.924951253 +0000 UTC m=+144.679039995" watchObservedRunningTime="2026-02-17 16:05:35.947069857 +0000 UTC m=+144.701158589"
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.016384 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-76rxw" podStartSLOduration=119.016367259 podStartE2EDuration="1m59.016367259s" podCreationTimestamp="2026-02-17 16:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:36.014954391 +0000 UTC m=+144.769043133" watchObservedRunningTime="2026-02-17 16:05:36.016367259 +0000 UTC m=+144.770455991"
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.026305 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 16:05:36 crc kubenswrapper[4672]: E0217 16:05:36.026657 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:36.52664286 +0000 UTC m=+145.280731582 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.097266 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zfmgn" podStartSLOduration=119.097252396 podStartE2EDuration="1m59.097252396s" podCreationTimestamp="2026-02-17 16:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:36.095823639 +0000 UTC m=+144.849912371" watchObservedRunningTime="2026-02-17 16:05:36.097252396 +0000 UTC m=+144.851341128"
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.127296 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7"
Feb 17 16:05:36 crc kubenswrapper[4672]: E0217 16:05:36.128144 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:36.628133633 +0000 UTC m=+145.382222365 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.155869 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bn2m6" podStartSLOduration=120.155850575 podStartE2EDuration="2m0.155850575s" podCreationTimestamp="2026-02-17 16:03:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:36.154121089 +0000 UTC m=+144.908209821" watchObservedRunningTime="2026-02-17 16:05:36.155850575 +0000 UTC m=+144.909939307"
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.199454 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-fds9q" podStartSLOduration=120.199422167 podStartE2EDuration="2m0.199422167s" podCreationTimestamp="2026-02-17 16:03:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:36.196493079 +0000 UTC m=+144.950581821" watchObservedRunningTime="2026-02-17 16:05:36.199422167 +0000 UTC m=+144.953510919"
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.233970 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 16:05:36 crc kubenswrapper[4672]: E0217 16:05:36.234187 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:36.734158715 +0000 UTC m=+145.488247447 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.234313 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7"
Feb 17 16:05:36 crc kubenswrapper[4672]: E0217 16:05:36.234583 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:36.734572426 +0000 UTC m=+145.488661158 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.321978 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q54wv" event={"ID":"8f99610f-8623-43fe-a352-8bb6ced6a41c","Type":"ContainerStarted","Data":"2440e3e15c6e96f1be74551478b0c5accf163dc9761977690673a1b706c2b2b1"}
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.322329 4672 patch_prober.go:28] interesting pod/router-default-5444994796-vndpv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 16:05:36 crc kubenswrapper[4672]: [-]has-synced failed: reason withheld
Feb 17 16:05:36 crc kubenswrapper[4672]: [+]process-running ok
Feb 17 16:05:36 crc kubenswrapper[4672]: healthz check failed
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.322382 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vndpv" podUID="1d98488b-d521-4207-a7b8-23b37cb1ef98" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.328736 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h6hxj" event={"ID":"655735f2-25f3-4cf3-8b40-a35184576e33","Type":"ContainerStarted","Data":"a733ce71df03f3651d10fc3723c0f130b4c90025dd8e11ddc35d7b246fdca1aa"}
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.340611 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 16:05:36 crc kubenswrapper[4672]: E0217 16:05:36.341060 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:36.84104318 +0000 UTC m=+145.595131912 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.363019 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f7zb5" event={"ID":"f4bd667f-f40c-402e-96f4-6978225fc1ed","Type":"ContainerStarted","Data":"ac87411da77f85674fa353d9a8152efee84766eeb2d339091129c713dc3a8728"}
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.363361 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f7zb5"
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.365715 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-gcffc" event={"ID":"51a45e2f-3728-46d0-b04c-cdec82ed7d58","Type":"ContainerStarted","Data":"a4978023709fb02d9973def038166b3c6b95f4c519834b1a0c3c8912d3a1e36c"}
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.367692 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zgwq2" event={"ID":"29ca355c-84f8-434c-a892-d0d3c6c78c00","Type":"ContainerStarted","Data":"e216531b42c46cd0712cff0e51103b9b1d47a7092a3cc0287ff03fecbe210885"}
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.375486 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h6hxj" podStartSLOduration=120.37547117 podStartE2EDuration="2m0.37547117s" podCreationTimestamp="2026-02-17 16:03:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:36.374112364 +0000 UTC m=+145.128201096" watchObservedRunningTime="2026-02-17 16:05:36.37547117 +0000 UTC m=+145.129559912"
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.403440 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f7zb5" podStartSLOduration=119.403426549 podStartE2EDuration="1m59.403426549s" podCreationTimestamp="2026-02-17 16:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:36.401669012 +0000 UTC m=+145.155757744" watchObservedRunningTime="2026-02-17 16:05:36.403426549 +0000 UTC m=+145.157515271"
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.404392 4672 generic.go:334] "Generic (PLEG): container finished" podID="43a5e5af-ba41-4a32-9893-1c17a54e7024" containerID="42ec03a03317b346c24b9c725ff43dca6634b350501698e358ab8b1237c0e5f2" exitCode=0
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.405136 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6grzz" event={"ID":"43a5e5af-ba41-4a32-9893-1c17a54e7024","Type":"ContainerDied","Data":"42ec03a03317b346c24b9c725ff43dca6634b350501698e358ab8b1237c0e5f2"}
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.412607 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pxnrp" event={"ID":"7c71d687-c381-4e1d-8a4a-ae72f4b00f9f","Type":"ContainerStarted","Data":"bd6c8aef4716c7cd2f46788783fcefe037dd966ecf9cee0dc041ba4324fed83d"}
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.436912 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8wlwl" event={"ID":"740b08b8-8626-4158-813f-1c10e317f517","Type":"ContainerStarted","Data":"a7691b8c04b6d85f3902bb14cd5226995d89d0b2b5502a5ae27d204c0361d7b9"}
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.436946 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8wlwl" event={"ID":"740b08b8-8626-4158-813f-1c10e317f517","Type":"ContainerStarted","Data":"b631a911e909199d296a62bf95a24488bc4bb26376fe884d9034fb4bccde277f"}
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.437782 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8wlwl"
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.439209 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8ggmr" event={"ID":"bbc8108a-8a2d-4f9a-af8f-335f0bf8ff6d","Type":"ContainerStarted","Data":"1cd90f1ebd464eeb155baff601e8143bfae11ce3db32ae13df8ad9212fa90066"}
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.439236 4672 kubelet.go:2453] "SyncLoop (PLEG): event
for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8ggmr" event={"ID":"bbc8108a-8a2d-4f9a-af8f-335f0bf8ff6d","Type":"ContainerStarted","Data":"2ff7a77392765c9c835e3f27760d6dee8595151ad6e63916c0a44cd7de7663df"} Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.449773 4672 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-8wlwl container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" start-of-body= Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.449849 4672 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8wlwl" podUID="740b08b8-8626-4158-813f-1c10e317f517" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.450145 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-v9lrm" event={"ID":"908f0c62-5b97-4c11-8b5d-6454f36295f6","Type":"ContainerStarted","Data":"2ecd696fb640c9529ff7850f255330e42582d1b49f93f05246c3ce94239fcbd4"} Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.450308 4672 csr.go:261] certificate signing request csr-w4gt8 is approved, waiting to be issued Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.451108 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" Feb 17 16:05:36 crc kubenswrapper[4672]: E0217 16:05:36.452662 4672 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:36.952651879 +0000 UTC m=+145.706740611 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.460433 4672 csr.go:257] certificate signing request csr-w4gt8 is issued Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.461959 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pxnrp" podStartSLOduration=119.461935565 podStartE2EDuration="1m59.461935565s" podCreationTimestamp="2026-02-17 16:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:36.457779825 +0000 UTC m=+145.211868557" watchObservedRunningTime="2026-02-17 16:05:36.461935565 +0000 UTC m=+145.216024297" Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.495501 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8wlwl" podStartSLOduration=119.495485182 podStartE2EDuration="1m59.495485182s" podCreationTimestamp="2026-02-17 16:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:36.485311793 +0000 UTC 
m=+145.239400525" watchObservedRunningTime="2026-02-17 16:05:36.495485182 +0000 UTC m=+145.249573904" Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.502200 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-b79hz" event={"ID":"2180d9e0-3678-4bdd-84aa-0dba230aa4e3","Type":"ContainerStarted","Data":"05f802e5ab0a5bcc44ea9f95953f50154f395bfe8d4a7775d34ed6c635f654c3"} Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.533871 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-b79hz" podStartSLOduration=120.533840125 podStartE2EDuration="2m0.533840125s" podCreationTimestamp="2026-02-17 16:03:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:36.527806166 +0000 UTC m=+145.281894898" watchObservedRunningTime="2026-02-17 16:05:36.533840125 +0000 UTC m=+145.287928857" Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.534645 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-pp6hc" event={"ID":"94dc47ef-d37b-46be-8696-378acb500013","Type":"ContainerStarted","Data":"61ab11b0cffc41e51dc59d5370eedbfb35a85cfa055af0e2e5a5f6af209a82cf"} Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.534689 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-pp6hc" event={"ID":"94dc47ef-d37b-46be-8696-378acb500013","Type":"ContainerStarted","Data":"a38d7b894f7f00fc6ea110dc5df102517ffa0011f3e8ba911b39f76bdbcf2807"} Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.562286 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:36 crc kubenswrapper[4672]: E0217 16:05:36.562552 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:37.062490382 +0000 UTC m=+145.816579114 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:36 crc kubenswrapper[4672]: E0217 16:05:36.563441 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:37.063423557 +0000 UTC m=+145.817512289 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.563634 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.573359 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-pp6hc" podStartSLOduration=5.573344419 podStartE2EDuration="5.573344419s" podCreationTimestamp="2026-02-17 16:05:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:36.5722211 +0000 UTC m=+145.326309832" watchObservedRunningTime="2026-02-17 16:05:36.573344419 +0000 UTC m=+145.327433151" Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.575783 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-cdjsz" event={"ID":"6119a50b-94a4-4095-b14c-f009fe646312","Type":"ContainerStarted","Data":"f604b9dbad9f4b294162cbee985a0a10f9cd99fb71acb7567a72d2d6ff8b65b3"} Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.594426 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-b62wz" 
event={"ID":"4d95f63b-d4f4-4da3-a741-c69b49b9233c","Type":"ContainerStarted","Data":"2b3e17766060e54841ae1e183971f8670cc5cdbc5de35c27410c766aff918342"} Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.599532 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-n4nj9" event={"ID":"4aec7350-9b5f-44c1-9a39-24a95a286233","Type":"ContainerStarted","Data":"7ee89381aac667bf4296ba803f65050c14121bcefdea6efbb99914cbde82bc86"} Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.603691 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bkxss" event={"ID":"ee33c3da-fd0b-45a5-8337-b600c2b6b11e","Type":"ContainerStarted","Data":"cde5220595900e6358acabe7f9a354a3a4dc052745df445fa7441eb565f82bbf"} Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.603843 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bkxss" event={"ID":"ee33c3da-fd0b-45a5-8337-b600c2b6b11e","Type":"ContainerStarted","Data":"049f976295523af9c84f74fada004692efd5634fc0a0c6f04872640ee5c40fa9"} Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.605447 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bkxss" Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.615680 4672 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-bkxss container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.615747 4672 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bkxss" podUID="ee33c3da-fd0b-45a5-8337-b600c2b6b11e" containerName="olm-operator" 
probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.618491 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-cdjsz" podStartSLOduration=119.618482422 podStartE2EDuration="1m59.618482422s" podCreationTimestamp="2026-02-17 16:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:36.61611301 +0000 UTC m=+145.370201732" watchObservedRunningTime="2026-02-17 16:05:36.618482422 +0000 UTC m=+145.372571154" Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.630838 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bbvwk" event={"ID":"798d1803-87c5-4e9e-a29e-660f313c283c","Type":"ContainerStarted","Data":"2afdf81d2446eed9b1307d0d8d6cac38de7d54f86d6dbd7f630f0d347b893d54"} Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.631760 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bbvwk" Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.635213 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-7prtt" event={"ID":"685f4abd-5760-4f83-b975-0986b69d4cc3","Type":"ContainerStarted","Data":"41906d21ee034e4b0470b4f50aad0f7757f6726902324284c7874bbab1226344"} Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.639088 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-d9vk6" event={"ID":"750ef8f5-44ad-4016-8894-0b2a05430464","Type":"ContainerStarted","Data":"7021f5fb4f39da5fa603e0bddd03a5b74b61918c0c7d030e214c8f8e0caa96ed"} Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.651569 
4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-b62wz" podStartSLOduration=119.651551106 podStartE2EDuration="1m59.651551106s" podCreationTimestamp="2026-02-17 16:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:36.649529663 +0000 UTC m=+145.403618405" watchObservedRunningTime="2026-02-17 16:05:36.651551106 +0000 UTC m=+145.405639838" Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.653785 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-mqrdm" event={"ID":"fe511f9f-bc6a-4e27-9837-703d6b981fb7","Type":"ContainerStarted","Data":"83029d4a077c773fc86cf4841d737c85e31cee36a28d216d8de0ac208c58b38c"} Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.654055 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-mqrdm" event={"ID":"fe511f9f-bc6a-4e27-9837-703d6b981fb7","Type":"ContainerStarted","Data":"aeec4dd2916ebe31f821c1bc555691d7055c877803374bf172b3f0986d709651"} Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.665946 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:36 crc kubenswrapper[4672]: E0217 16:05:36.666637 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:37.166611804 +0000 UTC m=+145.920700536 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.668151 4672 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-bbvwk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.668212 4672 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bbvwk" podUID="798d1803-87c5-4e9e-a29e-660f313c283c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.672901 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" Feb 17 16:05:36 crc kubenswrapper[4672]: E0217 16:05:36.674611 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-17 16:05:37.174593685 +0000 UTC m=+145.928682417 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.679617 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bkxss" podStartSLOduration=119.679590257 podStartE2EDuration="1m59.679590257s" podCreationTimestamp="2026-02-17 16:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:36.672336766 +0000 UTC m=+145.426425498" watchObservedRunningTime="2026-02-17 16:05:36.679590257 +0000 UTC m=+145.433678989" Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.682346 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-px67v" event={"ID":"fa3f4bc7-5a08-4820-bba2-12b682296098","Type":"ContainerStarted","Data":"0ef01085b80c4c41f558c3cc9e01e930112e8dee0df00ad20ef1d70cc3b63704"} Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.696245 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6ljr8" event={"ID":"c83e6ac1-9d88-475f-b293-e3accaf7b812","Type":"ContainerStarted","Data":"0bd72103e06462b421cdd7d8161d3b4bb060595f23895fcd38a96717392d59cc"} Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.702719 4672 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-n4nj9" podStartSLOduration=119.702702578 podStartE2EDuration="1m59.702702578s" podCreationTimestamp="2026-02-17 16:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:36.702525574 +0000 UTC m=+145.456614316" watchObservedRunningTime="2026-02-17 16:05:36.702702578 +0000 UTC m=+145.456791310" Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.718640 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-76xtz" event={"ID":"eab5a9da-bb14-4f97-9c54-eaa7972a047d","Type":"ContainerStarted","Data":"6af5ace8d523623eefd9a8867f21a1726fa65a0e9cddcdbd1a3138be01845bd0"} Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.718677 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-76xtz" event={"ID":"eab5a9da-bb14-4f97-9c54-eaa7972a047d","Type":"ContainerStarted","Data":"68943d881c8496cb1fb009de6d8c905c12c3ba74a8dfb92c4b2da2e19b2878b6"} Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.733441 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bbvwk" podStartSLOduration=119.73341525 podStartE2EDuration="1m59.73341525s" podCreationTimestamp="2026-02-17 16:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:36.731122389 +0000 UTC m=+145.485211121" watchObservedRunningTime="2026-02-17 16:05:36.73341525 +0000 UTC m=+145.487503982" Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.763167 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kfzvb" event={"ID":"4fc91364-276e-4cc3-bf44-3e5dad5ad06e","Type":"ContainerStarted","Data":"33f60ef629ee61c7b3cd1e829f63964f7e9102c9414f98e208d0a52a1cc0d87f"} Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.770846 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwl87" event={"ID":"b5948d11-a6da-4f21-a6e8-413a28791775","Type":"ContainerStarted","Data":"2e7ee527cc36a23dddfb24c985f54cd8f62474e27c49b6359dc06deaf0c13a85"} Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.771583 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwl87" Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.776413 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-px67v" podStartSLOduration=119.776404345 podStartE2EDuration="1m59.776404345s" podCreationTimestamp="2026-02-17 16:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:36.773808346 +0000 UTC m=+145.527897078" watchObservedRunningTime="2026-02-17 16:05:36.776404345 +0000 UTC m=+145.530493077" Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.779885 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bk22j" event={"ID":"2ed3c87a-d599-4e91-92ce-377ddef564da","Type":"ContainerStarted","Data":"3b6eee1fd7cc5c0e14628e9f22d295d86b1e2a141b8832c504bb8bd1fdafef4d"} Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.780690 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-bk22j" Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.780908 4672 
patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-vwl87 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.781062 4672 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwl87" podUID="b5948d11-a6da-4f21-a6e8-413a28791775" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.784157 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.784394 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-s2n6p" event={"ID":"0f0958c0-4b04-4d8e-9bbb-8dd838c9b966","Type":"ContainerStarted","Data":"5636a85e6f06234a67addda8ae6c40fc33d33e095662f2eda160f9977b795600"} Feb 17 16:05:36 crc kubenswrapper[4672]: E0217 16:05:36.784623 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:37.284586621 +0000 UTC m=+146.038675353 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.791212 4672 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-bk22j container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body=
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.791268 4672 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-bk22j" podUID="2ed3c87a-d599-4e91-92ce-377ddef564da" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused"
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.792045 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7"
Feb 17 16:05:36 crc kubenswrapper[4672]: E0217 16:05:36.800767 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:37.300751468 +0000 UTC m=+146.054840200 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.808182 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lxmvh" event={"ID":"5e8f5285-7002-4472-be6a-a21731ccaf67","Type":"ContainerStarted","Data":"3bcd52d911755554c2fd6d5c93d381a1bafd1937e7275570c82ad1a80c48c273"}
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.812764 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8cvrf" event={"ID":"73969925-7fe2-4e3a-9ede-d1bd990f7f71","Type":"ContainerStarted","Data":"22e7c821764c7bd6f5e63594af53607dbd6c453422eb83065aba28adef62c6b2"}
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.814588 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-d9vk6" podStartSLOduration=119.814570874 podStartE2EDuration="1m59.814570874s" podCreationTimestamp="2026-02-17 16:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:36.811552324 +0000 UTC m=+145.565641056" watchObservedRunningTime="2026-02-17 16:05:36.814570874 +0000 UTC m=+145.568659606"
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.815284 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-ltvxx" event={"ID":"00f1ee53-d03e-47ad-bf0e-d04589199cb5","Type":"ContainerStarted","Data":"33e504283b0bbd4768dbc49e6208903d0e0224e11cd6a44ad6fd8adc644b9616"}
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.833735 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hgfxd" event={"ID":"fa50bf9f-5749-43e8-9ddb-7f9be1c6d8ce","Type":"ContainerStarted","Data":"f6060d63aa3b3385aa771ebf1ed715ec8994f6c3826e9a80853198bc7f358d78"}
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.849297 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-mqrdm" podStartSLOduration=119.849266591 podStartE2EDuration="1m59.849266591s" podCreationTimestamp="2026-02-17 16:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:36.839343648 +0000 UTC m=+145.593432380" watchObservedRunningTime="2026-02-17 16:05:36.849266591 +0000 UTC m=+145.603355323"
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.893469 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 16:05:36 crc kubenswrapper[4672]: E0217 16:05:36.894657 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:37.39463416 +0000 UTC m=+146.148722892 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.939180 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g6m8r" event={"ID":"e0bef061-3829-41ea-926f-058de4404865","Type":"ContainerStarted","Data":"0a0cf6649649a1af6ef70f28bc614f8df15d6a75f7f0d620896b1df0f84eca79"}
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.952792 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6ljr8" podStartSLOduration=119.952759206 podStartE2EDuration="1m59.952759206s" podCreationTimestamp="2026-02-17 16:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:36.892413271 +0000 UTC m=+145.646502003" watchObservedRunningTime="2026-02-17 16:05:36.952759206 +0000 UTC m=+145.706847938"
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.956389 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-7prtt" podStartSLOduration=6.956369891 podStartE2EDuration="6.956369891s" podCreationTimestamp="2026-02-17 16:05:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:36.950126856 +0000 UTC m=+145.704215578" watchObservedRunningTime="2026-02-17 16:05:36.956369891 +0000 UTC m=+145.710458633"
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.967683 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dwz6v" event={"ID":"e9318de0-6c45-4506-a667-b8e7180f7584","Type":"ContainerStarted","Data":"15b5f1b7ad8a42f94f66c6460921d58cf74897cefb259dc5c26c225d8eacdcdb"}
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.967784 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dwz6v" event={"ID":"e9318de0-6c45-4506-a667-b8e7180f7584","Type":"ContainerStarted","Data":"1e3f1f925690abdb81d9cc47e77b0b9515a9213ee0b2fb9ab4da6f3559a2f707"}
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.968899 4672 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-7p722 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.968942 4672 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-7p722" podUID="a4206b74-7012-47af-9344-253aa7453e86" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused"
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.986090 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kfzvb" podStartSLOduration=119.986069516 podStartE2EDuration="1m59.986069516s" podCreationTimestamp="2026-02-17 16:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:36.983728355 +0000 UTC m=+145.737817087" watchObservedRunningTime="2026-02-17 16:05:36.986069516 +0000 UTC m=+145.740158248"
Feb 17 16:05:36 crc kubenswrapper[4672]: I0217 16:05:36.995715 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7"
Feb 17 16:05:36 crc kubenswrapper[4672]: E0217 16:05:36.996867 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:37.496853031 +0000 UTC m=+146.250941763 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:37 crc kubenswrapper[4672]: I0217 16:05:37.012820 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-bk22j" podStartSLOduration=120.012802983 podStartE2EDuration="2m0.012802983s" podCreationTimestamp="2026-02-17 16:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:37.009563157 +0000 UTC m=+145.763651899" watchObservedRunningTime="2026-02-17 16:05:37.012802983 +0000 UTC m=+145.766891715"
Feb 17 16:05:37 crc kubenswrapper[4672]: I0217 16:05:37.027212 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g6m8r" podStartSLOduration=120.027197183 podStartE2EDuration="2m0.027197183s" podCreationTimestamp="2026-02-17 16:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:37.025884749 +0000 UTC m=+145.779973481" watchObservedRunningTime="2026-02-17 16:05:37.027197183 +0000 UTC m=+145.781285915"
Feb 17 16:05:37 crc kubenswrapper[4672]: I0217 16:05:37.075316 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hgfxd" podStartSLOduration=120.075295125 podStartE2EDuration="2m0.075295125s" podCreationTimestamp="2026-02-17 16:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:37.072115491 +0000 UTC m=+145.826204223" watchObservedRunningTime="2026-02-17 16:05:37.075295125 +0000 UTC m=+145.829383857"
Feb 17 16:05:37 crc kubenswrapper[4672]: I0217 16:05:37.099354 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 16:05:37 crc kubenswrapper[4672]: E0217 16:05:37.103500 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:37.603477049 +0000 UTC m=+146.357565781 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:37 crc kubenswrapper[4672]: I0217 16:05:37.105047 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-76xtz" podStartSLOduration=120.105031171 podStartE2EDuration="2m0.105031171s" podCreationTimestamp="2026-02-17 16:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:37.094629346 +0000 UTC m=+145.848718078" watchObservedRunningTime="2026-02-17 16:05:37.105031171 +0000 UTC m=+145.859119903"
Feb 17 16:05:37 crc kubenswrapper[4672]: I0217 16:05:37.113869 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-fds9q"
Feb 17 16:05:37 crc kubenswrapper[4672]: I0217 16:05:37.155580 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dwz6v" podStartSLOduration=120.155552546 podStartE2EDuration="2m0.155552546s" podCreationTimestamp="2026-02-17 16:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:37.154480687 +0000 UTC m=+145.908569409" watchObservedRunningTime="2026-02-17 16:05:37.155552546 +0000 UTC m=+145.909641268"
Feb 17 16:05:37 crc kubenswrapper[4672]: I0217 16:05:37.157363 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwl87" podStartSLOduration=120.157355443 podStartE2EDuration="2m0.157355443s" podCreationTimestamp="2026-02-17 16:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:37.123412976 +0000 UTC m=+145.877501708" watchObservedRunningTime="2026-02-17 16:05:37.157355443 +0000 UTC m=+145.911444175"
Feb 17 16:05:37 crc kubenswrapper[4672]: I0217 16:05:37.206172 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7"
Feb 17 16:05:37 crc kubenswrapper[4672]: E0217 16:05:37.206518 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:37.706493062 +0000 UTC m=+146.460581794 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:37 crc kubenswrapper[4672]: I0217 16:05:37.310907 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 16:05:37 crc kubenswrapper[4672]: E0217 16:05:37.311277 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:37.811262841 +0000 UTC m=+146.565351573 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:37 crc kubenswrapper[4672]: I0217 16:05:37.320700 4672 patch_prober.go:28] interesting pod/router-default-5444994796-vndpv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 16:05:37 crc kubenswrapper[4672]: [-]has-synced failed: reason withheld
Feb 17 16:05:37 crc kubenswrapper[4672]: [+]process-running ok
Feb 17 16:05:37 crc kubenswrapper[4672]: healthz check failed
Feb 17 16:05:37 crc kubenswrapper[4672]: I0217 16:05:37.320756 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vndpv" podUID="1d98488b-d521-4207-a7b8-23b37cb1ef98" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 16:05:37 crc kubenswrapper[4672]: I0217 16:05:37.412141 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7"
Feb 17 16:05:37 crc kubenswrapper[4672]: E0217 16:05:37.412546 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:37.912497667 +0000 UTC m=+146.666586399 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:37 crc kubenswrapper[4672]: I0217 16:05:37.466969 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-17 16:00:36 +0000 UTC, rotation deadline is 2026-12-04 02:18:13.990945365 +0000 UTC
Feb 17 16:05:37 crc kubenswrapper[4672]: I0217 16:05:37.467279 4672 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6946h12m36.523669051s for next certificate rotation
Feb 17 16:05:37 crc kubenswrapper[4672]: I0217 16:05:37.513612 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 16:05:37 crc kubenswrapper[4672]: E0217 16:05:37.513945 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:38.013930217 +0000 UTC m=+146.768018949 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:37 crc kubenswrapper[4672]: I0217 16:05:37.615128 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7"
Feb 17 16:05:37 crc kubenswrapper[4672]: E0217 16:05:37.615628 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:38.115606545 +0000 UTC m=+146.869695357 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:37 crc kubenswrapper[4672]: I0217 16:05:37.716783 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 16:05:37 crc kubenswrapper[4672]: E0217 16:05:37.717093 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:38.217079417 +0000 UTC m=+146.971168149 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:37 crc kubenswrapper[4672]: I0217 16:05:37.818163 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7"
Feb 17 16:05:37 crc kubenswrapper[4672]: E0217 16:05:37.818645 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:38.31860155 +0000 UTC m=+147.072690272 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:37 crc kubenswrapper[4672]: I0217 16:05:37.919536 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 16:05:37 crc kubenswrapper[4672]: E0217 16:05:37.919712 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:38.419678531 +0000 UTC m=+147.173767253 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:37 crc kubenswrapper[4672]: I0217 16:05:37.919756 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7"
Feb 17 16:05:37 crc kubenswrapper[4672]: E0217 16:05:37.920058 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:38.420050471 +0000 UTC m=+147.174139203 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:37 crc kubenswrapper[4672]: I0217 16:05:37.972634 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-gcffc" event={"ID":"51a45e2f-3728-46d0-b04c-cdec82ed7d58","Type":"ContainerStarted","Data":"adf54a5e5ca9b57d8261496f0e934aaae2843b41e3272874dd0ca49162d18416"}
Feb 17 16:05:37 crc kubenswrapper[4672]: I0217 16:05:37.973744 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-gcffc"
Feb 17 16:05:37 crc kubenswrapper[4672]: I0217 16:05:37.975317 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zgwq2" event={"ID":"29ca355c-84f8-434c-a892-d0d3c6c78c00","Type":"ContainerStarted","Data":"d738a11ac346ddf4bbad2884a7fa3759d3a0a99b7c12720dcc7fca07ce0eaff7"}
Feb 17 16:05:37 crc kubenswrapper[4672]: I0217 16:05:37.975823 4672 patch_prober.go:28] interesting pod/console-operator-58897d9998-gcffc container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Feb 17 16:05:37 crc kubenswrapper[4672]: I0217 16:05:37.975857 4672 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-gcffc" podUID="51a45e2f-3728-46d0-b04c-cdec82ed7d58" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Feb 17 16:05:37 crc kubenswrapper[4672]: I0217 16:05:37.977054 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q54wv" event={"ID":"8f99610f-8623-43fe-a352-8bb6ced6a41c","Type":"ContainerStarted","Data":"b11987f0f1d099650730e6b713570ca3a86278e3691b8e67e3390edf11a3e417"}
Feb 17 16:05:37 crc kubenswrapper[4672]: I0217 16:05:37.977086 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q54wv" event={"ID":"8f99610f-8623-43fe-a352-8bb6ced6a41c","Type":"ContainerStarted","Data":"06066fd0693c7be73e5b20af4a541dea6f0a112d0033353494f8f18143c4a698"}
Feb 17 16:05:37 crc kubenswrapper[4672]: I0217 16:05:37.979440 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-ltvxx" event={"ID":"00f1ee53-d03e-47ad-bf0e-d04589199cb5","Type":"ContainerStarted","Data":"e46d38473d3f395d12f5331548d60c53b83e371e56e9e0170e28b6d81dcce98a"}
Feb 17 16:05:37 crc kubenswrapper[4672]: I0217 16:05:37.979471 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-ltvxx" event={"ID":"00f1ee53-d03e-47ad-bf0e-d04589199cb5","Type":"ContainerStarted","Data":"57f71391612442f0349308a1d2f29fbe89d1c3837c9fe631f1faac18b182139f"}
Feb 17 16:05:37 crc kubenswrapper[4672]: I0217 16:05:37.981423 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hgfxd" event={"ID":"fa50bf9f-5749-43e8-9ddb-7f9be1c6d8ce","Type":"ContainerStarted","Data":"dd5550e4e0a0f3b0163943ac1d255d7c3a83e7351543701b115bb6edbaab2352"}
Feb 17 16:05:37 crc kubenswrapper[4672]: I0217 16:05:37.982767 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-n4nj9" event={"ID":"4aec7350-9b5f-44c1-9a39-24a95a286233","Type":"ContainerStarted","Data":"2cc591c33b5758800d0f2e366d9bd479d177b44644f106fc45970e5cd8c516d9"}
Feb 17 16:05:37 crc kubenswrapper[4672]: I0217 16:05:37.984341 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8cvrf" event={"ID":"73969925-7fe2-4e3a-9ede-d1bd990f7f71","Type":"ContainerStarted","Data":"a663a810be13b4ff11d532b7471d6a293a601ba69ce13319f547d26d66f86f00"}
Feb 17 16:05:37 crc kubenswrapper[4672]: I0217 16:05:37.986313 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-s2n6p" event={"ID":"0f0958c0-4b04-4d8e-9bbb-8dd838c9b966","Type":"ContainerStarted","Data":"1f5c54fe3c8ab3ed0133a289c386898b97556146cf7f4326a7dccf3a5e308e8f"}
Feb 17 16:05:37 crc kubenswrapper[4672]: I0217 16:05:37.986341 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-s2n6p" event={"ID":"0f0958c0-4b04-4d8e-9bbb-8dd838c9b966","Type":"ContainerStarted","Data":"3093cd081229a30765e5d0f27d983ebf97ecae6b4ecdd0b3fd3ee1351384efca"}
Feb 17 16:05:37 crc kubenswrapper[4672]: I0217 16:05:37.986441 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-s2n6p"
Feb 17 16:05:37 crc kubenswrapper[4672]: I0217 16:05:37.987792 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lxmvh" event={"ID":"5e8f5285-7002-4472-be6a-a21731ccaf67","Type":"ContainerStarted","Data":"6372dfa0f4a662561b9f1d86f29c7994f9532c7143e108b4b3159aecbc67242c"}
Feb 17 16:05:37 crc kubenswrapper[4672]: I0217 16:05:37.990028 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-lzwwl" event={"ID":"eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba","Type":"ContainerStarted","Data":"df822cda6bd69b92f453a490b2728044cc0e0f956e1c6ab84d71b7b778ee0288"}
Feb 17 16:05:37 crc kubenswrapper[4672]: I0217 16:05:37.993569 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-lzwwl" event={"ID":"eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba","Type":"ContainerStarted","Data":"946ee679460b2a0e7501c39e1997a0d6f799a110bff47e66390b21b915706ab5"}
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.022927 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 16:05:38 crc kubenswrapper[4672]: E0217 16:05:38.023600 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:38.523582877 +0000 UTC m=+147.277671609 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.023128 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6grzz" event={"ID":"43a5e5af-ba41-4a32-9893-1c17a54e7024","Type":"ContainerStarted","Data":"c2bff0ca3ca29cfd26fb3eb96e378676028d7e3c1aedcf5ef2dc93561568a8fd"}
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.023710 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6grzz"
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.023982 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7"
Feb 17 16:05:38 crc kubenswrapper[4672]: E0217 16:05:38.024599 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:38.524578514 +0000 UTC m=+147.278667246 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.028034 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-gcffc" podStartSLOduration=122.028021175 podStartE2EDuration="2m2.028021175s" podCreationTimestamp="2026-02-17 16:03:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:38.023335691 +0000 UTC m=+146.777424423" watchObservedRunningTime="2026-02-17 16:05:38.028021175 +0000 UTC m=+146.782109907"
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.028056 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g6m8r" event={"ID":"e0bef061-3829-41ea-926f-058de4404865","Type":"ContainerStarted","Data":"9b0186951aca1310abc10eeaf5537c08306c31bbed04096d58966964a696b1cf"}
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.037271 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bk22j" event={"ID":"2ed3c87a-d599-4e91-92ce-377ddef564da","Type":"ContainerStarted","Data":"994f5beba1593c7a76312740bff3f4e0fd815bb7e935f7bd0b28b9387dabdf02"}
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.038353 4672 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-bk22j container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body=
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.038388 4672 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-bk22j" podUID="2ed3c87a-d599-4e91-92ce-377ddef564da" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused"
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.049540 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8ggmr" event={"ID":"bbc8108a-8a2d-4f9a-af8f-335f0bf8ff6d","Type":"ContainerStarted","Data":"c1b9eb823f884c87ce89fa2e4ffd8ff2e37d49763e2a86698dc827a972586d2f"}
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.060157 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-v9lrm" event={"ID":"908f0c62-5b97-4c11-8b5d-6454f36295f6","Type":"ContainerStarted","Data":"31e7cab191208dc603445868bd147271650acea635b1d3b2ea19288227bddfc6"}
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.077893 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kfzvb"
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.078174 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kfzvb"
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.081819 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwl87"
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.082370 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bbvwk"
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.083739 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6grzz" podStartSLOduration=122.083721607 podStartE2EDuration="2m2.083721607s" podCreationTimestamp="2026-02-17 16:03:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:38.077742939 +0000 UTC m=+146.831831671" watchObservedRunningTime="2026-02-17 16:05:38.083721607 +0000 UTC m=+146.837810339"
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.087601 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-7p722"
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.097861 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bkxss"
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.127052 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 16:05:38 crc kubenswrapper[4672]: E0217 16:05:38.127670 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:38.627613037 +0000 UTC m=+147.381701769 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.128866 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7"
Feb 17 16:05:38 crc kubenswrapper[4672]: E0217 16:05:38.147258 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:38.647237555 +0000 UTC m=+147.401326287 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.235196 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 16:05:38 crc kubenswrapper[4672]: E0217 16:05:38.235298 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:38.735281922 +0000 UTC m=+147.489370654 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.235681 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7"
Feb 17 16:05:38 crc kubenswrapper[4672]: E0217 16:05:38.237489 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:38.73747998 +0000 UTC m=+147.491568712 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.244203 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-lzwwl"
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.244381 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-lzwwl"
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.247472 4672 patch_prober.go:28] interesting pod/apiserver-76f77b778f-lzwwl container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.12:8443/livez\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body=
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.247571 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-lzwwl" podUID="eb85ddd1-cd9b-4365-92ee-7e2f52f2cdba" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.12:8443/livez\": dial tcp 10.217.0.12:8443: connect: connection refused"
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.247969 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kfzvb"
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.255142 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-ltvxx" podStartSLOduration=121.255117797 podStartE2EDuration="2m1.255117797s" podCreationTimestamp="2026-02-17 16:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:38.122621295 +0000 UTC m=+146.876710027" watchObservedRunningTime="2026-02-17 16:05:38.255117797 +0000 UTC m=+147.009206539"
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.323781 4672 patch_prober.go:28] interesting pod/router-default-5444994796-vndpv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 16:05:38 crc kubenswrapper[4672]: [-]has-synced failed: reason withheld
Feb 17 16:05:38 crc kubenswrapper[4672]: [+]process-running ok
Feb 17 16:05:38 crc kubenswrapper[4672]: healthz check failed
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.324130 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vndpv" podUID="1d98488b-d521-4207-a7b8-23b37cb1ef98" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.338401 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 16:05:38 crc kubenswrapper[4672]: E0217 16:05:38.338719 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:38.838704806 +0000 UTC m=+147.592793538 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.359168 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8wlwl"
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.386273 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-lzwwl" podStartSLOduration=122.386255382 podStartE2EDuration="2m2.386255382s" podCreationTimestamp="2026-02-17 16:03:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:38.25712058 +0000 UTC m=+147.011209312" watchObservedRunningTime="2026-02-17 16:05:38.386255382 +0000 UTC m=+147.140344114"
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.440320 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7"
Feb 17 16:05:38 crc kubenswrapper[4672]: E0217 16:05:38.440638 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:38.94062681 +0000 UTC m=+147.694715542 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.488132 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8cvrf" podStartSLOduration=121.488109794 podStartE2EDuration="2m1.488109794s" podCreationTimestamp="2026-02-17 16:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:38.487854958 +0000 UTC m=+147.241943690" watchObservedRunningTime="2026-02-17 16:05:38.488109794 +0000 UTC m=+147.242198526"
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.488687 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lxmvh" podStartSLOduration=122.48868007 podStartE2EDuration="2m2.48868007s" podCreationTimestamp="2026-02-17 16:03:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:38.393017101 +0000 UTC m=+147.147105843" watchObservedRunningTime="2026-02-17 16:05:38.48868007 +0000 UTC m=+147.242768802"
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.536330 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q54wv" podStartSLOduration=121.536314399 podStartE2EDuration="2m1.536314399s" podCreationTimestamp="2026-02-17 16:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:38.52879914 +0000 UTC m=+147.282887872" watchObservedRunningTime="2026-02-17 16:05:38.536314399 +0000 UTC m=+147.290403121"
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.541793 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 16:05:38 crc kubenswrapper[4672]: E0217 16:05:38.542145 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:39.042129542 +0000 UTC m=+147.796218274 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.618380 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-s2n6p" podStartSLOduration=7.618363357 podStartE2EDuration="7.618363357s" podCreationTimestamp="2026-02-17 16:05:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:38.556167733 +0000 UTC m=+147.310256465" watchObservedRunningTime="2026-02-17 16:05:38.618363357 +0000 UTC m=+147.372452089"
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.643346 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7"
Feb 17 16:05:38 crc kubenswrapper[4672]: E0217 16:05:38.643888 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:39.143873381 +0000 UTC m=+147.897962113 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.745004 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 16:05:38 crc kubenswrapper[4672]: E0217 16:05:38.745986 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:39.24597069 +0000 UTC m=+148.000059422 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.816604 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8ggmr" podStartSLOduration=121.816590516 podStartE2EDuration="2m1.816590516s" podCreationTimestamp="2026-02-17 16:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:38.809430517 +0000 UTC m=+147.563519239" watchObservedRunningTime="2026-02-17 16:05:38.816590516 +0000 UTC m=+147.570679248"
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.836969 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vxnc7"]
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.838115 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vxnc7"
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.839894 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.848421 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7"
Feb 17 16:05:38 crc kubenswrapper[4672]: E0217 16:05:38.848934 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:39.34890107 +0000 UTC m=+148.102989802 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.876440 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-v9lrm" podStartSLOduration=121.876427078 podStartE2EDuration="2m1.876427078s" podCreationTimestamp="2026-02-17 16:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:38.852428303 +0000 UTC m=+147.606517035" watchObservedRunningTime="2026-02-17 16:05:38.876427078 +0000 UTC m=+147.630515810"
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.880159 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vxnc7"]
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.953605 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.954024 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db7fc0eb-2899-4c37-bf2e-30d02cbffb2c-utilities\") pod \"community-operators-vxnc7\" (UID: \"db7fc0eb-2899-4c37-bf2e-30d02cbffb2c\") " pod="openshift-marketplace/community-operators-vxnc7"
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.954144 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bjxk\" (UniqueName: \"kubernetes.io/projected/db7fc0eb-2899-4c37-bf2e-30d02cbffb2c-kube-api-access-5bjxk\") pod \"community-operators-vxnc7\" (UID: \"db7fc0eb-2899-4c37-bf2e-30d02cbffb2c\") " pod="openshift-marketplace/community-operators-vxnc7"
Feb 17 16:05:38 crc kubenswrapper[4672]: E0217 16:05:38.954348 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:39.454332427 +0000 UTC m=+148.208421159 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.954480 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7"
Feb 17 16:05:38 crc kubenswrapper[4672]: E0217 16:05:38.954855 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:39.45484795 +0000 UTC m=+148.208936682 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:38 crc kubenswrapper[4672]: I0217 16:05:38.955476 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db7fc0eb-2899-4c37-bf2e-30d02cbffb2c-catalog-content\") pod \"community-operators-vxnc7\" (UID: \"db7fc0eb-2899-4c37-bf2e-30d02cbffb2c\") " pod="openshift-marketplace/community-operators-vxnc7"
Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.001684 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wvksq"]
Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.003653 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wvksq"
Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.011563 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.018133 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wvksq"]
Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.056012 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 16:05:39 crc kubenswrapper[4672]: E0217 16:05:39.056209 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:39.556184829 +0000 UTC m=+148.310273561 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.056548 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bjxk\" (UniqueName: \"kubernetes.io/projected/db7fc0eb-2899-4c37-bf2e-30d02cbffb2c-kube-api-access-5bjxk\") pod \"community-operators-vxnc7\" (UID: \"db7fc0eb-2899-4c37-bf2e-30d02cbffb2c\") " pod="openshift-marketplace/community-operators-vxnc7"
Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.056682 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7"
Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.056799 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db7fc0eb-2899-4c37-bf2e-30d02cbffb2c-catalog-content\") pod \"community-operators-vxnc7\" (UID: \"db7fc0eb-2899-4c37-bf2e-30d02cbffb2c\") " pod="openshift-marketplace/community-operators-vxnc7"
Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.056918 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db7fc0eb-2899-4c37-bf2e-30d02cbffb2c-utilities\") pod \"community-operators-vxnc7\" (UID: \"db7fc0eb-2899-4c37-bf2e-30d02cbffb2c\") " pod="openshift-marketplace/community-operators-vxnc7"
Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.057386 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db7fc0eb-2899-4c37-bf2e-30d02cbffb2c-utilities\") pod \"community-operators-vxnc7\" (UID: \"db7fc0eb-2899-4c37-bf2e-30d02cbffb2c\") " pod="openshift-marketplace/community-operators-vxnc7"
Feb 17 16:05:39 crc kubenswrapper[4672]: E0217 16:05:39.057985 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:39.557973316 +0000 UTC m=+148.312062048 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.058404 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db7fc0eb-2899-4c37-bf2e-30d02cbffb2c-catalog-content\") pod \"community-operators-vxnc7\" (UID: \"db7fc0eb-2899-4c37-bf2e-30d02cbffb2c\") " pod="openshift-marketplace/community-operators-vxnc7"
Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.107822 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zgwq2" event={"ID":"29ca355c-84f8-434c-a892-d0d3c6c78c00","Type":"ContainerStarted","Data":"581ed5c905b58aa669b739cd62f0a62e9692348a1403631e4660b8fa4af3c12a"}
Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.111356 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-v9lrm"
Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.113233 4672 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-bk22j container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body=
Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.113418 4672 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-bk22j" podUID="2ed3c87a-d599-4e91-92ce-377ddef564da" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused"
Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.114838 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bjxk\" (UniqueName: \"kubernetes.io/projected/db7fc0eb-2899-4c37-bf2e-30d02cbffb2c-kube-api-access-5bjxk\") pod \"community-operators-vxnc7\" (UID: \"db7fc0eb-2899-4c37-bf2e-30d02cbffb2c\") " pod="openshift-marketplace/community-operators-vxnc7"
Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.119878 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kfzvb"
Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.139838 4672 patch_prober.go:28] interesting pod/downloads-7954f5f757-v9lrm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused"
start-of-body= Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.140174 4672 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-v9lrm" podUID="908f0c62-5b97-4c11-8b5d-6454f36295f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.158168 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.158626 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/708084b0-bae5-4cfc-ab45-cc5ca619f849-catalog-content\") pod \"certified-operators-wvksq\" (UID: \"708084b0-bae5-4cfc-ab45-cc5ca619f849\") " pod="openshift-marketplace/certified-operators-wvksq" Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.158736 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5zw5\" (UniqueName: \"kubernetes.io/projected/708084b0-bae5-4cfc-ab45-cc5ca619f849-kube-api-access-r5zw5\") pod \"certified-operators-wvksq\" (UID: \"708084b0-bae5-4cfc-ab45-cc5ca619f849\") " pod="openshift-marketplace/certified-operators-wvksq" Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.158892 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/708084b0-bae5-4cfc-ab45-cc5ca619f849-utilities\") pod \"certified-operators-wvksq\" (UID: \"708084b0-bae5-4cfc-ab45-cc5ca619f849\") " 
pod="openshift-marketplace/certified-operators-wvksq" Feb 17 16:05:39 crc kubenswrapper[4672]: E0217 16:05:39.159089 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:39.659075218 +0000 UTC m=+148.413163950 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.177096 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vxnc7" Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.213298 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2mj5d"] Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.214978 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2mj5d" Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.251261 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2mj5d"] Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.260714 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.261641 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/708084b0-bae5-4cfc-ab45-cc5ca619f849-catalog-content\") pod \"certified-operators-wvksq\" (UID: \"708084b0-bae5-4cfc-ab45-cc5ca619f849\") " pod="openshift-marketplace/certified-operators-wvksq" Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.262058 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5zw5\" (UniqueName: \"kubernetes.io/projected/708084b0-bae5-4cfc-ab45-cc5ca619f849-kube-api-access-r5zw5\") pod \"certified-operators-wvksq\" (UID: \"708084b0-bae5-4cfc-ab45-cc5ca619f849\") " pod="openshift-marketplace/certified-operators-wvksq" Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.262676 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/708084b0-bae5-4cfc-ab45-cc5ca619f849-utilities\") pod \"certified-operators-wvksq\" (UID: \"708084b0-bae5-4cfc-ab45-cc5ca619f849\") " pod="openshift-marketplace/certified-operators-wvksq" Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.263481 4672 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/708084b0-bae5-4cfc-ab45-cc5ca619f849-utilities\") pod \"certified-operators-wvksq\" (UID: \"708084b0-bae5-4cfc-ab45-cc5ca619f849\") " pod="openshift-marketplace/certified-operators-wvksq" Feb 17 16:05:39 crc kubenswrapper[4672]: E0217 16:05:39.275016 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:39.774993832 +0000 UTC m=+148.529082564 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.298722 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/708084b0-bae5-4cfc-ab45-cc5ca619f849-catalog-content\") pod \"certified-operators-wvksq\" (UID: \"708084b0-bae5-4cfc-ab45-cc5ca619f849\") " pod="openshift-marketplace/certified-operators-wvksq" Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.324980 4672 patch_prober.go:28] interesting pod/router-default-5444994796-vndpv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 16:05:39 crc kubenswrapper[4672]: [-]has-synced failed: reason withheld Feb 17 16:05:39 crc kubenswrapper[4672]: [+]process-running ok Feb 17 16:05:39 crc kubenswrapper[4672]: healthz check 
failed Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.325048 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vndpv" podUID="1d98488b-d521-4207-a7b8-23b37cb1ef98" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.355310 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5zw5\" (UniqueName: \"kubernetes.io/projected/708084b0-bae5-4cfc-ab45-cc5ca619f849-kube-api-access-r5zw5\") pod \"certified-operators-wvksq\" (UID: \"708084b0-bae5-4cfc-ab45-cc5ca619f849\") " pod="openshift-marketplace/certified-operators-wvksq" Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.364237 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.364708 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm2pf\" (UniqueName: \"kubernetes.io/projected/2217a413-541b-46bc-9563-b382fb9f090d-kube-api-access-zm2pf\") pod \"community-operators-2mj5d\" (UID: \"2217a413-541b-46bc-9563-b382fb9f090d\") " pod="openshift-marketplace/community-operators-2mj5d" Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.364758 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2217a413-541b-46bc-9563-b382fb9f090d-catalog-content\") pod \"community-operators-2mj5d\" (UID: \"2217a413-541b-46bc-9563-b382fb9f090d\") " pod="openshift-marketplace/community-operators-2mj5d" Feb 17 16:05:39 crc 
kubenswrapper[4672]: I0217 16:05:39.364789 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2217a413-541b-46bc-9563-b382fb9f090d-utilities\") pod \"community-operators-2mj5d\" (UID: \"2217a413-541b-46bc-9563-b382fb9f090d\") " pod="openshift-marketplace/community-operators-2mj5d" Feb 17 16:05:39 crc kubenswrapper[4672]: E0217 16:05:39.364927 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:39.864893927 +0000 UTC m=+148.618982659 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.424672 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d4qrd"] Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.425735 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d4qrd" Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.426741 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d4qrd"] Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.466264 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm2pf\" (UniqueName: \"kubernetes.io/projected/2217a413-541b-46bc-9563-b382fb9f090d-kube-api-access-zm2pf\") pod \"community-operators-2mj5d\" (UID: \"2217a413-541b-46bc-9563-b382fb9f090d\") " pod="openshift-marketplace/community-operators-2mj5d" Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.466306 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.466333 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2217a413-541b-46bc-9563-b382fb9f090d-catalog-content\") pod \"community-operators-2mj5d\" (UID: \"2217a413-541b-46bc-9563-b382fb9f090d\") " pod="openshift-marketplace/community-operators-2mj5d" Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.466360 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2217a413-541b-46bc-9563-b382fb9f090d-utilities\") pod \"community-operators-2mj5d\" (UID: \"2217a413-541b-46bc-9563-b382fb9f090d\") " pod="openshift-marketplace/community-operators-2mj5d" Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.466790 4672 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2217a413-541b-46bc-9563-b382fb9f090d-utilities\") pod \"community-operators-2mj5d\" (UID: \"2217a413-541b-46bc-9563-b382fb9f090d\") " pod="openshift-marketplace/community-operators-2mj5d" Feb 17 16:05:39 crc kubenswrapper[4672]: E0217 16:05:39.466845 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:39.966828731 +0000 UTC m=+148.720917463 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.467025 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2217a413-541b-46bc-9563-b382fb9f090d-catalog-content\") pod \"community-operators-2mj5d\" (UID: \"2217a413-541b-46bc-9563-b382fb9f090d\") " pod="openshift-marketplace/community-operators-2mj5d" Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.564969 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm2pf\" (UniqueName: \"kubernetes.io/projected/2217a413-541b-46bc-9563-b382fb9f090d-kube-api-access-zm2pf\") pod \"community-operators-2mj5d\" (UID: \"2217a413-541b-46bc-9563-b382fb9f090d\") " pod="openshift-marketplace/community-operators-2mj5d" Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.569335 4672 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.569497 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd92fc97-4e60-481b-8d9f-91642c614e48-catalog-content\") pod \"certified-operators-d4qrd\" (UID: \"fd92fc97-4e60-481b-8d9f-91642c614e48\") " pod="openshift-marketplace/certified-operators-d4qrd" Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.569551 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94q6r\" (UniqueName: \"kubernetes.io/projected/fd92fc97-4e60-481b-8d9f-91642c614e48-kube-api-access-94q6r\") pod \"certified-operators-d4qrd\" (UID: \"fd92fc97-4e60-481b-8d9f-91642c614e48\") " pod="openshift-marketplace/certified-operators-d4qrd" Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.569599 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd92fc97-4e60-481b-8d9f-91642c614e48-utilities\") pod \"certified-operators-d4qrd\" (UID: \"fd92fc97-4e60-481b-8d9f-91642c614e48\") " pod="openshift-marketplace/certified-operators-d4qrd" Feb 17 16:05:39 crc kubenswrapper[4672]: E0217 16:05:39.569708 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:40.06969306 +0000 UTC m=+148.823781792 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.628104 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2mj5d" Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.654955 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wvksq" Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.670915 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd92fc97-4e60-481b-8d9f-91642c614e48-utilities\") pod \"certified-operators-d4qrd\" (UID: \"fd92fc97-4e60-481b-8d9f-91642c614e48\") " pod="openshift-marketplace/certified-operators-d4qrd" Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.671011 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.671033 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.671068 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd92fc97-4e60-481b-8d9f-91642c614e48-catalog-content\") pod \"certified-operators-d4qrd\" (UID: \"fd92fc97-4e60-481b-8d9f-91642c614e48\") " pod="openshift-marketplace/certified-operators-d4qrd" Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.671092 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.671109 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94q6r\" (UniqueName: \"kubernetes.io/projected/fd92fc97-4e60-481b-8d9f-91642c614e48-kube-api-access-94q6r\") pod \"certified-operators-d4qrd\" (UID: \"fd92fc97-4e60-481b-8d9f-91642c614e48\") " pod="openshift-marketplace/certified-operators-d4qrd" Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.672263 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd92fc97-4e60-481b-8d9f-91642c614e48-utilities\") pod \"certified-operators-d4qrd\" (UID: \"fd92fc97-4e60-481b-8d9f-91642c614e48\") " pod="openshift-marketplace/certified-operators-d4qrd" Feb 17 16:05:39 crc kubenswrapper[4672]: E0217 16:05:39.672483 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2026-02-17 16:05:40.172436375 +0000 UTC m=+148.926525107 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.672559 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd92fc97-4e60-481b-8d9f-91642c614e48-catalog-content\") pod \"certified-operators-d4qrd\" (UID: \"fd92fc97-4e60-481b-8d9f-91642c614e48\") " pod="openshift-marketplace/certified-operators-d4qrd" Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.676575 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.684161 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.701055 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-6grzz" Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.711815 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94q6r\" (UniqueName: \"kubernetes.io/projected/fd92fc97-4e60-481b-8d9f-91642c614e48-kube-api-access-94q6r\") pod \"certified-operators-d4qrd\" (UID: \"fd92fc97-4e60-481b-8d9f-91642c614e48\") " pod="openshift-marketplace/certified-operators-d4qrd" Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.739798 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-gcffc" Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.742771 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d4qrd" Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.777069 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.777254 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.777302 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod 
\"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:05:39 crc kubenswrapper[4672]: E0217 16:05:39.777984 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:40.277956784 +0000 UTC m=+149.032045516 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.802300 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.808193 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.861709 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/community-operators-vxnc7"] Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.878719 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" Feb 17 16:05:39 crc kubenswrapper[4672]: E0217 16:05:39.879055 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:40.379044875 +0000 UTC m=+149.133133607 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.968767 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.983049 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:39 crc kubenswrapper[4672]: E0217 16:05:39.983387 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:40.483372223 +0000 UTC m=+149.237460955 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:39 crc kubenswrapper[4672]: I0217 16:05:39.983960 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 16:05:40 crc kubenswrapper[4672]: I0217 16:05:40.084462 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" Feb 17 16:05:40 crc kubenswrapper[4672]: E0217 16:05:40.085873 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:40.585857511 +0000 UTC m=+149.339946233 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:40 crc kubenswrapper[4672]: I0217 16:05:40.086042 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:05:40 crc kubenswrapper[4672]: I0217 16:05:40.186978 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:40 crc kubenswrapper[4672]: E0217 16:05:40.187298 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:40.687282912 +0000 UTC m=+149.441371644 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:40 crc kubenswrapper[4672]: I0217 16:05:40.221007 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zgwq2" event={"ID":"29ca355c-84f8-434c-a892-d0d3c6c78c00","Type":"ContainerStarted","Data":"66d7dd3c874f3f6a3d11a8e6e1d656e7fe51d493c76111e01429d9d6681414f3"} Feb 17 16:05:40 crc kubenswrapper[4672]: I0217 16:05:40.242534 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxnc7" event={"ID":"db7fc0eb-2899-4c37-bf2e-30d02cbffb2c","Type":"ContainerStarted","Data":"222782152d1da94aaa5f20349999704de626197a46e88a664a414272eca82da0"} Feb 17 16:05:40 crc 
kubenswrapper[4672]: I0217 16:05:40.245473 4672 patch_prober.go:28] interesting pod/downloads-7954f5f757-v9lrm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Feb 17 16:05:40 crc kubenswrapper[4672]: I0217 16:05:40.245507 4672 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-v9lrm" podUID="908f0c62-5b97-4c11-8b5d-6454f36295f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" Feb 17 16:05:40 crc kubenswrapper[4672]: I0217 16:05:40.298666 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" Feb 17 16:05:40 crc kubenswrapper[4672]: E0217 16:05:40.303355 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:40.803340629 +0000 UTC m=+149.557429361 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:40 crc kubenswrapper[4672]: I0217 16:05:40.347313 4672 patch_prober.go:28] interesting pod/router-default-5444994796-vndpv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 16:05:40 crc kubenswrapper[4672]: [-]has-synced failed: reason withheld Feb 17 16:05:40 crc kubenswrapper[4672]: [+]process-running ok Feb 17 16:05:40 crc kubenswrapper[4672]: healthz check failed Feb 17 16:05:40 crc kubenswrapper[4672]: I0217 16:05:40.347362 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vndpv" podUID="1d98488b-d521-4207-a7b8-23b37cb1ef98" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 16:05:40 crc kubenswrapper[4672]: I0217 16:05:40.400487 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:40 crc kubenswrapper[4672]: E0217 16:05:40.400780 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 16:05:40.900764183 +0000 UTC m=+149.654852915 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:40 crc kubenswrapper[4672]: I0217 16:05:40.470373 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d4qrd"] Feb 17 16:05:40 crc kubenswrapper[4672]: I0217 16:05:40.477825 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wvksq"] Feb 17 16:05:40 crc kubenswrapper[4672]: I0217 16:05:40.504251 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" Feb 17 16:05:40 crc kubenswrapper[4672]: E0217 16:05:40.504579 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:41.004568267 +0000 UTC m=+149.758656999 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:40 crc kubenswrapper[4672]: W0217 16:05:40.512924 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd92fc97_4e60_481b_8d9f_91642c614e48.slice/crio-75b516eb1355d4dbab7495cd227252df9c45fc1de1a3bb6891c47e0456f39baa WatchSource:0}: Error finding container 75b516eb1355d4dbab7495cd227252df9c45fc1de1a3bb6891c47e0456f39baa: Status 404 returned error can't find the container with id 75b516eb1355d4dbab7495cd227252df9c45fc1de1a3bb6891c47e0456f39baa Feb 17 16:05:40 crc kubenswrapper[4672]: I0217 16:05:40.609683 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:40 crc kubenswrapper[4672]: E0217 16:05:40.610311 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:41.110295641 +0000 UTC m=+149.864384373 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:40 crc kubenswrapper[4672]: I0217 16:05:40.666684 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 17 16:05:40 crc kubenswrapper[4672]: I0217 16:05:40.667271 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 16:05:40 crc kubenswrapper[4672]: I0217 16:05:40.680071 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 17 16:05:40 crc kubenswrapper[4672]: I0217 16:05:40.680280 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 17 16:05:40 crc kubenswrapper[4672]: I0217 16:05:40.688006 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 17 16:05:40 crc kubenswrapper[4672]: I0217 16:05:40.713783 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c149acc1-b05c-4919-8deb-2b0d7d9d90b9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c149acc1-b05c-4919-8deb-2b0d7d9d90b9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 16:05:40 crc kubenswrapper[4672]: I0217 16:05:40.713857 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" Feb 17 16:05:40 crc kubenswrapper[4672]: I0217 16:05:40.713883 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c149acc1-b05c-4919-8deb-2b0d7d9d90b9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c149acc1-b05c-4919-8deb-2b0d7d9d90b9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 16:05:40 crc kubenswrapper[4672]: E0217 16:05:40.714148 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:41.214136135 +0000 UTC m=+149.968224857 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:40 crc kubenswrapper[4672]: I0217 16:05:40.748257 4672 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 17 16:05:40 crc kubenswrapper[4672]: I0217 16:05:40.793768 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l98cc"] Feb 17 16:05:40 crc kubenswrapper[4672]: I0217 16:05:40.803363 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l98cc" Feb 17 16:05:40 crc kubenswrapper[4672]: I0217 16:05:40.805263 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l98cc"] Feb 17 16:05:40 crc kubenswrapper[4672]: I0217 16:05:40.808867 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 17 16:05:40 crc kubenswrapper[4672]: I0217 16:05:40.815697 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:40 crc kubenswrapper[4672]: I0217 16:05:40.821219 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/c149acc1-b05c-4919-8deb-2b0d7d9d90b9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c149acc1-b05c-4919-8deb-2b0d7d9d90b9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 16:05:40 crc kubenswrapper[4672]: I0217 16:05:40.821395 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c149acc1-b05c-4919-8deb-2b0d7d9d90b9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c149acc1-b05c-4919-8deb-2b0d7d9d90b9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 16:05:40 crc kubenswrapper[4672]: I0217 16:05:40.821680 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c149acc1-b05c-4919-8deb-2b0d7d9d90b9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c149acc1-b05c-4919-8deb-2b0d7d9d90b9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 16:05:40 crc kubenswrapper[4672]: E0217 16:05:40.821879 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:41.321851272 +0000 UTC m=+150.075940004 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:40 crc kubenswrapper[4672]: I0217 16:05:40.846414 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c149acc1-b05c-4919-8deb-2b0d7d9d90b9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c149acc1-b05c-4919-8deb-2b0d7d9d90b9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 16:05:40 crc kubenswrapper[4672]: I0217 16:05:40.922080 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zc5l\" (UniqueName: \"kubernetes.io/projected/505bfe60-cd7c-4bd6-981a-c14076ef5387-kube-api-access-9zc5l\") pod \"redhat-marketplace-l98cc\" (UID: \"505bfe60-cd7c-4bd6-981a-c14076ef5387\") " pod="openshift-marketplace/redhat-marketplace-l98cc" Feb 17 16:05:40 crc kubenswrapper[4672]: I0217 16:05:40.922339 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/505bfe60-cd7c-4bd6-981a-c14076ef5387-utilities\") pod \"redhat-marketplace-l98cc\" (UID: \"505bfe60-cd7c-4bd6-981a-c14076ef5387\") " pod="openshift-marketplace/redhat-marketplace-l98cc" Feb 17 16:05:40 crc kubenswrapper[4672]: I0217 16:05:40.922370 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" Feb 17 16:05:40 crc kubenswrapper[4672]: I0217 16:05:40.922397 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/505bfe60-cd7c-4bd6-981a-c14076ef5387-catalog-content\") pod \"redhat-marketplace-l98cc\" (UID: \"505bfe60-cd7c-4bd6-981a-c14076ef5387\") " pod="openshift-marketplace/redhat-marketplace-l98cc" Feb 17 16:05:40 crc kubenswrapper[4672]: E0217 16:05:40.922729 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:41.422716438 +0000 UTC m=+150.176805170 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.024032 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.024594 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zc5l\" (UniqueName: 
\"kubernetes.io/projected/505bfe60-cd7c-4bd6-981a-c14076ef5387-kube-api-access-9zc5l\") pod \"redhat-marketplace-l98cc\" (UID: \"505bfe60-cd7c-4bd6-981a-c14076ef5387\") " pod="openshift-marketplace/redhat-marketplace-l98cc" Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.024651 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/505bfe60-cd7c-4bd6-981a-c14076ef5387-utilities\") pod \"redhat-marketplace-l98cc\" (UID: \"505bfe60-cd7c-4bd6-981a-c14076ef5387\") " pod="openshift-marketplace/redhat-marketplace-l98cc" Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.024691 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/505bfe60-cd7c-4bd6-981a-c14076ef5387-catalog-content\") pod \"redhat-marketplace-l98cc\" (UID: \"505bfe60-cd7c-4bd6-981a-c14076ef5387\") " pod="openshift-marketplace/redhat-marketplace-l98cc" Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.025137 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/505bfe60-cd7c-4bd6-981a-c14076ef5387-catalog-content\") pod \"redhat-marketplace-l98cc\" (UID: \"505bfe60-cd7c-4bd6-981a-c14076ef5387\") " pod="openshift-marketplace/redhat-marketplace-l98cc" Feb 17 16:05:41 crc kubenswrapper[4672]: E0217 16:05:41.025259 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 16:05:41.525242388 +0000 UTC m=+150.279331120 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.025961 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/505bfe60-cd7c-4bd6-981a-c14076ef5387-utilities\") pod \"redhat-marketplace-l98cc\" (UID: \"505bfe60-cd7c-4bd6-981a-c14076ef5387\") " pod="openshift-marketplace/redhat-marketplace-l98cc" Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.053499 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zc5l\" (UniqueName: \"kubernetes.io/projected/505bfe60-cd7c-4bd6-981a-c14076ef5387-kube-api-access-9zc5l\") pod \"redhat-marketplace-l98cc\" (UID: \"505bfe60-cd7c-4bd6-981a-c14076ef5387\") " pod="openshift-marketplace/redhat-marketplace-l98cc" Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.059634 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.062606 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2mj5d"] Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.125767 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" Feb 17 16:05:41 crc kubenswrapper[4672]: E0217 16:05:41.126111 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 16:05:41.626098643 +0000 UTC m=+150.380187375 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lnsj7" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.163048 4672 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-17T16:05:40.748280348Z","Handler":null,"Name":""} Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.175781 4672 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.175814 4672 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.185424 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-skvcq"] Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.186335 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-skvcq" Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.211646 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-skvcq"] Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.226443 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.226498 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l98cc" Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.226773 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a14c1588-0007-41ea-b334-f2bc0b2a5587-utilities\") pod \"redhat-marketplace-skvcq\" (UID: \"a14c1588-0007-41ea-b334-f2bc0b2a5587\") " pod="openshift-marketplace/redhat-marketplace-skvcq" Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.226800 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a14c1588-0007-41ea-b334-f2bc0b2a5587-catalog-content\") pod \"redhat-marketplace-skvcq\" (UID: \"a14c1588-0007-41ea-b334-f2bc0b2a5587\") " pod="openshift-marketplace/redhat-marketplace-skvcq" Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.226821 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cvrz\" (UniqueName: \"kubernetes.io/projected/a14c1588-0007-41ea-b334-f2bc0b2a5587-kube-api-access-7cvrz\") pod \"redhat-marketplace-skvcq\" (UID: 
\"a14c1588-0007-41ea-b334-f2bc0b2a5587\") " pod="openshift-marketplace/redhat-marketplace-skvcq" Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.236525 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.265688 4672 generic.go:334] "Generic (PLEG): container finished" podID="db7fc0eb-2899-4c37-bf2e-30d02cbffb2c" containerID="f2396035ff02b1dd27bf937c4f39c840fc8601c3ee37d7b4752cd18626c58a62" exitCode=0 Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.265746 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxnc7" event={"ID":"db7fc0eb-2899-4c37-bf2e-30d02cbffb2c","Type":"ContainerDied","Data":"f2396035ff02b1dd27bf937c4f39c840fc8601c3ee37d7b4752cd18626c58a62"} Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.267087 4672 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.271169 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"4b8b876d791bd49352b5296779034037cf0f68d2c66b304322ac35b5d1eda8e9"} Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.271198 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"279a89bca1345384e5f018ec74f13eb22e2085fd0ad2d50dbb8b5e8d501312d7"} Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.272653 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.315722 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zgwq2" event={"ID":"29ca355c-84f8-434c-a892-d0d3c6c78c00","Type":"ContainerStarted","Data":"30759d9144428a0e8e3c5eb236fe780c821f5fec47a372da2949d5de7da580cc"} Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.321114 4672 patch_prober.go:28] interesting pod/router-default-5444994796-vndpv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 16:05:41 crc kubenswrapper[4672]: [-]has-synced failed: reason withheld Feb 17 16:05:41 crc kubenswrapper[4672]: [+]process-running ok Feb 17 16:05:41 crc kubenswrapper[4672]: healthz check failed Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.321156 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vndpv" podUID="1d98488b-d521-4207-a7b8-23b37cb1ef98" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.327956 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a14c1588-0007-41ea-b334-f2bc0b2a5587-utilities\") pod \"redhat-marketplace-skvcq\" (UID: \"a14c1588-0007-41ea-b334-f2bc0b2a5587\") " pod="openshift-marketplace/redhat-marketplace-skvcq" Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.328000 4672 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a14c1588-0007-41ea-b334-f2bc0b2a5587-catalog-content\") pod \"redhat-marketplace-skvcq\" (UID: \"a14c1588-0007-41ea-b334-f2bc0b2a5587\") " pod="openshift-marketplace/redhat-marketplace-skvcq" Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.328021 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cvrz\" (UniqueName: \"kubernetes.io/projected/a14c1588-0007-41ea-b334-f2bc0b2a5587-kube-api-access-7cvrz\") pod \"redhat-marketplace-skvcq\" (UID: \"a14c1588-0007-41ea-b334-f2bc0b2a5587\") " pod="openshift-marketplace/redhat-marketplace-skvcq" Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.328104 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.328757 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a14c1588-0007-41ea-b334-f2bc0b2a5587-utilities\") pod \"redhat-marketplace-skvcq\" (UID: \"a14c1588-0007-41ea-b334-f2bc0b2a5587\") " pod="openshift-marketplace/redhat-marketplace-skvcq" Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.328966 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a14c1588-0007-41ea-b334-f2bc0b2a5587-catalog-content\") pod \"redhat-marketplace-skvcq\" (UID: \"a14c1588-0007-41ea-b334-f2bc0b2a5587\") " pod="openshift-marketplace/redhat-marketplace-skvcq" Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.341826 4672 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"ab86ad40c3522733d5745d6a2535634612497d10bdc565f4f14dbbc869b64c53"} Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.341873 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"4cbf960810309086f2a8ff957615bdf5a28749310b654d67b104afb680023f5d"} Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.347001 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2mj5d" event={"ID":"2217a413-541b-46bc-9563-b382fb9f090d","Type":"ContainerStarted","Data":"64b01acef81d24af7e432554023b6f6d34aec97a1210dfac755957445239e650"} Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.348431 4672 generic.go:334] "Generic (PLEG): container finished" podID="708084b0-bae5-4cfc-ab45-cc5ca619f849" containerID="8e44341433926e5449b1c01c36ccbef82e6d772b6a78c5bac34c1cf02db9143c" exitCode=0 Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.348501 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wvksq" event={"ID":"708084b0-bae5-4cfc-ab45-cc5ca619f849","Type":"ContainerDied","Data":"8e44341433926e5449b1c01c36ccbef82e6d772b6a78c5bac34c1cf02db9143c"} Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.348540 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wvksq" event={"ID":"708084b0-bae5-4cfc-ab45-cc5ca619f849","Type":"ContainerStarted","Data":"0a2922d54675109365da6a9bb1d000eb6f34624d4bcfe1a4a6b22a7ed5c5a72a"} Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.363580 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="hostpath-provisioner/csi-hostpathplugin-zgwq2" podStartSLOduration=10.363562919 podStartE2EDuration="10.363562919s" podCreationTimestamp="2026-02-17 16:05:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:41.358886786 +0000 UTC m=+150.112975518" watchObservedRunningTime="2026-02-17 16:05:41.363562919 +0000 UTC m=+150.117651651" Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.370445 4672 generic.go:334] "Generic (PLEG): container finished" podID="fd92fc97-4e60-481b-8d9f-91642c614e48" containerID="f06cfd169f60f3b2f78b261423cfd215914b3f1ab4870bcfe93428bd995a4804" exitCode=0 Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.370536 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4qrd" event={"ID":"fd92fc97-4e60-481b-8d9f-91642c614e48","Type":"ContainerDied","Data":"f06cfd169f60f3b2f78b261423cfd215914b3f1ab4870bcfe93428bd995a4804"} Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.370566 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4qrd" event={"ID":"fd92fc97-4e60-481b-8d9f-91642c614e48","Type":"ContainerStarted","Data":"75b516eb1355d4dbab7495cd227252df9c45fc1de1a3bb6891c47e0456f39baa"} Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.395071 4672 generic.go:334] "Generic (PLEG): container finished" podID="2180d9e0-3678-4bdd-84aa-0dba230aa4e3" containerID="05f802e5ab0a5bcc44ea9f95953f50154f395bfe8d4a7775d34ed6c635f654c3" exitCode=0 Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.395197 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-b79hz" event={"ID":"2180d9e0-3678-4bdd-84aa-0dba230aa4e3","Type":"ContainerDied","Data":"05f802e5ab0a5bcc44ea9f95953f50154f395bfe8d4a7775d34ed6c635f654c3"} Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 
16:05:41.413374 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cvrz\" (UniqueName: \"kubernetes.io/projected/a14c1588-0007-41ea-b334-f2bc0b2a5587-kube-api-access-7cvrz\") pod \"redhat-marketplace-skvcq\" (UID: \"a14c1588-0007-41ea-b334-f2bc0b2a5587\") " pod="openshift-marketplace/redhat-marketplace-skvcq" Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.413395 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"9cacda96a712a4d2b9aec8423b2e86fc1727871f599a7525aa9de061cebc5590"} Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.413426 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"2d52e88fb847bddb376482fc4c0b2c9a563db1d6e5bfd037a9ff6f6b088244e0"} Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.468629 4672 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.468986 4672 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.470587 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.502094 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lnsj7\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.503695 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-skvcq" Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.631209 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l98cc"] Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.720874 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.729553 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-skvcq"] Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.956688 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 17 16:05:41 crc kubenswrapper[4672]: W0217 16:05:41.959999 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f278b6e_e162_448c_b57c_b3e66a6b0e5e.slice/crio-552340a77d779e2fa3f0de31a19bbb26e8a3aac539c01f6da71fdf2824815137 WatchSource:0}: Error finding container 552340a77d779e2fa3f0de31a19bbb26e8a3aac539c01f6da71fdf2824815137: Status 404 returned error can't find the container with id 552340a77d779e2fa3f0de31a19bbb26e8a3aac539c01f6da71fdf2824815137 Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.965682 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lnsj7"] Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.987691 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nd8vd"] Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.988902 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nd8vd" Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.991793 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 17 16:05:41 crc kubenswrapper[4672]: I0217 16:05:41.998104 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nd8vd"] Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.038030 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz2v9\" (UniqueName: \"kubernetes.io/projected/028c8d9b-9bd5-4cf5-9628-849e8b5aacaf-kube-api-access-bz2v9\") pod \"redhat-operators-nd8vd\" (UID: \"028c8d9b-9bd5-4cf5-9628-849e8b5aacaf\") " pod="openshift-marketplace/redhat-operators-nd8vd" Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.038110 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/028c8d9b-9bd5-4cf5-9628-849e8b5aacaf-utilities\") pod \"redhat-operators-nd8vd\" (UID: \"028c8d9b-9bd5-4cf5-9628-849e8b5aacaf\") " pod="openshift-marketplace/redhat-operators-nd8vd" Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.038145 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/028c8d9b-9bd5-4cf5-9628-849e8b5aacaf-catalog-content\") pod \"redhat-operators-nd8vd\" (UID: \"028c8d9b-9bd5-4cf5-9628-849e8b5aacaf\") " pod="openshift-marketplace/redhat-operators-nd8vd" Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.138724 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/028c8d9b-9bd5-4cf5-9628-849e8b5aacaf-utilities\") pod \"redhat-operators-nd8vd\" (UID: \"028c8d9b-9bd5-4cf5-9628-849e8b5aacaf\") " 
pod="openshift-marketplace/redhat-operators-nd8vd" Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.138770 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/028c8d9b-9bd5-4cf5-9628-849e8b5aacaf-catalog-content\") pod \"redhat-operators-nd8vd\" (UID: \"028c8d9b-9bd5-4cf5-9628-849e8b5aacaf\") " pod="openshift-marketplace/redhat-operators-nd8vd" Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.138813 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz2v9\" (UniqueName: \"kubernetes.io/projected/028c8d9b-9bd5-4cf5-9628-849e8b5aacaf-kube-api-access-bz2v9\") pod \"redhat-operators-nd8vd\" (UID: \"028c8d9b-9bd5-4cf5-9628-849e8b5aacaf\") " pod="openshift-marketplace/redhat-operators-nd8vd" Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.139312 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/028c8d9b-9bd5-4cf5-9628-849e8b5aacaf-utilities\") pod \"redhat-operators-nd8vd\" (UID: \"028c8d9b-9bd5-4cf5-9628-849e8b5aacaf\") " pod="openshift-marketplace/redhat-operators-nd8vd" Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.142274 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/028c8d9b-9bd5-4cf5-9628-849e8b5aacaf-catalog-content\") pod \"redhat-operators-nd8vd\" (UID: \"028c8d9b-9bd5-4cf5-9628-849e8b5aacaf\") " pod="openshift-marketplace/redhat-operators-nd8vd" Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.173436 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz2v9\" (UniqueName: \"kubernetes.io/projected/028c8d9b-9bd5-4cf5-9628-849e8b5aacaf-kube-api-access-bz2v9\") pod \"redhat-operators-nd8vd\" (UID: \"028c8d9b-9bd5-4cf5-9628-849e8b5aacaf\") " pod="openshift-marketplace/redhat-operators-nd8vd" Feb 
17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.321389 4672 patch_prober.go:28] interesting pod/router-default-5444994796-vndpv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 16:05:42 crc kubenswrapper[4672]: [-]has-synced failed: reason withheld Feb 17 16:05:42 crc kubenswrapper[4672]: [+]process-running ok Feb 17 16:05:42 crc kubenswrapper[4672]: healthz check failed Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.321920 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vndpv" podUID="1d98488b-d521-4207-a7b8-23b37cb1ef98" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.324929 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nd8vd" Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.389463 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w2tc4"] Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.390390 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w2tc4" Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.404416 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w2tc4"] Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.441656 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfjfs\" (UniqueName: \"kubernetes.io/projected/f07446c8-4550-461c-a53d-c1d4bd056cfd-kube-api-access-rfjfs\") pod \"redhat-operators-w2tc4\" (UID: \"f07446c8-4550-461c-a53d-c1d4bd056cfd\") " pod="openshift-marketplace/redhat-operators-w2tc4" Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.441710 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f07446c8-4550-461c-a53d-c1d4bd056cfd-utilities\") pod \"redhat-operators-w2tc4\" (UID: \"f07446c8-4550-461c-a53d-c1d4bd056cfd\") " pod="openshift-marketplace/redhat-operators-w2tc4" Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.441740 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f07446c8-4550-461c-a53d-c1d4bd056cfd-catalog-content\") pod \"redhat-operators-w2tc4\" (UID: \"f07446c8-4550-461c-a53d-c1d4bd056cfd\") " pod="openshift-marketplace/redhat-operators-w2tc4" Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.477927 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c149acc1-b05c-4919-8deb-2b0d7d9d90b9","Type":"ContainerStarted","Data":"312f5170c1d4acfcd6e26df7ad6ce688a9b58ba317674ecdb5d7756e5c501367"} Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.478502 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"c149acc1-b05c-4919-8deb-2b0d7d9d90b9","Type":"ContainerStarted","Data":"44dca774d9611817e8b18c221cbedb14f2bb5e95d28bdb52211fb569ee1987f8"} Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.480442 4672 generic.go:334] "Generic (PLEG): container finished" podID="a14c1588-0007-41ea-b334-f2bc0b2a5587" containerID="aac792017eb8d9dee5a920abc92512584f448bd2ba2d8261f47e6c6062805856" exitCode=0 Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.480533 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-skvcq" event={"ID":"a14c1588-0007-41ea-b334-f2bc0b2a5587","Type":"ContainerDied","Data":"aac792017eb8d9dee5a920abc92512584f448bd2ba2d8261f47e6c6062805856"} Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.480558 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-skvcq" event={"ID":"a14c1588-0007-41ea-b334-f2bc0b2a5587","Type":"ContainerStarted","Data":"236a4a560b59b58b6757c4b4134eeee2f064f8083b7c5b2e0465ae37aa29a93c"} Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.483097 4672 generic.go:334] "Generic (PLEG): container finished" podID="505bfe60-cd7c-4bd6-981a-c14076ef5387" containerID="5da309d255743a05c224bccd21910188e4ec29791e044f3d15ffb336e2c43d29" exitCode=0 Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.483173 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l98cc" event={"ID":"505bfe60-cd7c-4bd6-981a-c14076ef5387","Type":"ContainerDied","Data":"5da309d255743a05c224bccd21910188e4ec29791e044f3d15ffb336e2c43d29"} Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.483199 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l98cc" event={"ID":"505bfe60-cd7c-4bd6-981a-c14076ef5387","Type":"ContainerStarted","Data":"727715973d29ce2d5ce3c22ad26343a7fc530c19d18ad238c267ca8336d54458"} Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 
16:05:42.493669 4672 generic.go:334] "Generic (PLEG): container finished" podID="2217a413-541b-46bc-9563-b382fb9f090d" containerID="1d4166387646f9d0f5420811fdee06b68ba62ba08f69be9654edf224da14c7f7" exitCode=0 Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.496178 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2mj5d" event={"ID":"2217a413-541b-46bc-9563-b382fb9f090d","Type":"ContainerDied","Data":"1d4166387646f9d0f5420811fdee06b68ba62ba08f69be9654edf224da14c7f7"} Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.500825 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" event={"ID":"5f278b6e-e162-448c-b57c-b3e66a6b0e5e","Type":"ContainerStarted","Data":"c9a1429a2be36ea34dff64d7a9469eec05267cb713aa011ccaf924d7be19e709"} Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.501204 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.501243 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" event={"ID":"5f278b6e-e162-448c-b57c-b3e66a6b0e5e","Type":"ContainerStarted","Data":"552340a77d779e2fa3f0de31a19bbb26e8a3aac539c01f6da71fdf2824815137"} Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.519911 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.519890461 podStartE2EDuration="2.519890461s" podCreationTimestamp="2026-02-17 16:05:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:42.505339756 +0000 UTC m=+151.259428488" watchObservedRunningTime="2026-02-17 16:05:42.519890461 +0000 UTC m=+151.273979193" Feb 17 16:05:42 crc 
kubenswrapper[4672]: I0217 16:05:42.540085 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" podStartSLOduration=125.540061594 podStartE2EDuration="2m5.540061594s" podCreationTimestamp="2026-02-17 16:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:42.524113062 +0000 UTC m=+151.278201784" watchObservedRunningTime="2026-02-17 16:05:42.540061594 +0000 UTC m=+151.294150326" Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.542468 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfjfs\" (UniqueName: \"kubernetes.io/projected/f07446c8-4550-461c-a53d-c1d4bd056cfd-kube-api-access-rfjfs\") pod \"redhat-operators-w2tc4\" (UID: \"f07446c8-4550-461c-a53d-c1d4bd056cfd\") " pod="openshift-marketplace/redhat-operators-w2tc4" Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.548150 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f07446c8-4550-461c-a53d-c1d4bd056cfd-utilities\") pod \"redhat-operators-w2tc4\" (UID: \"f07446c8-4550-461c-a53d-c1d4bd056cfd\") " pod="openshift-marketplace/redhat-operators-w2tc4" Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.548202 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f07446c8-4550-461c-a53d-c1d4bd056cfd-catalog-content\") pod \"redhat-operators-w2tc4\" (UID: \"f07446c8-4550-461c-a53d-c1d4bd056cfd\") " pod="openshift-marketplace/redhat-operators-w2tc4" Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.549470 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f07446c8-4550-461c-a53d-c1d4bd056cfd-catalog-content\") pod 
\"redhat-operators-w2tc4\" (UID: \"f07446c8-4550-461c-a53d-c1d4bd056cfd\") " pod="openshift-marketplace/redhat-operators-w2tc4" Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.551042 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f07446c8-4550-461c-a53d-c1d4bd056cfd-utilities\") pod \"redhat-operators-w2tc4\" (UID: \"f07446c8-4550-461c-a53d-c1d4bd056cfd\") " pod="openshift-marketplace/redhat-operators-w2tc4" Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.576334 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfjfs\" (UniqueName: \"kubernetes.io/projected/f07446c8-4550-461c-a53d-c1d4bd056cfd-kube-api-access-rfjfs\") pod \"redhat-operators-w2tc4\" (UID: \"f07446c8-4550-461c-a53d-c1d4bd056cfd\") " pod="openshift-marketplace/redhat-operators-w2tc4" Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.602718 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nd8vd"] Feb 17 16:05:42 crc kubenswrapper[4672]: W0217 16:05:42.657730 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod028c8d9b_9bd5_4cf5_9628_849e8b5aacaf.slice/crio-30e115772276cf76f98239cbf3d5b791aecaa890069de1bf99a8704d568e0d46 WatchSource:0}: Error finding container 30e115772276cf76f98239cbf3d5b791aecaa890069de1bf99a8704d568e0d46: Status 404 returned error can't find the container with id 30e115772276cf76f98239cbf3d5b791aecaa890069de1bf99a8704d568e0d46 Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.708383 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w2tc4" Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.827752 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-b79hz" Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.956824 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2180d9e0-3678-4bdd-84aa-0dba230aa4e3-config-volume\") pod \"2180d9e0-3678-4bdd-84aa-0dba230aa4e3\" (UID: \"2180d9e0-3678-4bdd-84aa-0dba230aa4e3\") " Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.957626 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnmdd\" (UniqueName: \"kubernetes.io/projected/2180d9e0-3678-4bdd-84aa-0dba230aa4e3-kube-api-access-dnmdd\") pod \"2180d9e0-3678-4bdd-84aa-0dba230aa4e3\" (UID: \"2180d9e0-3678-4bdd-84aa-0dba230aa4e3\") " Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.957765 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2180d9e0-3678-4bdd-84aa-0dba230aa4e3-secret-volume\") pod \"2180d9e0-3678-4bdd-84aa-0dba230aa4e3\" (UID: \"2180d9e0-3678-4bdd-84aa-0dba230aa4e3\") " Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.959591 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2180d9e0-3678-4bdd-84aa-0dba230aa4e3-config-volume" (OuterVolumeSpecName: "config-volume") pod "2180d9e0-3678-4bdd-84aa-0dba230aa4e3" (UID: "2180d9e0-3678-4bdd-84aa-0dba230aa4e3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.964493 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2180d9e0-3678-4bdd-84aa-0dba230aa4e3-kube-api-access-dnmdd" (OuterVolumeSpecName: "kube-api-access-dnmdd") pod "2180d9e0-3678-4bdd-84aa-0dba230aa4e3" (UID: "2180d9e0-3678-4bdd-84aa-0dba230aa4e3"). 
InnerVolumeSpecName "kube-api-access-dnmdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:05:42 crc kubenswrapper[4672]: I0217 16:05:42.981842 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2180d9e0-3678-4bdd-84aa-0dba230aa4e3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2180d9e0-3678-4bdd-84aa-0dba230aa4e3" (UID: "2180d9e0-3678-4bdd-84aa-0dba230aa4e3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:05:43 crc kubenswrapper[4672]: I0217 16:05:43.059262 4672 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2180d9e0-3678-4bdd-84aa-0dba230aa4e3-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:05:43 crc kubenswrapper[4672]: I0217 16:05:43.059295 4672 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2180d9e0-3678-4bdd-84aa-0dba230aa4e3-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:05:43 crc kubenswrapper[4672]: I0217 16:05:43.059304 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnmdd\" (UniqueName: \"kubernetes.io/projected/2180d9e0-3678-4bdd-84aa-0dba230aa4e3-kube-api-access-dnmdd\") on node \"crc\" DevicePath \"\"" Feb 17 16:05:43 crc kubenswrapper[4672]: I0217 16:05:43.257084 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-lzwwl" Feb 17 16:05:43 crc kubenswrapper[4672]: I0217 16:05:43.259166 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w2tc4"] Feb 17 16:05:43 crc kubenswrapper[4672]: I0217 16:05:43.263117 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-lzwwl" Feb 17 16:05:43 crc kubenswrapper[4672]: I0217 16:05:43.321356 4672 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-vndpv" Feb 17 16:05:43 crc kubenswrapper[4672]: I0217 16:05:43.326747 4672 patch_prober.go:28] interesting pod/router-default-5444994796-vndpv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 16:05:43 crc kubenswrapper[4672]: [-]has-synced failed: reason withheld Feb 17 16:05:43 crc kubenswrapper[4672]: [+]process-running ok Feb 17 16:05:43 crc kubenswrapper[4672]: healthz check failed Feb 17 16:05:43 crc kubenswrapper[4672]: I0217 16:05:43.326792 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vndpv" podUID="1d98488b-d521-4207-a7b8-23b37cb1ef98" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 16:05:43 crc kubenswrapper[4672]: I0217 16:05:43.532351 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-b79hz" event={"ID":"2180d9e0-3678-4bdd-84aa-0dba230aa4e3","Type":"ContainerDied","Data":"790ce8f166ef5660b0f517ea8dadfa2f7e564e77bbd2cff180a2f9601c4b47c0"} Feb 17 16:05:43 crc kubenswrapper[4672]: I0217 16:05:43.532385 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="790ce8f166ef5660b0f517ea8dadfa2f7e564e77bbd2cff180a2f9601c4b47c0" Feb 17 16:05:43 crc kubenswrapper[4672]: I0217 16:05:43.532442 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-b79hz" Feb 17 16:05:43 crc kubenswrapper[4672]: I0217 16:05:43.545718 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2tc4" event={"ID":"f07446c8-4550-461c-a53d-c1d4bd056cfd","Type":"ContainerStarted","Data":"4a3e8eb279b65b0f3d3831dee5a7d978d2ccd8b6b76e8bb03d61b4e989a018a6"} Feb 17 16:05:43 crc kubenswrapper[4672]: I0217 16:05:43.568411 4672 generic.go:334] "Generic (PLEG): container finished" podID="c149acc1-b05c-4919-8deb-2b0d7d9d90b9" containerID="312f5170c1d4acfcd6e26df7ad6ce688a9b58ba317674ecdb5d7756e5c501367" exitCode=0 Feb 17 16:05:43 crc kubenswrapper[4672]: I0217 16:05:43.568882 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c149acc1-b05c-4919-8deb-2b0d7d9d90b9","Type":"ContainerDied","Data":"312f5170c1d4acfcd6e26df7ad6ce688a9b58ba317674ecdb5d7756e5c501367"} Feb 17 16:05:43 crc kubenswrapper[4672]: I0217 16:05:43.587261 4672 generic.go:334] "Generic (PLEG): container finished" podID="028c8d9b-9bd5-4cf5-9628-849e8b5aacaf" containerID="2d186d33b5c78309395ce14e7c14e371ed64262e50f5f2fd92d85dfc7c570340" exitCode=0 Feb 17 16:05:43 crc kubenswrapper[4672]: I0217 16:05:43.588808 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nd8vd" event={"ID":"028c8d9b-9bd5-4cf5-9628-849e8b5aacaf","Type":"ContainerDied","Data":"2d186d33b5c78309395ce14e7c14e371ed64262e50f5f2fd92d85dfc7c570340"} Feb 17 16:05:43 crc kubenswrapper[4672]: I0217 16:05:43.588838 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nd8vd" event={"ID":"028c8d9b-9bd5-4cf5-9628-849e8b5aacaf","Type":"ContainerStarted","Data":"30e115772276cf76f98239cbf3d5b791aecaa890069de1bf99a8704d568e0d46"} Feb 17 16:05:43 crc kubenswrapper[4672]: I0217 16:05:43.615974 4672 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-d9vk6" Feb 17 16:05:43 crc kubenswrapper[4672]: I0217 16:05:43.620466 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-d9vk6" Feb 17 16:05:43 crc kubenswrapper[4672]: I0217 16:05:43.631944 4672 patch_prober.go:28] interesting pod/console-f9d7485db-d9vk6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Feb 17 16:05:43 crc kubenswrapper[4672]: I0217 16:05:43.634011 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-d9vk6" podUID="750ef8f5-44ad-4016-8894-0b2a05430464" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Feb 17 16:05:44 crc kubenswrapper[4672]: I0217 16:05:44.081202 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-bk22j" Feb 17 16:05:44 crc kubenswrapper[4672]: I0217 16:05:44.243634 4672 patch_prober.go:28] interesting pod/downloads-7954f5f757-v9lrm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Feb 17 16:05:44 crc kubenswrapper[4672]: I0217 16:05:44.243977 4672 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-v9lrm" podUID="908f0c62-5b97-4c11-8b5d-6454f36295f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" Feb 17 16:05:44 crc kubenswrapper[4672]: I0217 16:05:44.243668 4672 patch_prober.go:28] interesting pod/downloads-7954f5f757-v9lrm 
container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Feb 17 16:05:44 crc kubenswrapper[4672]: I0217 16:05:44.244375 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-v9lrm" podUID="908f0c62-5b97-4c11-8b5d-6454f36295f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" Feb 17 16:05:44 crc kubenswrapper[4672]: I0217 16:05:44.317764 4672 patch_prober.go:28] interesting pod/router-default-5444994796-vndpv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 16:05:44 crc kubenswrapper[4672]: [-]has-synced failed: reason withheld Feb 17 16:05:44 crc kubenswrapper[4672]: [+]process-running ok Feb 17 16:05:44 crc kubenswrapper[4672]: healthz check failed Feb 17 16:05:44 crc kubenswrapper[4672]: I0217 16:05:44.317824 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vndpv" podUID="1d98488b-d521-4207-a7b8-23b37cb1ef98" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 16:05:44 crc kubenswrapper[4672]: I0217 16:05:44.598344 4672 generic.go:334] "Generic (PLEG): container finished" podID="f07446c8-4550-461c-a53d-c1d4bd056cfd" containerID="0ed979bfe4c406ced027f2ca892de09b714baac76fcb42ef29cb4f136d208feb" exitCode=0 Feb 17 16:05:44 crc kubenswrapper[4672]: I0217 16:05:44.598575 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2tc4" event={"ID":"f07446c8-4550-461c-a53d-c1d4bd056cfd","Type":"ContainerDied","Data":"0ed979bfe4c406ced027f2ca892de09b714baac76fcb42ef29cb4f136d208feb"} Feb 17 16:05:44 crc 
kubenswrapper[4672]: I0217 16:05:44.935086 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 17 16:05:44 crc kubenswrapper[4672]: E0217 16:05:44.935295 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2180d9e0-3678-4bdd-84aa-0dba230aa4e3" containerName="collect-profiles" Feb 17 16:05:44 crc kubenswrapper[4672]: I0217 16:05:44.935311 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="2180d9e0-3678-4bdd-84aa-0dba230aa4e3" containerName="collect-profiles" Feb 17 16:05:44 crc kubenswrapper[4672]: I0217 16:05:44.935405 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="2180d9e0-3678-4bdd-84aa-0dba230aa4e3" containerName="collect-profiles" Feb 17 16:05:44 crc kubenswrapper[4672]: I0217 16:05:44.935751 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 16:05:44 crc kubenswrapper[4672]: I0217 16:05:44.938538 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 16:05:44 crc kubenswrapper[4672]: I0217 16:05:44.941002 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 17 16:05:44 crc kubenswrapper[4672]: I0217 16:05:44.944838 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 17 16:05:44 crc kubenswrapper[4672]: I0217 16:05:44.948140 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 17 16:05:45 crc kubenswrapper[4672]: I0217 16:05:45.101892 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c149acc1-b05c-4919-8deb-2b0d7d9d90b9-kube-api-access\") pod \"c149acc1-b05c-4919-8deb-2b0d7d9d90b9\" (UID: \"c149acc1-b05c-4919-8deb-2b0d7d9d90b9\") " Feb 17 16:05:45 crc kubenswrapper[4672]: I0217 16:05:45.101946 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c149acc1-b05c-4919-8deb-2b0d7d9d90b9-kubelet-dir\") pod \"c149acc1-b05c-4919-8deb-2b0d7d9d90b9\" (UID: \"c149acc1-b05c-4919-8deb-2b0d7d9d90b9\") " Feb 17 16:05:45 crc kubenswrapper[4672]: I0217 16:05:45.102104 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c149acc1-b05c-4919-8deb-2b0d7d9d90b9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c149acc1-b05c-4919-8deb-2b0d7d9d90b9" (UID: "c149acc1-b05c-4919-8deb-2b0d7d9d90b9"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:05:45 crc kubenswrapper[4672]: I0217 16:05:45.102576 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbe1e310-fc45-4438-bc77-523bd8f5e598-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"cbe1e310-fc45-4438-bc77-523bd8f5e598\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 16:05:45 crc kubenswrapper[4672]: I0217 16:05:45.102613 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cbe1e310-fc45-4438-bc77-523bd8f5e598-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"cbe1e310-fc45-4438-bc77-523bd8f5e598\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 16:05:45 crc kubenswrapper[4672]: I0217 16:05:45.102693 4672 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c149acc1-b05c-4919-8deb-2b0d7d9d90b9-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 16:05:45 crc kubenswrapper[4672]: I0217 16:05:45.115452 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c149acc1-b05c-4919-8deb-2b0d7d9d90b9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c149acc1-b05c-4919-8deb-2b0d7d9d90b9" (UID: "c149acc1-b05c-4919-8deb-2b0d7d9d90b9"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:05:45 crc kubenswrapper[4672]: I0217 16:05:45.204053 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbe1e310-fc45-4438-bc77-523bd8f5e598-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"cbe1e310-fc45-4438-bc77-523bd8f5e598\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 16:05:45 crc kubenswrapper[4672]: I0217 16:05:45.204093 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cbe1e310-fc45-4438-bc77-523bd8f5e598-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"cbe1e310-fc45-4438-bc77-523bd8f5e598\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 16:05:45 crc kubenswrapper[4672]: I0217 16:05:45.204142 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c149acc1-b05c-4919-8deb-2b0d7d9d90b9-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 16:05:45 crc kubenswrapper[4672]: I0217 16:05:45.204185 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cbe1e310-fc45-4438-bc77-523bd8f5e598-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"cbe1e310-fc45-4438-bc77-523bd8f5e598\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 16:05:45 crc kubenswrapper[4672]: I0217 16:05:45.223908 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbe1e310-fc45-4438-bc77-523bd8f5e598-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"cbe1e310-fc45-4438-bc77-523bd8f5e598\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 16:05:45 crc kubenswrapper[4672]: I0217 16:05:45.268555 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 16:05:45 crc kubenswrapper[4672]: I0217 16:05:45.318996 4672 patch_prober.go:28] interesting pod/router-default-5444994796-vndpv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 16:05:45 crc kubenswrapper[4672]: [-]has-synced failed: reason withheld Feb 17 16:05:45 crc kubenswrapper[4672]: [+]process-running ok Feb 17 16:05:45 crc kubenswrapper[4672]: healthz check failed Feb 17 16:05:45 crc kubenswrapper[4672]: I0217 16:05:45.319055 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vndpv" podUID="1d98488b-d521-4207-a7b8-23b37cb1ef98" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 16:05:45 crc kubenswrapper[4672]: I0217 16:05:45.542959 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 17 16:05:45 crc kubenswrapper[4672]: I0217 16:05:45.629972 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c149acc1-b05c-4919-8deb-2b0d7d9d90b9","Type":"ContainerDied","Data":"44dca774d9611817e8b18c221cbedb14f2bb5e95d28bdb52211fb569ee1987f8"} Feb 17 16:05:45 crc kubenswrapper[4672]: I0217 16:05:45.630150 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44dca774d9611817e8b18c221cbedb14f2bb5e95d28bdb52211fb569ee1987f8" Feb 17 16:05:45 crc kubenswrapper[4672]: I0217 16:05:45.630069 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 16:05:46 crc kubenswrapper[4672]: I0217 16:05:46.137597 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-s2n6p" Feb 17 16:05:46 crc kubenswrapper[4672]: I0217 16:05:46.319322 4672 patch_prober.go:28] interesting pod/router-default-5444994796-vndpv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 16:05:46 crc kubenswrapper[4672]: [-]has-synced failed: reason withheld Feb 17 16:05:46 crc kubenswrapper[4672]: [+]process-running ok Feb 17 16:05:46 crc kubenswrapper[4672]: healthz check failed Feb 17 16:05:46 crc kubenswrapper[4672]: I0217 16:05:46.319372 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vndpv" podUID="1d98488b-d521-4207-a7b8-23b37cb1ef98" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 16:05:46 crc kubenswrapper[4672]: I0217 16:05:46.667758 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cbe1e310-fc45-4438-bc77-523bd8f5e598","Type":"ContainerStarted","Data":"a3f7b643026c28cf498ffe67e14203e56e46f72ebb9267f66ba8c47aca2d8e9d"} Feb 17 16:05:46 crc kubenswrapper[4672]: I0217 16:05:46.667800 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cbe1e310-fc45-4438-bc77-523bd8f5e598","Type":"ContainerStarted","Data":"bb9fa7e07fefd79383e61bff7f2b497e9417b23c6e21b033da6ff0be2dfddaeb"} Feb 17 16:05:46 crc kubenswrapper[4672]: I0217 16:05:46.681284 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.681266933 podStartE2EDuration="2.681266933s" 
podCreationTimestamp="2026-02-17 16:05:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:05:46.679787603 +0000 UTC m=+155.433876335" watchObservedRunningTime="2026-02-17 16:05:46.681266933 +0000 UTC m=+155.435355665" Feb 17 16:05:47 crc kubenswrapper[4672]: I0217 16:05:47.320922 4672 patch_prober.go:28] interesting pod/router-default-5444994796-vndpv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 16:05:47 crc kubenswrapper[4672]: [-]has-synced failed: reason withheld Feb 17 16:05:47 crc kubenswrapper[4672]: [+]process-running ok Feb 17 16:05:47 crc kubenswrapper[4672]: healthz check failed Feb 17 16:05:47 crc kubenswrapper[4672]: I0217 16:05:47.324601 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vndpv" podUID="1d98488b-d521-4207-a7b8-23b37cb1ef98" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 16:05:47 crc kubenswrapper[4672]: I0217 16:05:47.684215 4672 generic.go:334] "Generic (PLEG): container finished" podID="cbe1e310-fc45-4438-bc77-523bd8f5e598" containerID="a3f7b643026c28cf498ffe67e14203e56e46f72ebb9267f66ba8c47aca2d8e9d" exitCode=0 Feb 17 16:05:47 crc kubenswrapper[4672]: I0217 16:05:47.684259 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cbe1e310-fc45-4438-bc77-523bd8f5e598","Type":"ContainerDied","Data":"a3f7b643026c28cf498ffe67e14203e56e46f72ebb9267f66ba8c47aca2d8e9d"} Feb 17 16:05:48 crc kubenswrapper[4672]: I0217 16:05:48.316755 4672 patch_prober.go:28] interesting pod/router-default-5444994796-vndpv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 17 16:05:48 crc kubenswrapper[4672]: [-]has-synced failed: reason withheld Feb 17 16:05:48 crc kubenswrapper[4672]: [+]process-running ok Feb 17 16:05:48 crc kubenswrapper[4672]: healthz check failed Feb 17 16:05:48 crc kubenswrapper[4672]: I0217 16:05:48.316818 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vndpv" podUID="1d98488b-d521-4207-a7b8-23b37cb1ef98" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 16:05:49 crc kubenswrapper[4672]: I0217 16:05:49.318846 4672 patch_prober.go:28] interesting pod/router-default-5444994796-vndpv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 16:05:49 crc kubenswrapper[4672]: [-]has-synced failed: reason withheld Feb 17 16:05:49 crc kubenswrapper[4672]: [+]process-running ok Feb 17 16:05:49 crc kubenswrapper[4672]: healthz check failed Feb 17 16:05:49 crc kubenswrapper[4672]: I0217 16:05:49.319232 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vndpv" podUID="1d98488b-d521-4207-a7b8-23b37cb1ef98" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 16:05:50 crc kubenswrapper[4672]: I0217 16:05:50.319677 4672 patch_prober.go:28] interesting pod/router-default-5444994796-vndpv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 16:05:50 crc kubenswrapper[4672]: [-]has-synced failed: reason withheld Feb 17 16:05:50 crc kubenswrapper[4672]: [+]process-running ok Feb 17 16:05:50 crc kubenswrapper[4672]: healthz check failed Feb 17 16:05:50 crc kubenswrapper[4672]: I0217 16:05:50.319771 4672 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vndpv" podUID="1d98488b-d521-4207-a7b8-23b37cb1ef98" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 16:05:51 crc kubenswrapper[4672]: I0217 16:05:51.317461 4672 patch_prober.go:28] interesting pod/router-default-5444994796-vndpv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 16:05:51 crc kubenswrapper[4672]: [-]has-synced failed: reason withheld Feb 17 16:05:51 crc kubenswrapper[4672]: [+]process-running ok Feb 17 16:05:51 crc kubenswrapper[4672]: healthz check failed Feb 17 16:05:51 crc kubenswrapper[4672]: I0217 16:05:51.317523 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vndpv" podUID="1d98488b-d521-4207-a7b8-23b37cb1ef98" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 16:05:52 crc kubenswrapper[4672]: I0217 16:05:52.317152 4672 patch_prober.go:28] interesting pod/router-default-5444994796-vndpv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 16:05:52 crc kubenswrapper[4672]: [-]has-synced failed: reason withheld Feb 17 16:05:52 crc kubenswrapper[4672]: [+]process-running ok Feb 17 16:05:52 crc kubenswrapper[4672]: healthz check failed Feb 17 16:05:52 crc kubenswrapper[4672]: I0217 16:05:52.317226 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vndpv" podUID="1d98488b-d521-4207-a7b8-23b37cb1ef98" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 16:05:53 crc kubenswrapper[4672]: I0217 16:05:53.317253 4672 patch_prober.go:28] 
interesting pod/router-default-5444994796-vndpv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 16:05:53 crc kubenswrapper[4672]: [-]has-synced failed: reason withheld Feb 17 16:05:53 crc kubenswrapper[4672]: [+]process-running ok Feb 17 16:05:53 crc kubenswrapper[4672]: healthz check failed Feb 17 16:05:53 crc kubenswrapper[4672]: I0217 16:05:53.317597 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vndpv" podUID="1d98488b-d521-4207-a7b8-23b37cb1ef98" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 16:05:53 crc kubenswrapper[4672]: I0217 16:05:53.602594 4672 patch_prober.go:28] interesting pod/console-f9d7485db-d9vk6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Feb 17 16:05:53 crc kubenswrapper[4672]: I0217 16:05:53.604655 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-d9vk6" podUID="750ef8f5-44ad-4016-8894-0b2a05430464" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Feb 17 16:05:54 crc kubenswrapper[4672]: I0217 16:05:54.244057 4672 patch_prober.go:28] interesting pod/downloads-7954f5f757-v9lrm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Feb 17 16:05:54 crc kubenswrapper[4672]: I0217 16:05:54.244111 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-v9lrm" podUID="908f0c62-5b97-4c11-8b5d-6454f36295f6" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" Feb 17 16:05:54 crc kubenswrapper[4672]: I0217 16:05:54.244448 4672 patch_prober.go:28] interesting pod/downloads-7954f5f757-v9lrm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Feb 17 16:05:54 crc kubenswrapper[4672]: I0217 16:05:54.244529 4672 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-v9lrm" podUID="908f0c62-5b97-4c11-8b5d-6454f36295f6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" Feb 17 16:05:54 crc kubenswrapper[4672]: I0217 16:05:54.318564 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-vndpv" Feb 17 16:05:54 crc kubenswrapper[4672]: I0217 16:05:54.321632 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-vndpv" Feb 17 16:05:55 crc kubenswrapper[4672]: I0217 16:05:55.630746 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 16:05:55 crc kubenswrapper[4672]: I0217 16:05:55.759767 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cbe1e310-fc45-4438-bc77-523bd8f5e598","Type":"ContainerDied","Data":"bb9fa7e07fefd79383e61bff7f2b497e9417b23c6e21b033da6ff0be2dfddaeb"} Feb 17 16:05:55 crc kubenswrapper[4672]: I0217 16:05:55.759788 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 16:05:55 crc kubenswrapper[4672]: I0217 16:05:55.759802 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb9fa7e07fefd79383e61bff7f2b497e9417b23c6e21b033da6ff0be2dfddaeb" Feb 17 16:05:55 crc kubenswrapper[4672]: I0217 16:05:55.803646 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbe1e310-fc45-4438-bc77-523bd8f5e598-kube-api-access\") pod \"cbe1e310-fc45-4438-bc77-523bd8f5e598\" (UID: \"cbe1e310-fc45-4438-bc77-523bd8f5e598\") " Feb 17 16:05:55 crc kubenswrapper[4672]: I0217 16:05:55.803804 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cbe1e310-fc45-4438-bc77-523bd8f5e598-kubelet-dir\") pod \"cbe1e310-fc45-4438-bc77-523bd8f5e598\" (UID: \"cbe1e310-fc45-4438-bc77-523bd8f5e598\") " Feb 17 16:05:55 crc kubenswrapper[4672]: I0217 16:05:55.803994 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbe1e310-fc45-4438-bc77-523bd8f5e598-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cbe1e310-fc45-4438-bc77-523bd8f5e598" (UID: "cbe1e310-fc45-4438-bc77-523bd8f5e598"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:05:55 crc kubenswrapper[4672]: I0217 16:05:55.804351 4672 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cbe1e310-fc45-4438-bc77-523bd8f5e598-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 16:05:55 crc kubenswrapper[4672]: I0217 16:05:55.811570 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbe1e310-fc45-4438-bc77-523bd8f5e598-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cbe1e310-fc45-4438-bc77-523bd8f5e598" (UID: "cbe1e310-fc45-4438-bc77-523bd8f5e598"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:05:55 crc kubenswrapper[4672]: I0217 16:05:55.905141 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbe1e310-fc45-4438-bc77-523bd8f5e598-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 16:05:56 crc kubenswrapper[4672]: I0217 16:05:56.217773 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7p722"] Feb 17 16:05:56 crc kubenswrapper[4672]: I0217 16:05:56.218081 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-7p722" podUID="a4206b74-7012-47af-9344-253aa7453e86" containerName="controller-manager" containerID="cri-o://38e3bd524fa50a0739db41a24c64da2e08ae1208ee7aa8e3a2268a2e8c4dd26f" gracePeriod=30 Feb 17 16:05:56 crc kubenswrapper[4672]: I0217 16:05:56.235789 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwl87"] Feb 17 16:05:56 crc kubenswrapper[4672]: I0217 16:05:56.236082 4672 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwl87" podUID="b5948d11-a6da-4f21-a6e8-413a28791775" containerName="route-controller-manager" containerID="cri-o://2e7ee527cc36a23dddfb24c985f54cd8f62474e27c49b6359dc06deaf0c13a85" gracePeriod=30 Feb 17 16:05:57 crc kubenswrapper[4672]: I0217 16:05:57.566232 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:05:57 crc kubenswrapper[4672]: I0217 16:05:57.566651 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:05:59 crc kubenswrapper[4672]: I0217 16:05:59.665159 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/712be02c-2ccc-4989-aecb-653745bacb0d-metrics-certs\") pod \"network-metrics-daemon-hqdz9\" (UID: \"712be02c-2ccc-4989-aecb-653745bacb0d\") " pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:05:59 crc kubenswrapper[4672]: I0217 16:05:59.683546 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/712be02c-2ccc-4989-aecb-653745bacb0d-metrics-certs\") pod \"network-metrics-daemon-hqdz9\" (UID: \"712be02c-2ccc-4989-aecb-653745bacb0d\") " pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:05:59 crc kubenswrapper[4672]: I0217 16:05:59.871780 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hqdz9" Feb 17 16:06:01 crc kubenswrapper[4672]: I0217 16:06:01.728426 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" Feb 17 16:06:03 crc kubenswrapper[4672]: I0217 16:06:03.611228 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-d9vk6" Feb 17 16:06:03 crc kubenswrapper[4672]: I0217 16:06:03.615664 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-d9vk6" Feb 17 16:06:03 crc kubenswrapper[4672]: I0217 16:06:03.926996 4672 generic.go:334] "Generic (PLEG): container finished" podID="a4206b74-7012-47af-9344-253aa7453e86" containerID="38e3bd524fa50a0739db41a24c64da2e08ae1208ee7aa8e3a2268a2e8c4dd26f" exitCode=0 Feb 17 16:06:03 crc kubenswrapper[4672]: I0217 16:06:03.927121 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-7p722" event={"ID":"a4206b74-7012-47af-9344-253aa7453e86","Type":"ContainerDied","Data":"38e3bd524fa50a0739db41a24c64da2e08ae1208ee7aa8e3a2268a2e8c4dd26f"} Feb 17 16:06:03 crc kubenswrapper[4672]: I0217 16:06:03.929181 4672 generic.go:334] "Generic (PLEG): container finished" podID="b5948d11-a6da-4f21-a6e8-413a28791775" containerID="2e7ee527cc36a23dddfb24c985f54cd8f62474e27c49b6359dc06deaf0c13a85" exitCode=0 Feb 17 16:06:03 crc kubenswrapper[4672]: I0217 16:06:03.929234 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwl87" event={"ID":"b5948d11-a6da-4f21-a6e8-413a28791775","Type":"ContainerDied","Data":"2e7ee527cc36a23dddfb24c985f54cd8f62474e27c49b6359dc06deaf0c13a85"} Feb 17 16:06:04 crc kubenswrapper[4672]: I0217 16:06:04.248685 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-console/downloads-7954f5f757-v9lrm" Feb 17 16:06:04 crc kubenswrapper[4672]: I0217 16:06:04.442363 4672 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-7p722 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 16:06:04 crc kubenswrapper[4672]: I0217 16:06:04.442472 4672 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-7p722" podUID="a4206b74-7012-47af-9344-253aa7453e86" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 17 16:06:04 crc kubenswrapper[4672]: I0217 16:06:04.530983 4672 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-vwl87 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 16:06:04 crc kubenswrapper[4672]: I0217 16:06:04.531066 4672 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwl87" podUID="b5948d11-a6da-4f21-a6e8-413a28791775" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 17 16:06:09 crc kubenswrapper[4672]: E0217 16:06:09.120291 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest 
list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 17 16:06:09 crc kubenswrapper[4672]: E0217 16:06:09.120975 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7cvrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-skvcq_openshift-marketplace(a14c1588-0007-41ea-b334-f2bc0b2a5587): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context 
canceled" logger="UnhandledError" Feb 17 16:06:09 crc kubenswrapper[4672]: E0217 16:06:09.122121 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-skvcq" podUID="a14c1588-0007-41ea-b334-f2bc0b2a5587" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.154833 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwl87" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.171546 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-7p722" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.219709 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f86b664bc-vk8z2"] Feb 17 16:06:09 crc kubenswrapper[4672]: E0217 16:06:09.219967 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbe1e310-fc45-4438-bc77-523bd8f5e598" containerName="pruner" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.219982 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbe1e310-fc45-4438-bc77-523bd8f5e598" containerName="pruner" Feb 17 16:06:09 crc kubenswrapper[4672]: E0217 16:06:09.219993 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5948d11-a6da-4f21-a6e8-413a28791775" containerName="route-controller-manager" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.220000 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5948d11-a6da-4f21-a6e8-413a28791775" containerName="route-controller-manager" Feb 17 16:06:09 crc kubenswrapper[4672]: E0217 16:06:09.220016 4672 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="a4206b74-7012-47af-9344-253aa7453e86" containerName="controller-manager" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.220026 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4206b74-7012-47af-9344-253aa7453e86" containerName="controller-manager" Feb 17 16:06:09 crc kubenswrapper[4672]: E0217 16:06:09.220040 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c149acc1-b05c-4919-8deb-2b0d7d9d90b9" containerName="pruner" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.220048 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="c149acc1-b05c-4919-8deb-2b0d7d9d90b9" containerName="pruner" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.220193 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="c149acc1-b05c-4919-8deb-2b0d7d9d90b9" containerName="pruner" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.220212 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5948d11-a6da-4f21-a6e8-413a28791775" containerName="route-controller-manager" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.220236 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbe1e310-fc45-4438-bc77-523bd8f5e598" containerName="pruner" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.220245 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4206b74-7012-47af-9344-253aa7453e86" containerName="controller-manager" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.220877 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-vk8z2" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.236284 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f86b664bc-vk8z2"] Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.311372 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btntj\" (UniqueName: \"kubernetes.io/projected/b5948d11-a6da-4f21-a6e8-413a28791775-kube-api-access-btntj\") pod \"b5948d11-a6da-4f21-a6e8-413a28791775\" (UID: \"b5948d11-a6da-4f21-a6e8-413a28791775\") " Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.311447 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktcrs\" (UniqueName: \"kubernetes.io/projected/a4206b74-7012-47af-9344-253aa7453e86-kube-api-access-ktcrs\") pod \"a4206b74-7012-47af-9344-253aa7453e86\" (UID: \"a4206b74-7012-47af-9344-253aa7453e86\") " Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.311492 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5948d11-a6da-4f21-a6e8-413a28791775-serving-cert\") pod \"b5948d11-a6da-4f21-a6e8-413a28791775\" (UID: \"b5948d11-a6da-4f21-a6e8-413a28791775\") " Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.311543 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5948d11-a6da-4f21-a6e8-413a28791775-config\") pod \"b5948d11-a6da-4f21-a6e8-413a28791775\" (UID: \"b5948d11-a6da-4f21-a6e8-413a28791775\") " Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.311619 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4206b74-7012-47af-9344-253aa7453e86-serving-cert\") pod 
\"a4206b74-7012-47af-9344-253aa7453e86\" (UID: \"a4206b74-7012-47af-9344-253aa7453e86\") " Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.311646 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4206b74-7012-47af-9344-253aa7453e86-config\") pod \"a4206b74-7012-47af-9344-253aa7453e86\" (UID: \"a4206b74-7012-47af-9344-253aa7453e86\") " Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.311676 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4206b74-7012-47af-9344-253aa7453e86-client-ca\") pod \"a4206b74-7012-47af-9344-253aa7453e86\" (UID: \"a4206b74-7012-47af-9344-253aa7453e86\") " Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.311700 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4206b74-7012-47af-9344-253aa7453e86-proxy-ca-bundles\") pod \"a4206b74-7012-47af-9344-253aa7453e86\" (UID: \"a4206b74-7012-47af-9344-253aa7453e86\") " Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.311729 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5948d11-a6da-4f21-a6e8-413a28791775-client-ca\") pod \"b5948d11-a6da-4f21-a6e8-413a28791775\" (UID: \"b5948d11-a6da-4f21-a6e8-413a28791775\") " Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.312931 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5948d11-a6da-4f21-a6e8-413a28791775-client-ca" (OuterVolumeSpecName: "client-ca") pod "b5948d11-a6da-4f21-a6e8-413a28791775" (UID: "b5948d11-a6da-4f21-a6e8-413a28791775"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.313195 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5948d11-a6da-4f21-a6e8-413a28791775-config" (OuterVolumeSpecName: "config") pod "b5948d11-a6da-4f21-a6e8-413a28791775" (UID: "b5948d11-a6da-4f21-a6e8-413a28791775"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.313357 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4206b74-7012-47af-9344-253aa7453e86-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a4206b74-7012-47af-9344-253aa7453e86" (UID: "a4206b74-7012-47af-9344-253aa7453e86"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.313482 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4206b74-7012-47af-9344-253aa7453e86-client-ca" (OuterVolumeSpecName: "client-ca") pod "a4206b74-7012-47af-9344-253aa7453e86" (UID: "a4206b74-7012-47af-9344-253aa7453e86"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.313640 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4206b74-7012-47af-9344-253aa7453e86-config" (OuterVolumeSpecName: "config") pod "a4206b74-7012-47af-9344-253aa7453e86" (UID: "a4206b74-7012-47af-9344-253aa7453e86"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.322762 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5948d11-a6da-4f21-a6e8-413a28791775-kube-api-access-btntj" (OuterVolumeSpecName: "kube-api-access-btntj") pod "b5948d11-a6da-4f21-a6e8-413a28791775" (UID: "b5948d11-a6da-4f21-a6e8-413a28791775"). InnerVolumeSpecName "kube-api-access-btntj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.322848 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4206b74-7012-47af-9344-253aa7453e86-kube-api-access-ktcrs" (OuterVolumeSpecName: "kube-api-access-ktcrs") pod "a4206b74-7012-47af-9344-253aa7453e86" (UID: "a4206b74-7012-47af-9344-253aa7453e86"). InnerVolumeSpecName "kube-api-access-ktcrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.324199 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5948d11-a6da-4f21-a6e8-413a28791775-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b5948d11-a6da-4f21-a6e8-413a28791775" (UID: "b5948d11-a6da-4f21-a6e8-413a28791775"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.324562 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4206b74-7012-47af-9344-253aa7453e86-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a4206b74-7012-47af-9344-253aa7453e86" (UID: "a4206b74-7012-47af-9344-253aa7453e86"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.415714 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9265ae5c-7519-431b-8cb4-585305fab03d-serving-cert\") pod \"route-controller-manager-5f86b664bc-vk8z2\" (UID: \"9265ae5c-7519-431b-8cb4-585305fab03d\") " pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-vk8z2" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.415788 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9265ae5c-7519-431b-8cb4-585305fab03d-client-ca\") pod \"route-controller-manager-5f86b664bc-vk8z2\" (UID: \"9265ae5c-7519-431b-8cb4-585305fab03d\") " pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-vk8z2" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.415815 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-999vl\" (UniqueName: \"kubernetes.io/projected/9265ae5c-7519-431b-8cb4-585305fab03d-kube-api-access-999vl\") pod \"route-controller-manager-5f86b664bc-vk8z2\" (UID: \"9265ae5c-7519-431b-8cb4-585305fab03d\") " pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-vk8z2" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.415837 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9265ae5c-7519-431b-8cb4-585305fab03d-config\") pod \"route-controller-manager-5f86b664bc-vk8z2\" (UID: \"9265ae5c-7519-431b-8cb4-585305fab03d\") " pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-vk8z2" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.415912 4672 reconciler_common.go:293] "Volume detached for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5948d11-a6da-4f21-a6e8-413a28791775-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.416007 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btntj\" (UniqueName: \"kubernetes.io/projected/b5948d11-a6da-4f21-a6e8-413a28791775-kube-api-access-btntj\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.416042 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktcrs\" (UniqueName: \"kubernetes.io/projected/a4206b74-7012-47af-9344-253aa7453e86-kube-api-access-ktcrs\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.416053 4672 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5948d11-a6da-4f21-a6e8-413a28791775-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.416063 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5948d11-a6da-4f21-a6e8-413a28791775-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.416090 4672 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4206b74-7012-47af-9344-253aa7453e86-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.416099 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4206b74-7012-47af-9344-253aa7453e86-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.416107 4672 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4206b74-7012-47af-9344-253aa7453e86-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:09 
crc kubenswrapper[4672]: I0217 16:06:09.416114 4672 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4206b74-7012-47af-9344-253aa7453e86-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.517253 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9265ae5c-7519-431b-8cb4-585305fab03d-serving-cert\") pod \"route-controller-manager-5f86b664bc-vk8z2\" (UID: \"9265ae5c-7519-431b-8cb4-585305fab03d\") " pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-vk8z2" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.517333 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9265ae5c-7519-431b-8cb4-585305fab03d-client-ca\") pod \"route-controller-manager-5f86b664bc-vk8z2\" (UID: \"9265ae5c-7519-431b-8cb4-585305fab03d\") " pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-vk8z2" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.517367 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-999vl\" (UniqueName: \"kubernetes.io/projected/9265ae5c-7519-431b-8cb4-585305fab03d-kube-api-access-999vl\") pod \"route-controller-manager-5f86b664bc-vk8z2\" (UID: \"9265ae5c-7519-431b-8cb4-585305fab03d\") " pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-vk8z2" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.517390 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9265ae5c-7519-431b-8cb4-585305fab03d-config\") pod \"route-controller-manager-5f86b664bc-vk8z2\" (UID: \"9265ae5c-7519-431b-8cb4-585305fab03d\") " pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-vk8z2" 
Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.518538 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9265ae5c-7519-431b-8cb4-585305fab03d-client-ca\") pod \"route-controller-manager-5f86b664bc-vk8z2\" (UID: \"9265ae5c-7519-431b-8cb4-585305fab03d\") " pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-vk8z2" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.518616 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9265ae5c-7519-431b-8cb4-585305fab03d-config\") pod \"route-controller-manager-5f86b664bc-vk8z2\" (UID: \"9265ae5c-7519-431b-8cb4-585305fab03d\") " pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-vk8z2" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.522120 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9265ae5c-7519-431b-8cb4-585305fab03d-serving-cert\") pod \"route-controller-manager-5f86b664bc-vk8z2\" (UID: \"9265ae5c-7519-431b-8cb4-585305fab03d\") " pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-vk8z2" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.535338 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-999vl\" (UniqueName: \"kubernetes.io/projected/9265ae5c-7519-431b-8cb4-585305fab03d-kube-api-access-999vl\") pod \"route-controller-manager-5f86b664bc-vk8z2\" (UID: \"9265ae5c-7519-431b-8cb4-585305fab03d\") " pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-vk8z2" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.547673 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-vk8z2" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.961377 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-7p722" event={"ID":"a4206b74-7012-47af-9344-253aa7453e86","Type":"ContainerDied","Data":"e6b9445b10a044e906b01f8747508b7938737b5ad3a644ecf26493bce4898974"} Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.961686 4672 scope.go:117] "RemoveContainer" containerID="38e3bd524fa50a0739db41a24c64da2e08ae1208ee7aa8e3a2268a2e8c4dd26f" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.961784 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-7p722" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.968548 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwl87" event={"ID":"b5948d11-a6da-4f21-a6e8-413a28791775","Type":"ContainerDied","Data":"14f171c17ff095150410f07896dbbab02401ba4f56a61df6a84e1d912b0c519f"} Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.968566 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwl87" Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.997131 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7p722"] Feb 17 16:06:09 crc kubenswrapper[4672]: I0217 16:06:09.999528 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7p722"] Feb 17 16:06:10 crc kubenswrapper[4672]: I0217 16:06:10.006418 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwl87"] Feb 17 16:06:10 crc kubenswrapper[4672]: I0217 16:06:10.009136 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwl87"] Feb 17 16:06:11 crc kubenswrapper[4672]: I0217 16:06:11.222808 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-77c7fb56bb-n856m"] Feb 17 16:06:11 crc kubenswrapper[4672]: I0217 16:06:11.224091 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77c7fb56bb-n856m" Feb 17 16:06:11 crc kubenswrapper[4672]: I0217 16:06:11.225892 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 16:06:11 crc kubenswrapper[4672]: I0217 16:06:11.226485 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77c7fb56bb-n856m"] Feb 17 16:06:11 crc kubenswrapper[4672]: I0217 16:06:11.226750 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 16:06:11 crc kubenswrapper[4672]: I0217 16:06:11.227007 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 16:06:11 crc kubenswrapper[4672]: I0217 16:06:11.231358 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 16:06:11 crc kubenswrapper[4672]: I0217 16:06:11.232256 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 16:06:11 crc kubenswrapper[4672]: I0217 16:06:11.232574 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 16:06:11 crc kubenswrapper[4672]: I0217 16:06:11.234319 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 16:06:11 crc kubenswrapper[4672]: I0217 16:06:11.240921 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0debdd80-b109-4450-b08f-416bf65f3afe-serving-cert\") pod \"controller-manager-77c7fb56bb-n856m\" (UID: \"0debdd80-b109-4450-b08f-416bf65f3afe\") " 
pod="openshift-controller-manager/controller-manager-77c7fb56bb-n856m" Feb 17 16:06:11 crc kubenswrapper[4672]: I0217 16:06:11.241007 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0debdd80-b109-4450-b08f-416bf65f3afe-proxy-ca-bundles\") pod \"controller-manager-77c7fb56bb-n856m\" (UID: \"0debdd80-b109-4450-b08f-416bf65f3afe\") " pod="openshift-controller-manager/controller-manager-77c7fb56bb-n856m" Feb 17 16:06:11 crc kubenswrapper[4672]: I0217 16:06:11.241072 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0debdd80-b109-4450-b08f-416bf65f3afe-config\") pod \"controller-manager-77c7fb56bb-n856m\" (UID: \"0debdd80-b109-4450-b08f-416bf65f3afe\") " pod="openshift-controller-manager/controller-manager-77c7fb56bb-n856m" Feb 17 16:06:11 crc kubenswrapper[4672]: I0217 16:06:11.241115 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0debdd80-b109-4450-b08f-416bf65f3afe-client-ca\") pod \"controller-manager-77c7fb56bb-n856m\" (UID: \"0debdd80-b109-4450-b08f-416bf65f3afe\") " pod="openshift-controller-manager/controller-manager-77c7fb56bb-n856m" Feb 17 16:06:11 crc kubenswrapper[4672]: I0217 16:06:11.241147 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc7pw\" (UniqueName: \"kubernetes.io/projected/0debdd80-b109-4450-b08f-416bf65f3afe-kube-api-access-qc7pw\") pod \"controller-manager-77c7fb56bb-n856m\" (UID: \"0debdd80-b109-4450-b08f-416bf65f3afe\") " pod="openshift-controller-manager/controller-manager-77c7fb56bb-n856m" Feb 17 16:06:11 crc kubenswrapper[4672]: I0217 16:06:11.343134 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0debdd80-b109-4450-b08f-416bf65f3afe-config\") pod \"controller-manager-77c7fb56bb-n856m\" (UID: \"0debdd80-b109-4450-b08f-416bf65f3afe\") " pod="openshift-controller-manager/controller-manager-77c7fb56bb-n856m" Feb 17 16:06:11 crc kubenswrapper[4672]: I0217 16:06:11.343484 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc7pw\" (UniqueName: \"kubernetes.io/projected/0debdd80-b109-4450-b08f-416bf65f3afe-kube-api-access-qc7pw\") pod \"controller-manager-77c7fb56bb-n856m\" (UID: \"0debdd80-b109-4450-b08f-416bf65f3afe\") " pod="openshift-controller-manager/controller-manager-77c7fb56bb-n856m" Feb 17 16:06:11 crc kubenswrapper[4672]: I0217 16:06:11.343544 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0debdd80-b109-4450-b08f-416bf65f3afe-client-ca\") pod \"controller-manager-77c7fb56bb-n856m\" (UID: \"0debdd80-b109-4450-b08f-416bf65f3afe\") " pod="openshift-controller-manager/controller-manager-77c7fb56bb-n856m" Feb 17 16:06:11 crc kubenswrapper[4672]: I0217 16:06:11.343667 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0debdd80-b109-4450-b08f-416bf65f3afe-serving-cert\") pod \"controller-manager-77c7fb56bb-n856m\" (UID: \"0debdd80-b109-4450-b08f-416bf65f3afe\") " pod="openshift-controller-manager/controller-manager-77c7fb56bb-n856m" Feb 17 16:06:11 crc kubenswrapper[4672]: I0217 16:06:11.343717 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0debdd80-b109-4450-b08f-416bf65f3afe-proxy-ca-bundles\") pod \"controller-manager-77c7fb56bb-n856m\" (UID: \"0debdd80-b109-4450-b08f-416bf65f3afe\") " pod="openshift-controller-manager/controller-manager-77c7fb56bb-n856m" Feb 17 16:06:11 crc kubenswrapper[4672]: I0217 16:06:11.344397 4672 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0debdd80-b109-4450-b08f-416bf65f3afe-client-ca\") pod \"controller-manager-77c7fb56bb-n856m\" (UID: \"0debdd80-b109-4450-b08f-416bf65f3afe\") " pod="openshift-controller-manager/controller-manager-77c7fb56bb-n856m" Feb 17 16:06:11 crc kubenswrapper[4672]: I0217 16:06:11.345383 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0debdd80-b109-4450-b08f-416bf65f3afe-proxy-ca-bundles\") pod \"controller-manager-77c7fb56bb-n856m\" (UID: \"0debdd80-b109-4450-b08f-416bf65f3afe\") " pod="openshift-controller-manager/controller-manager-77c7fb56bb-n856m" Feb 17 16:06:11 crc kubenswrapper[4672]: I0217 16:06:11.348989 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0debdd80-b109-4450-b08f-416bf65f3afe-serving-cert\") pod \"controller-manager-77c7fb56bb-n856m\" (UID: \"0debdd80-b109-4450-b08f-416bf65f3afe\") " pod="openshift-controller-manager/controller-manager-77c7fb56bb-n856m" Feb 17 16:06:11 crc kubenswrapper[4672]: I0217 16:06:11.349304 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0debdd80-b109-4450-b08f-416bf65f3afe-config\") pod \"controller-manager-77c7fb56bb-n856m\" (UID: \"0debdd80-b109-4450-b08f-416bf65f3afe\") " pod="openshift-controller-manager/controller-manager-77c7fb56bb-n856m" Feb 17 16:06:11 crc kubenswrapper[4672]: I0217 16:06:11.368765 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc7pw\" (UniqueName: \"kubernetes.io/projected/0debdd80-b109-4450-b08f-416bf65f3afe-kube-api-access-qc7pw\") pod \"controller-manager-77c7fb56bb-n856m\" (UID: \"0debdd80-b109-4450-b08f-416bf65f3afe\") " pod="openshift-controller-manager/controller-manager-77c7fb56bb-n856m" Feb 17 
16:06:11 crc kubenswrapper[4672]: I0217 16:06:11.556861 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77c7fb56bb-n856m" Feb 17 16:06:11 crc kubenswrapper[4672]: I0217 16:06:11.951278 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4206b74-7012-47af-9344-253aa7453e86" path="/var/lib/kubelet/pods/a4206b74-7012-47af-9344-253aa7453e86/volumes" Feb 17 16:06:11 crc kubenswrapper[4672]: I0217 16:06:11.952136 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5948d11-a6da-4f21-a6e8-413a28791775" path="/var/lib/kubelet/pods/b5948d11-a6da-4f21-a6e8-413a28791775/volumes" Feb 17 16:06:12 crc kubenswrapper[4672]: E0217 16:06:12.381126 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-skvcq" podUID="a14c1588-0007-41ea-b334-f2bc0b2a5587" Feb 17 16:06:12 crc kubenswrapper[4672]: I0217 16:06:12.531337 4672 scope.go:117] "RemoveContainer" containerID="2e7ee527cc36a23dddfb24c985f54cd8f62474e27c49b6359dc06deaf0c13a85" Feb 17 16:06:13 crc kubenswrapper[4672]: I0217 16:06:13.159550 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hqdz9"] Feb 17 16:06:13 crc kubenswrapper[4672]: I0217 16:06:13.482276 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f86b664bc-vk8z2"] Feb 17 16:06:13 crc kubenswrapper[4672]: I0217 16:06:13.527738 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f7zb5" Feb 17 16:06:13 crc kubenswrapper[4672]: I0217 16:06:13.566748 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-77c7fb56bb-n856m"] Feb 17 16:06:14 crc kubenswrapper[4672]: I0217 16:06:14.000936 4672 generic.go:334] "Generic (PLEG): container finished" podID="fd92fc97-4e60-481b-8d9f-91642c614e48" containerID="ba68f0e8d7c2df7d9b63b1ea1f28812836887fec794051d3d309c25e1aa3177a" exitCode=0 Feb 17 16:06:14 crc kubenswrapper[4672]: I0217 16:06:14.001010 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4qrd" event={"ID":"fd92fc97-4e60-481b-8d9f-91642c614e48","Type":"ContainerDied","Data":"ba68f0e8d7c2df7d9b63b1ea1f28812836887fec794051d3d309c25e1aa3177a"} Feb 17 16:06:14 crc kubenswrapper[4672]: I0217 16:06:14.003199 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77c7fb56bb-n856m" event={"ID":"0debdd80-b109-4450-b08f-416bf65f3afe","Type":"ContainerStarted","Data":"050ffb8a48c48eb74b5d520c7de6bab62ecd0df8224566cb9bb37a9ec5303897"} Feb 17 16:06:14 crc kubenswrapper[4672]: I0217 16:06:14.012307 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nd8vd" event={"ID":"028c8d9b-9bd5-4cf5-9628-849e8b5aacaf","Type":"ContainerStarted","Data":"abc34a9b0dabc007afa1aef5113eca567847d8b7316bcfb7cc5bd1134a2e0951"} Feb 17 16:06:14 crc kubenswrapper[4672]: I0217 16:06:14.014121 4672 generic.go:334] "Generic (PLEG): container finished" podID="db7fc0eb-2899-4c37-bf2e-30d02cbffb2c" containerID="f027eff199f65139fc942a8436409f8687775cc5c5e1fe14f9e3d4aac9ca733d" exitCode=0 Feb 17 16:06:14 crc kubenswrapper[4672]: I0217 16:06:14.014201 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxnc7" event={"ID":"db7fc0eb-2899-4c37-bf2e-30d02cbffb2c","Type":"ContainerDied","Data":"f027eff199f65139fc942a8436409f8687775cc5c5e1fe14f9e3d4aac9ca733d"} Feb 17 16:06:14 crc kubenswrapper[4672]: I0217 16:06:14.020964 4672 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-multus/network-metrics-daemon-hqdz9" event={"ID":"712be02c-2ccc-4989-aecb-653745bacb0d","Type":"ContainerStarted","Data":"26f1c8dd62059fff9cc7fdd47588385cecb9964ed7700dc5cb59d23412e22cb4"} Feb 17 16:06:14 crc kubenswrapper[4672]: I0217 16:06:14.023368 4672 generic.go:334] "Generic (PLEG): container finished" podID="505bfe60-cd7c-4bd6-981a-c14076ef5387" containerID="1333dd37a3bda9e4e70598bbb65be2a6fe1acfb184436daf55c9bc0fbb437b96" exitCode=0 Feb 17 16:06:14 crc kubenswrapper[4672]: I0217 16:06:14.023629 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l98cc" event={"ID":"505bfe60-cd7c-4bd6-981a-c14076ef5387","Type":"ContainerDied","Data":"1333dd37a3bda9e4e70598bbb65be2a6fe1acfb184436daf55c9bc0fbb437b96"} Feb 17 16:06:14 crc kubenswrapper[4672]: I0217 16:06:14.025671 4672 generic.go:334] "Generic (PLEG): container finished" podID="2217a413-541b-46bc-9563-b382fb9f090d" containerID="04635c2703cb08cadff9f8cfb718faca87011b5d8dbe7a1e1aa3adbe3d99235b" exitCode=0 Feb 17 16:06:14 crc kubenswrapper[4672]: I0217 16:06:14.025744 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2mj5d" event={"ID":"2217a413-541b-46bc-9563-b382fb9f090d","Type":"ContainerDied","Data":"04635c2703cb08cadff9f8cfb718faca87011b5d8dbe7a1e1aa3adbe3d99235b"} Feb 17 16:06:14 crc kubenswrapper[4672]: I0217 16:06:14.028944 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2tc4" event={"ID":"f07446c8-4550-461c-a53d-c1d4bd056cfd","Type":"ContainerStarted","Data":"787ee26f626e77d7d4fe8cee28a7804b45b7b92cf79e95e9a7e6338b11cbcbd0"} Feb 17 16:06:14 crc kubenswrapper[4672]: I0217 16:06:14.038265 4672 generic.go:334] "Generic (PLEG): container finished" podID="708084b0-bae5-4cfc-ab45-cc5ca619f849" containerID="657c071c45776eef125e1f6068661c8962dc83e28dc6d1b9c7258eeae8d864fc" exitCode=0 Feb 17 16:06:14 crc kubenswrapper[4672]: I0217 
16:06:14.038336 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wvksq" event={"ID":"708084b0-bae5-4cfc-ab45-cc5ca619f849","Type":"ContainerDied","Data":"657c071c45776eef125e1f6068661c8962dc83e28dc6d1b9c7258eeae8d864fc"} Feb 17 16:06:14 crc kubenswrapper[4672]: I0217 16:06:14.044866 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-vk8z2" event={"ID":"9265ae5c-7519-431b-8cb4-585305fab03d","Type":"ContainerStarted","Data":"14e0fad4783ebfd3dc769f43916f01ede682b98616699c407713fb5d33526f12"} Feb 17 16:06:15 crc kubenswrapper[4672]: I0217 16:06:15.054393 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77c7fb56bb-n856m" event={"ID":"0debdd80-b109-4450-b08f-416bf65f3afe","Type":"ContainerStarted","Data":"b0021dd7ea812edc242460785223bf0c51016daf01a7ef7580158e27643d0061"} Feb 17 16:06:15 crc kubenswrapper[4672]: I0217 16:06:15.054774 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-77c7fb56bb-n856m" Feb 17 16:06:15 crc kubenswrapper[4672]: I0217 16:06:15.059009 4672 generic.go:334] "Generic (PLEG): container finished" podID="028c8d9b-9bd5-4cf5-9628-849e8b5aacaf" containerID="abc34a9b0dabc007afa1aef5113eca567847d8b7316bcfb7cc5bd1134a2e0951" exitCode=0 Feb 17 16:06:15 crc kubenswrapper[4672]: I0217 16:06:15.059089 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nd8vd" event={"ID":"028c8d9b-9bd5-4cf5-9628-849e8b5aacaf","Type":"ContainerDied","Data":"abc34a9b0dabc007afa1aef5113eca567847d8b7316bcfb7cc5bd1134a2e0951"} Feb 17 16:06:15 crc kubenswrapper[4672]: I0217 16:06:15.061327 4672 generic.go:334] "Generic (PLEG): container finished" podID="f07446c8-4550-461c-a53d-c1d4bd056cfd" containerID="787ee26f626e77d7d4fe8cee28a7804b45b7b92cf79e95e9a7e6338b11cbcbd0" 
exitCode=0 Feb 17 16:06:15 crc kubenswrapper[4672]: I0217 16:06:15.061371 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2tc4" event={"ID":"f07446c8-4550-461c-a53d-c1d4bd056cfd","Type":"ContainerDied","Data":"787ee26f626e77d7d4fe8cee28a7804b45b7b92cf79e95e9a7e6338b11cbcbd0"} Feb 17 16:06:15 crc kubenswrapper[4672]: I0217 16:06:15.063531 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-vk8z2" event={"ID":"9265ae5c-7519-431b-8cb4-585305fab03d","Type":"ContainerStarted","Data":"c03c0e8470c28f16055fa3a95177c52daaf225362736e0729bd9689d1720f9a3"} Feb 17 16:06:15 crc kubenswrapper[4672]: I0217 16:06:15.063752 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-vk8z2" Feb 17 16:06:15 crc kubenswrapper[4672]: I0217 16:06:15.065248 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hqdz9" event={"ID":"712be02c-2ccc-4989-aecb-653745bacb0d","Type":"ContainerStarted","Data":"262c37099362a949163d7add61e44e5919298c0e6556fbb5ada5edbd2707ee2f"} Feb 17 16:06:15 crc kubenswrapper[4672]: I0217 16:06:15.065839 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-77c7fb56bb-n856m" Feb 17 16:06:15 crc kubenswrapper[4672]: I0217 16:06:15.070570 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-vk8z2" Feb 17 16:06:15 crc kubenswrapper[4672]: I0217 16:06:15.073200 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-77c7fb56bb-n856m" podStartSLOduration=19.073184607 podStartE2EDuration="19.073184607s" podCreationTimestamp="2026-02-17 16:05:56 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:06:15.072311884 +0000 UTC m=+183.826400626" watchObservedRunningTime="2026-02-17 16:06:15.073184607 +0000 UTC m=+183.827273329" Feb 17 16:06:15 crc kubenswrapper[4672]: I0217 16:06:15.116428 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-vk8z2" podStartSLOduration=19.11641211 podStartE2EDuration="19.11641211s" podCreationTimestamp="2026-02-17 16:05:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:06:15.094442679 +0000 UTC m=+183.848531441" watchObservedRunningTime="2026-02-17 16:06:15.11641211 +0000 UTC m=+183.870500842" Feb 17 16:06:16 crc kubenswrapper[4672]: I0217 16:06:16.083230 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hqdz9" event={"ID":"712be02c-2ccc-4989-aecb-653745bacb0d","Type":"ContainerStarted","Data":"21bc2dbcbfa9b224afa61d253d722f5928ae4989f59838d8a6176928d42bb87f"} Feb 17 16:06:16 crc kubenswrapper[4672]: I0217 16:06:16.108871 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-hqdz9" podStartSLOduration=159.108855929 podStartE2EDuration="2m39.108855929s" podCreationTimestamp="2026-02-17 16:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:06:16.106501927 +0000 UTC m=+184.860590659" watchObservedRunningTime="2026-02-17 16:06:16.108855929 +0000 UTC m=+184.862944661" Feb 17 16:06:16 crc kubenswrapper[4672]: I0217 16:06:16.190560 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77c7fb56bb-n856m"] Feb 17 16:06:16 crc 
kubenswrapper[4672]: I0217 16:06:16.369743 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f86b664bc-vk8z2"] Feb 17 16:06:18 crc kubenswrapper[4672]: I0217 16:06:18.098127 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-vk8z2" podUID="9265ae5c-7519-431b-8cb4-585305fab03d" containerName="route-controller-manager" containerID="cri-o://c03c0e8470c28f16055fa3a95177c52daaf225362736e0729bd9689d1720f9a3" gracePeriod=30 Feb 17 16:06:18 crc kubenswrapper[4672]: I0217 16:06:18.098627 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2mj5d" event={"ID":"2217a413-541b-46bc-9563-b382fb9f090d","Type":"ContainerStarted","Data":"1b1aedafd28439d4179113e7fd8db740f8916aa7ff207e8430cfd8bb650811e8"} Feb 17 16:06:18 crc kubenswrapper[4672]: I0217 16:06:18.098792 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-77c7fb56bb-n856m" podUID="0debdd80-b109-4450-b08f-416bf65f3afe" containerName="controller-manager" containerID="cri-o://b0021dd7ea812edc242460785223bf0c51016daf01a7ef7580158e27643d0061" gracePeriod=30 Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.104948 4672 generic.go:334] "Generic (PLEG): container finished" podID="0debdd80-b109-4450-b08f-416bf65f3afe" containerID="b0021dd7ea812edc242460785223bf0c51016daf01a7ef7580158e27643d0061" exitCode=0 Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.105225 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77c7fb56bb-n856m" event={"ID":"0debdd80-b109-4450-b08f-416bf65f3afe","Type":"ContainerDied","Data":"b0021dd7ea812edc242460785223bf0c51016daf01a7ef7580158e27643d0061"} Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.106804 4672 generic.go:334] "Generic 
(PLEG): container finished" podID="9265ae5c-7519-431b-8cb4-585305fab03d" containerID="c03c0e8470c28f16055fa3a95177c52daaf225362736e0729bd9689d1720f9a3" exitCode=0 Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.107741 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-vk8z2" event={"ID":"9265ae5c-7519-431b-8cb4-585305fab03d","Type":"ContainerDied","Data":"c03c0e8470c28f16055fa3a95177c52daaf225362736e0729bd9689d1720f9a3"} Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.216498 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-vk8z2" Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.260362 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2mj5d" podStartSLOduration=5.278457779 podStartE2EDuration="40.260345s" podCreationTimestamp="2026-02-17 16:05:39 +0000 UTC" firstStartedPulling="2026-02-17 16:05:42.496930084 +0000 UTC m=+151.251018816" lastFinishedPulling="2026-02-17 16:06:17.478817305 +0000 UTC m=+186.232906037" observedRunningTime="2026-02-17 16:06:19.13018673 +0000 UTC m=+187.884275472" watchObservedRunningTime="2026-02-17 16:06:19.260345 +0000 UTC m=+188.014433732" Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.265855 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b47dbb559-gtn5n"] Feb 17 16:06:19 crc kubenswrapper[4672]: E0217 16:06:19.266103 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9265ae5c-7519-431b-8cb4-585305fab03d" containerName="route-controller-manager" Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.266118 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="9265ae5c-7519-431b-8cb4-585305fab03d" containerName="route-controller-manager" Feb 17 16:06:19 crc 
kubenswrapper[4672]: I0217 16:06:19.266328 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="9265ae5c-7519-431b-8cb4-585305fab03d" containerName="route-controller-manager" Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.266810 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b47dbb559-gtn5n" Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.268947 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b47dbb559-gtn5n"] Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.360657 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9265ae5c-7519-431b-8cb4-585305fab03d-config\") pod \"9265ae5c-7519-431b-8cb4-585305fab03d\" (UID: \"9265ae5c-7519-431b-8cb4-585305fab03d\") " Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.360692 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9265ae5c-7519-431b-8cb4-585305fab03d-serving-cert\") pod \"9265ae5c-7519-431b-8cb4-585305fab03d\" (UID: \"9265ae5c-7519-431b-8cb4-585305fab03d\") " Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.360741 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9265ae5c-7519-431b-8cb4-585305fab03d-client-ca\") pod \"9265ae5c-7519-431b-8cb4-585305fab03d\" (UID: \"9265ae5c-7519-431b-8cb4-585305fab03d\") " Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.360773 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-999vl\" (UniqueName: \"kubernetes.io/projected/9265ae5c-7519-431b-8cb4-585305fab03d-kube-api-access-999vl\") pod \"9265ae5c-7519-431b-8cb4-585305fab03d\" (UID: 
\"9265ae5c-7519-431b-8cb4-585305fab03d\") " Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.362251 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9265ae5c-7519-431b-8cb4-585305fab03d-client-ca" (OuterVolumeSpecName: "client-ca") pod "9265ae5c-7519-431b-8cb4-585305fab03d" (UID: "9265ae5c-7519-431b-8cb4-585305fab03d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.362637 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9265ae5c-7519-431b-8cb4-585305fab03d-config" (OuterVolumeSpecName: "config") pod "9265ae5c-7519-431b-8cb4-585305fab03d" (UID: "9265ae5c-7519-431b-8cb4-585305fab03d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.366288 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9265ae5c-7519-431b-8cb4-585305fab03d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9265ae5c-7519-431b-8cb4-585305fab03d" (UID: "9265ae5c-7519-431b-8cb4-585305fab03d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.370167 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9265ae5c-7519-431b-8cb4-585305fab03d-kube-api-access-999vl" (OuterVolumeSpecName: "kube-api-access-999vl") pod "9265ae5c-7519-431b-8cb4-585305fab03d" (UID: "9265ae5c-7519-431b-8cb4-585305fab03d"). InnerVolumeSpecName "kube-api-access-999vl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.436828 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77c7fb56bb-n856m" Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.464090 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/941d737b-74a1-4c73-9b3f-9cd0f37cc501-config\") pod \"route-controller-manager-b47dbb559-gtn5n\" (UID: \"941d737b-74a1-4c73-9b3f-9cd0f37cc501\") " pod="openshift-route-controller-manager/route-controller-manager-b47dbb559-gtn5n" Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.464146 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/941d737b-74a1-4c73-9b3f-9cd0f37cc501-serving-cert\") pod \"route-controller-manager-b47dbb559-gtn5n\" (UID: \"941d737b-74a1-4c73-9b3f-9cd0f37cc501\") " pod="openshift-route-controller-manager/route-controller-manager-b47dbb559-gtn5n" Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.464184 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/941d737b-74a1-4c73-9b3f-9cd0f37cc501-client-ca\") pod \"route-controller-manager-b47dbb559-gtn5n\" (UID: \"941d737b-74a1-4c73-9b3f-9cd0f37cc501\") " pod="openshift-route-controller-manager/route-controller-manager-b47dbb559-gtn5n" Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.464234 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggs9n\" (UniqueName: \"kubernetes.io/projected/941d737b-74a1-4c73-9b3f-9cd0f37cc501-kube-api-access-ggs9n\") pod \"route-controller-manager-b47dbb559-gtn5n\" (UID: \"941d737b-74a1-4c73-9b3f-9cd0f37cc501\") " pod="openshift-route-controller-manager/route-controller-manager-b47dbb559-gtn5n" Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.464275 4672 reconciler_common.go:293] 
"Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9265ae5c-7519-431b-8cb4-585305fab03d-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.464285 4672 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9265ae5c-7519-431b-8cb4-585305fab03d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.464296 4672 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9265ae5c-7519-431b-8cb4-585305fab03d-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.464306 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-999vl\" (UniqueName: \"kubernetes.io/projected/9265ae5c-7519-431b-8cb4-585305fab03d-kube-api-access-999vl\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.565261 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qc7pw\" (UniqueName: \"kubernetes.io/projected/0debdd80-b109-4450-b08f-416bf65f3afe-kube-api-access-qc7pw\") pod \"0debdd80-b109-4450-b08f-416bf65f3afe\" (UID: \"0debdd80-b109-4450-b08f-416bf65f3afe\") " Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.565366 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0debdd80-b109-4450-b08f-416bf65f3afe-proxy-ca-bundles\") pod \"0debdd80-b109-4450-b08f-416bf65f3afe\" (UID: \"0debdd80-b109-4450-b08f-416bf65f3afe\") " Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.566016 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0debdd80-b109-4450-b08f-416bf65f3afe-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0debdd80-b109-4450-b08f-416bf65f3afe" (UID: 
"0debdd80-b109-4450-b08f-416bf65f3afe"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.565398 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0debdd80-b109-4450-b08f-416bf65f3afe-config\") pod \"0debdd80-b109-4450-b08f-416bf65f3afe\" (UID: \"0debdd80-b109-4450-b08f-416bf65f3afe\") " Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.566114 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0debdd80-b109-4450-b08f-416bf65f3afe-client-ca\") pod \"0debdd80-b109-4450-b08f-416bf65f3afe\" (UID: \"0debdd80-b109-4450-b08f-416bf65f3afe\") " Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.566179 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0debdd80-b109-4450-b08f-416bf65f3afe-config" (OuterVolumeSpecName: "config") pod "0debdd80-b109-4450-b08f-416bf65f3afe" (UID: "0debdd80-b109-4450-b08f-416bf65f3afe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.566595 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0debdd80-b109-4450-b08f-416bf65f3afe-client-ca" (OuterVolumeSpecName: "client-ca") pod "0debdd80-b109-4450-b08f-416bf65f3afe" (UID: "0debdd80-b109-4450-b08f-416bf65f3afe"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.566644 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0debdd80-b109-4450-b08f-416bf65f3afe-serving-cert\") pod \"0debdd80-b109-4450-b08f-416bf65f3afe\" (UID: \"0debdd80-b109-4450-b08f-416bf65f3afe\") " Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.566784 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/941d737b-74a1-4c73-9b3f-9cd0f37cc501-config\") pod \"route-controller-manager-b47dbb559-gtn5n\" (UID: \"941d737b-74a1-4c73-9b3f-9cd0f37cc501\") " pod="openshift-route-controller-manager/route-controller-manager-b47dbb559-gtn5n" Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.566820 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/941d737b-74a1-4c73-9b3f-9cd0f37cc501-serving-cert\") pod \"route-controller-manager-b47dbb559-gtn5n\" (UID: \"941d737b-74a1-4c73-9b3f-9cd0f37cc501\") " pod="openshift-route-controller-manager/route-controller-manager-b47dbb559-gtn5n" Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.566861 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/941d737b-74a1-4c73-9b3f-9cd0f37cc501-client-ca\") pod \"route-controller-manager-b47dbb559-gtn5n\" (UID: \"941d737b-74a1-4c73-9b3f-9cd0f37cc501\") " pod="openshift-route-controller-manager/route-controller-manager-b47dbb559-gtn5n" Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.566908 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggs9n\" (UniqueName: \"kubernetes.io/projected/941d737b-74a1-4c73-9b3f-9cd0f37cc501-kube-api-access-ggs9n\") pod 
\"route-controller-manager-b47dbb559-gtn5n\" (UID: \"941d737b-74a1-4c73-9b3f-9cd0f37cc501\") " pod="openshift-route-controller-manager/route-controller-manager-b47dbb559-gtn5n" Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.566956 4672 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0debdd80-b109-4450-b08f-416bf65f3afe-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.566970 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0debdd80-b109-4450-b08f-416bf65f3afe-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.566984 4672 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0debdd80-b109-4450-b08f-416bf65f3afe-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.568537 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/941d737b-74a1-4c73-9b3f-9cd0f37cc501-client-ca\") pod \"route-controller-manager-b47dbb559-gtn5n\" (UID: \"941d737b-74a1-4c73-9b3f-9cd0f37cc501\") " pod="openshift-route-controller-manager/route-controller-manager-b47dbb559-gtn5n" Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.568815 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/941d737b-74a1-4c73-9b3f-9cd0f37cc501-config\") pod \"route-controller-manager-b47dbb559-gtn5n\" (UID: \"941d737b-74a1-4c73-9b3f-9cd0f37cc501\") " pod="openshift-route-controller-manager/route-controller-manager-b47dbb559-gtn5n" Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.569761 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/0debdd80-b109-4450-b08f-416bf65f3afe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0debdd80-b109-4450-b08f-416bf65f3afe" (UID: "0debdd80-b109-4450-b08f-416bf65f3afe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.570122 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0debdd80-b109-4450-b08f-416bf65f3afe-kube-api-access-qc7pw" (OuterVolumeSpecName: "kube-api-access-qc7pw") pod "0debdd80-b109-4450-b08f-416bf65f3afe" (UID: "0debdd80-b109-4450-b08f-416bf65f3afe"). InnerVolumeSpecName "kube-api-access-qc7pw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.572479 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/941d737b-74a1-4c73-9b3f-9cd0f37cc501-serving-cert\") pod \"route-controller-manager-b47dbb559-gtn5n\" (UID: \"941d737b-74a1-4c73-9b3f-9cd0f37cc501\") " pod="openshift-route-controller-manager/route-controller-manager-b47dbb559-gtn5n" Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.588396 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggs9n\" (UniqueName: \"kubernetes.io/projected/941d737b-74a1-4c73-9b3f-9cd0f37cc501-kube-api-access-ggs9n\") pod \"route-controller-manager-b47dbb559-gtn5n\" (UID: \"941d737b-74a1-4c73-9b3f-9cd0f37cc501\") " pod="openshift-route-controller-manager/route-controller-manager-b47dbb559-gtn5n" Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.596958 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b47dbb559-gtn5n" Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.629661 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2mj5d" Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.629702 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2mj5d" Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.667300 4672 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0debdd80-b109-4450-b08f-416bf65f3afe-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:19 crc kubenswrapper[4672]: I0217 16:06:19.667335 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qc7pw\" (UniqueName: \"kubernetes.io/projected/0debdd80-b109-4450-b08f-416bf65f3afe-kube-api-access-qc7pw\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:20 crc kubenswrapper[4672]: I0217 16:06:20.092319 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 16:06:20 crc kubenswrapper[4672]: I0217 16:06:20.116876 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-vk8z2" Feb 17 16:06:20 crc kubenswrapper[4672]: I0217 16:06:20.116879 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-vk8z2" event={"ID":"9265ae5c-7519-431b-8cb4-585305fab03d","Type":"ContainerDied","Data":"14e0fad4783ebfd3dc769f43916f01ede682b98616699c407713fb5d33526f12"} Feb 17 16:06:20 crc kubenswrapper[4672]: I0217 16:06:20.117331 4672 scope.go:117] "RemoveContainer" containerID="c03c0e8470c28f16055fa3a95177c52daaf225362736e0729bd9689d1720f9a3" Feb 17 16:06:20 crc kubenswrapper[4672]: I0217 16:06:20.119771 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l98cc" event={"ID":"505bfe60-cd7c-4bd6-981a-c14076ef5387","Type":"ContainerStarted","Data":"a9533c4cdad3c1526df3d843142d6e6c5c83b9d67292d22e2a1bab0da3bb87cd"} Feb 17 16:06:20 crc kubenswrapper[4672]: I0217 16:06:20.135755 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77c7fb56bb-n856m" Feb 17 16:06:20 crc kubenswrapper[4672]: I0217 16:06:20.136084 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77c7fb56bb-n856m" event={"ID":"0debdd80-b109-4450-b08f-416bf65f3afe","Type":"ContainerDied","Data":"050ffb8a48c48eb74b5d520c7de6bab62ecd0df8224566cb9bb37a9ec5303897"} Feb 17 16:06:20 crc kubenswrapper[4672]: I0217 16:06:20.159938 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l98cc" podStartSLOduration=3.93958651 podStartE2EDuration="40.159897713s" podCreationTimestamp="2026-02-17 16:05:40 +0000 UTC" firstStartedPulling="2026-02-17 16:05:42.486804206 +0000 UTC m=+151.240892938" lastFinishedPulling="2026-02-17 16:06:18.707115409 +0000 UTC m=+187.461204141" observedRunningTime="2026-02-17 16:06:20.146875619 +0000 UTC m=+188.900964391" watchObservedRunningTime="2026-02-17 16:06:20.159897713 +0000 UTC m=+188.913986485" Feb 17 16:06:20 crc kubenswrapper[4672]: I0217 16:06:20.171312 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f86b664bc-vk8z2"] Feb 17 16:06:20 crc kubenswrapper[4672]: I0217 16:06:20.180010 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f86b664bc-vk8z2"] Feb 17 16:06:20 crc kubenswrapper[4672]: I0217 16:06:20.182197 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77c7fb56bb-n856m"] Feb 17 16:06:20 crc kubenswrapper[4672]: I0217 16:06:20.184402 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-77c7fb56bb-n856m"] Feb 17 16:06:20 crc kubenswrapper[4672]: I0217 16:06:20.618369 4672 scope.go:117] "RemoveContainer" 
containerID="b0021dd7ea812edc242460785223bf0c51016daf01a7ef7580158e27643d0061" Feb 17 16:06:21 crc kubenswrapper[4672]: I0217 16:06:21.020126 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b47dbb559-gtn5n"] Feb 17 16:06:21 crc kubenswrapper[4672]: W0217 16:06:21.034134 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod941d737b_74a1_4c73_9b3f_9cd0f37cc501.slice/crio-41360c64d5e6ef1799691f490e28b2b7459bfd86b7fe9e7a9529d7788fe15db9 WatchSource:0}: Error finding container 41360c64d5e6ef1799691f490e28b2b7459bfd86b7fe9e7a9529d7788fe15db9: Status 404 returned error can't find the container with id 41360c64d5e6ef1799691f490e28b2b7459bfd86b7fe9e7a9529d7788fe15db9 Feb 17 16:06:21 crc kubenswrapper[4672]: I0217 16:06:21.151598 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wvksq" event={"ID":"708084b0-bae5-4cfc-ab45-cc5ca619f849","Type":"ContainerStarted","Data":"2d406caf07c8a3823ad5a2c63720af4c6685e036cb907f90d73608c637fcc924"} Feb 17 16:06:21 crc kubenswrapper[4672]: I0217 16:06:21.160911 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4qrd" event={"ID":"fd92fc97-4e60-481b-8d9f-91642c614e48","Type":"ContainerStarted","Data":"ccbe95bd6e9d3d7bb93704f38f18b3ad5553ef92dce4c48cf8c355eb2ace721c"} Feb 17 16:06:21 crc kubenswrapper[4672]: I0217 16:06:21.169428 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nd8vd" event={"ID":"028c8d9b-9bd5-4cf5-9628-849e8b5aacaf","Type":"ContainerStarted","Data":"6bc7ef43a67eab305d8e0ff78868a68557dbcc5610a7c4c68bed609473ee87a5"} Feb 17 16:06:21 crc kubenswrapper[4672]: I0217 16:06:21.171907 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2tc4" 
event={"ID":"f07446c8-4550-461c-a53d-c1d4bd056cfd","Type":"ContainerStarted","Data":"adf9ab48c7aac5a45eae3893686dd50b38eddb95f81ccf50d8ba0047ef736cae"} Feb 17 16:06:21 crc kubenswrapper[4672]: I0217 16:06:21.173116 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b47dbb559-gtn5n" event={"ID":"941d737b-74a1-4c73-9b3f-9cd0f37cc501","Type":"ContainerStarted","Data":"41360c64d5e6ef1799691f490e28b2b7459bfd86b7fe9e7a9529d7788fe15db9"} Feb 17 16:06:21 crc kubenswrapper[4672]: I0217 16:06:21.183911 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wvksq" podStartSLOduration=3.640843617 podStartE2EDuration="43.183898326s" podCreationTimestamp="2026-02-17 16:05:38 +0000 UTC" firstStartedPulling="2026-02-17 16:05:41.349771095 +0000 UTC m=+150.103859827" lastFinishedPulling="2026-02-17 16:06:20.892825794 +0000 UTC m=+189.646914536" observedRunningTime="2026-02-17 16:06:21.180474476 +0000 UTC m=+189.934563208" watchObservedRunningTime="2026-02-17 16:06:21.183898326 +0000 UTC m=+189.937987058" Feb 17 16:06:21 crc kubenswrapper[4672]: I0217 16:06:21.196483 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-w2tc4" podStartSLOduration=2.9144141059999997 podStartE2EDuration="39.196467159s" podCreationTimestamp="2026-02-17 16:05:42 +0000 UTC" firstStartedPulling="2026-02-17 16:05:44.6046987 +0000 UTC m=+153.358787432" lastFinishedPulling="2026-02-17 16:06:20.886751753 +0000 UTC m=+189.640840485" observedRunningTime="2026-02-17 16:06:21.196432978 +0000 UTC m=+189.950521720" watchObservedRunningTime="2026-02-17 16:06:21.196467159 +0000 UTC m=+189.950555891" Feb 17 16:06:21 crc kubenswrapper[4672]: I0217 16:06:21.221274 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d4qrd" podStartSLOduration=2.97919937 
podStartE2EDuration="42.221255434s" podCreationTimestamp="2026-02-17 16:05:39 +0000 UTC" firstStartedPulling="2026-02-17 16:05:41.376719497 +0000 UTC m=+150.130808229" lastFinishedPulling="2026-02-17 16:06:20.618775541 +0000 UTC m=+189.372864293" observedRunningTime="2026-02-17 16:06:21.217821813 +0000 UTC m=+189.971910545" watchObservedRunningTime="2026-02-17 16:06:21.221255434 +0000 UTC m=+189.975344166" Feb 17 16:06:21 crc kubenswrapper[4672]: I0217 16:06:21.227753 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l98cc" Feb 17 16:06:21 crc kubenswrapper[4672]: I0217 16:06:21.227798 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l98cc" Feb 17 16:06:21 crc kubenswrapper[4672]: I0217 16:06:21.241189 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nd8vd" podStartSLOduration=2.964532582 podStartE2EDuration="40.24117214s" podCreationTimestamp="2026-02-17 16:05:41 +0000 UTC" firstStartedPulling="2026-02-17 16:05:43.617266864 +0000 UTC m=+152.371355606" lastFinishedPulling="2026-02-17 16:06:20.893906412 +0000 UTC m=+189.647995164" observedRunningTime="2026-02-17 16:06:21.239970428 +0000 UTC m=+189.994059160" watchObservedRunningTime="2026-02-17 16:06:21.24117214 +0000 UTC m=+189.995260872" Feb 17 16:06:21 crc kubenswrapper[4672]: I0217 16:06:21.254613 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-2mj5d" podUID="2217a413-541b-46bc-9563-b382fb9f090d" containerName="registry-server" probeResult="failure" output=< Feb 17 16:06:21 crc kubenswrapper[4672]: timeout: failed to connect service ":50051" within 1s Feb 17 16:06:21 crc kubenswrapper[4672]: > Feb 17 16:06:21 crc kubenswrapper[4672]: I0217 16:06:21.952348 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="0debdd80-b109-4450-b08f-416bf65f3afe" path="/var/lib/kubelet/pods/0debdd80-b109-4450-b08f-416bf65f3afe/volumes" Feb 17 16:06:21 crc kubenswrapper[4672]: I0217 16:06:21.953461 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9265ae5c-7519-431b-8cb4-585305fab03d" path="/var/lib/kubelet/pods/9265ae5c-7519-431b-8cb4-585305fab03d/volumes" Feb 17 16:06:21 crc kubenswrapper[4672]: I0217 16:06:21.968364 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fds9q"] Feb 17 16:06:22 crc kubenswrapper[4672]: I0217 16:06:22.178894 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b47dbb559-gtn5n" event={"ID":"941d737b-74a1-4c73-9b3f-9cd0f37cc501","Type":"ContainerStarted","Data":"09fb292de176ad3c8335962c412d17681a73e61f5d06dca2bf2a06c76e6f8d37"} Feb 17 16:06:22 crc kubenswrapper[4672]: I0217 16:06:22.180014 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-b47dbb559-gtn5n" Feb 17 16:06:22 crc kubenswrapper[4672]: I0217 16:06:22.180988 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxnc7" event={"ID":"db7fc0eb-2899-4c37-bf2e-30d02cbffb2c","Type":"ContainerStarted","Data":"58f57cd507ac7f181e0427802d1a565f71f0d0631613d6d531b76ba76120e44a"} Feb 17 16:06:22 crc kubenswrapper[4672]: I0217 16:06:22.185452 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-b47dbb559-gtn5n" Feb 17 16:06:22 crc kubenswrapper[4672]: I0217 16:06:22.198265 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-b47dbb559-gtn5n" podStartSLOduration=6.198250275 podStartE2EDuration="6.198250275s" podCreationTimestamp="2026-02-17 16:06:16 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:06:22.196499719 +0000 UTC m=+190.950588451" watchObservedRunningTime="2026-02-17 16:06:22.198250275 +0000 UTC m=+190.952339007" Feb 17 16:06:22 crc kubenswrapper[4672]: I0217 16:06:22.222764 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vxnc7" podStartSLOduration=4.471979873 podStartE2EDuration="44.222749932s" podCreationTimestamp="2026-02-17 16:05:38 +0000 UTC" firstStartedPulling="2026-02-17 16:05:41.266879174 +0000 UTC m=+150.020967906" lastFinishedPulling="2026-02-17 16:06:21.017649233 +0000 UTC m=+189.771737965" observedRunningTime="2026-02-17 16:06:22.219781344 +0000 UTC m=+190.973870086" watchObservedRunningTime="2026-02-17 16:06:22.222749932 +0000 UTC m=+190.976838664" Feb 17 16:06:22 crc kubenswrapper[4672]: I0217 16:06:22.227928 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6dd64d99c8-bp27p"] Feb 17 16:06:22 crc kubenswrapper[4672]: E0217 16:06:22.228117 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0debdd80-b109-4450-b08f-416bf65f3afe" containerName="controller-manager" Feb 17 16:06:22 crc kubenswrapper[4672]: I0217 16:06:22.228127 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="0debdd80-b109-4450-b08f-416bf65f3afe" containerName="controller-manager" Feb 17 16:06:22 crc kubenswrapper[4672]: I0217 16:06:22.228216 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="0debdd80-b109-4450-b08f-416bf65f3afe" containerName="controller-manager" Feb 17 16:06:22 crc kubenswrapper[4672]: I0217 16:06:22.228592 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6dd64d99c8-bp27p" Feb 17 16:06:22 crc kubenswrapper[4672]: I0217 16:06:22.231708 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 16:06:22 crc kubenswrapper[4672]: I0217 16:06:22.232017 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 16:06:22 crc kubenswrapper[4672]: I0217 16:06:22.232151 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 16:06:22 crc kubenswrapper[4672]: I0217 16:06:22.232300 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 16:06:22 crc kubenswrapper[4672]: I0217 16:06:22.232531 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 16:06:22 crc kubenswrapper[4672]: I0217 16:06:22.232709 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 16:06:22 crc kubenswrapper[4672]: I0217 16:06:22.245819 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 16:06:22 crc kubenswrapper[4672]: I0217 16:06:22.267411 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-l98cc" podUID="505bfe60-cd7c-4bd6-981a-c14076ef5387" containerName="registry-server" probeResult="failure" output=< Feb 17 16:06:22 crc kubenswrapper[4672]: timeout: failed to connect service ":50051" within 1s Feb 17 16:06:22 crc kubenswrapper[4672]: > Feb 17 16:06:22 crc kubenswrapper[4672]: I0217 16:06:22.271349 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6dd64d99c8-bp27p"] 
Feb 17 16:06:22 crc kubenswrapper[4672]: I0217 16:06:22.308868 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6b539d5-5bc7-4c27-96ed-4aed91e5d774-proxy-ca-bundles\") pod \"controller-manager-6dd64d99c8-bp27p\" (UID: \"f6b539d5-5bc7-4c27-96ed-4aed91e5d774\") " pod="openshift-controller-manager/controller-manager-6dd64d99c8-bp27p" Feb 17 16:06:22 crc kubenswrapper[4672]: I0217 16:06:22.308958 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6b539d5-5bc7-4c27-96ed-4aed91e5d774-config\") pod \"controller-manager-6dd64d99c8-bp27p\" (UID: \"f6b539d5-5bc7-4c27-96ed-4aed91e5d774\") " pod="openshift-controller-manager/controller-manager-6dd64d99c8-bp27p" Feb 17 16:06:22 crc kubenswrapper[4672]: I0217 16:06:22.308992 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6b539d5-5bc7-4c27-96ed-4aed91e5d774-client-ca\") pod \"controller-manager-6dd64d99c8-bp27p\" (UID: \"f6b539d5-5bc7-4c27-96ed-4aed91e5d774\") " pod="openshift-controller-manager/controller-manager-6dd64d99c8-bp27p" Feb 17 16:06:22 crc kubenswrapper[4672]: I0217 16:06:22.309015 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6b539d5-5bc7-4c27-96ed-4aed91e5d774-serving-cert\") pod \"controller-manager-6dd64d99c8-bp27p\" (UID: \"f6b539d5-5bc7-4c27-96ed-4aed91e5d774\") " pod="openshift-controller-manager/controller-manager-6dd64d99c8-bp27p" Feb 17 16:06:22 crc kubenswrapper[4672]: I0217 16:06:22.309039 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6vpp\" (UniqueName: 
\"kubernetes.io/projected/f6b539d5-5bc7-4c27-96ed-4aed91e5d774-kube-api-access-z6vpp\") pod \"controller-manager-6dd64d99c8-bp27p\" (UID: \"f6b539d5-5bc7-4c27-96ed-4aed91e5d774\") " pod="openshift-controller-manager/controller-manager-6dd64d99c8-bp27p" Feb 17 16:06:22 crc kubenswrapper[4672]: I0217 16:06:22.325496 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nd8vd" Feb 17 16:06:22 crc kubenswrapper[4672]: I0217 16:06:22.325550 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nd8vd" Feb 17 16:06:22 crc kubenswrapper[4672]: I0217 16:06:22.409821 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6b539d5-5bc7-4c27-96ed-4aed91e5d774-config\") pod \"controller-manager-6dd64d99c8-bp27p\" (UID: \"f6b539d5-5bc7-4c27-96ed-4aed91e5d774\") " pod="openshift-controller-manager/controller-manager-6dd64d99c8-bp27p" Feb 17 16:06:22 crc kubenswrapper[4672]: I0217 16:06:22.409877 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6b539d5-5bc7-4c27-96ed-4aed91e5d774-client-ca\") pod \"controller-manager-6dd64d99c8-bp27p\" (UID: \"f6b539d5-5bc7-4c27-96ed-4aed91e5d774\") " pod="openshift-controller-manager/controller-manager-6dd64d99c8-bp27p" Feb 17 16:06:22 crc kubenswrapper[4672]: I0217 16:06:22.409900 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6b539d5-5bc7-4c27-96ed-4aed91e5d774-serving-cert\") pod \"controller-manager-6dd64d99c8-bp27p\" (UID: \"f6b539d5-5bc7-4c27-96ed-4aed91e5d774\") " pod="openshift-controller-manager/controller-manager-6dd64d99c8-bp27p" Feb 17 16:06:22 crc kubenswrapper[4672]: I0217 16:06:22.409924 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-z6vpp\" (UniqueName: \"kubernetes.io/projected/f6b539d5-5bc7-4c27-96ed-4aed91e5d774-kube-api-access-z6vpp\") pod \"controller-manager-6dd64d99c8-bp27p\" (UID: \"f6b539d5-5bc7-4c27-96ed-4aed91e5d774\") " pod="openshift-controller-manager/controller-manager-6dd64d99c8-bp27p" Feb 17 16:06:22 crc kubenswrapper[4672]: I0217 16:06:22.409996 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6b539d5-5bc7-4c27-96ed-4aed91e5d774-proxy-ca-bundles\") pod \"controller-manager-6dd64d99c8-bp27p\" (UID: \"f6b539d5-5bc7-4c27-96ed-4aed91e5d774\") " pod="openshift-controller-manager/controller-manager-6dd64d99c8-bp27p" Feb 17 16:06:22 crc kubenswrapper[4672]: I0217 16:06:22.410831 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6b539d5-5bc7-4c27-96ed-4aed91e5d774-client-ca\") pod \"controller-manager-6dd64d99c8-bp27p\" (UID: \"f6b539d5-5bc7-4c27-96ed-4aed91e5d774\") " pod="openshift-controller-manager/controller-manager-6dd64d99c8-bp27p" Feb 17 16:06:22 crc kubenswrapper[4672]: I0217 16:06:22.411010 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6b539d5-5bc7-4c27-96ed-4aed91e5d774-config\") pod \"controller-manager-6dd64d99c8-bp27p\" (UID: \"f6b539d5-5bc7-4c27-96ed-4aed91e5d774\") " pod="openshift-controller-manager/controller-manager-6dd64d99c8-bp27p" Feb 17 16:06:22 crc kubenswrapper[4672]: I0217 16:06:22.411321 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6b539d5-5bc7-4c27-96ed-4aed91e5d774-proxy-ca-bundles\") pod \"controller-manager-6dd64d99c8-bp27p\" (UID: \"f6b539d5-5bc7-4c27-96ed-4aed91e5d774\") " pod="openshift-controller-manager/controller-manager-6dd64d99c8-bp27p" Feb 17 16:06:22 crc kubenswrapper[4672]: I0217 
16:06:22.415444 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6b539d5-5bc7-4c27-96ed-4aed91e5d774-serving-cert\") pod \"controller-manager-6dd64d99c8-bp27p\" (UID: \"f6b539d5-5bc7-4c27-96ed-4aed91e5d774\") " pod="openshift-controller-manager/controller-manager-6dd64d99c8-bp27p" Feb 17 16:06:22 crc kubenswrapper[4672]: I0217 16:06:22.428729 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6vpp\" (UniqueName: \"kubernetes.io/projected/f6b539d5-5bc7-4c27-96ed-4aed91e5d774-kube-api-access-z6vpp\") pod \"controller-manager-6dd64d99c8-bp27p\" (UID: \"f6b539d5-5bc7-4c27-96ed-4aed91e5d774\") " pod="openshift-controller-manager/controller-manager-6dd64d99c8-bp27p" Feb 17 16:06:22 crc kubenswrapper[4672]: I0217 16:06:22.549963 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6dd64d99c8-bp27p" Feb 17 16:06:22 crc kubenswrapper[4672]: I0217 16:06:22.709878 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-w2tc4" Feb 17 16:06:22 crc kubenswrapper[4672]: I0217 16:06:22.709922 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-w2tc4" Feb 17 16:06:22 crc kubenswrapper[4672]: I0217 16:06:22.781746 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6dd64d99c8-bp27p"] Feb 17 16:06:22 crc kubenswrapper[4672]: W0217 16:06:22.787645 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6b539d5_5bc7_4c27_96ed_4aed91e5d774.slice/crio-eb91da4dff21aef87af4dd0e12327ff2956423dd8ee08b1403c6f15c73468bfe WatchSource:0}: Error finding container eb91da4dff21aef87af4dd0e12327ff2956423dd8ee08b1403c6f15c73468bfe: Status 404 returned 
error can't find the container with id eb91da4dff21aef87af4dd0e12327ff2956423dd8ee08b1403c6f15c73468bfe Feb 17 16:06:23 crc kubenswrapper[4672]: I0217 16:06:23.187777 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dd64d99c8-bp27p" event={"ID":"f6b539d5-5bc7-4c27-96ed-4aed91e5d774","Type":"ContainerStarted","Data":"eb91da4dff21aef87af4dd0e12327ff2956423dd8ee08b1403c6f15c73468bfe"} Feb 17 16:06:23 crc kubenswrapper[4672]: I0217 16:06:23.327142 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 17 16:06:23 crc kubenswrapper[4672]: I0217 16:06:23.327765 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 16:06:23 crc kubenswrapper[4672]: I0217 16:06:23.329788 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 17 16:06:23 crc kubenswrapper[4672]: I0217 16:06:23.330046 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 17 16:06:23 crc kubenswrapper[4672]: I0217 16:06:23.336317 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 17 16:06:23 crc kubenswrapper[4672]: I0217 16:06:23.373838 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nd8vd" podUID="028c8d9b-9bd5-4cf5-9628-849e8b5aacaf" containerName="registry-server" probeResult="failure" output=< Feb 17 16:06:23 crc kubenswrapper[4672]: timeout: failed to connect service ":50051" within 1s Feb 17 16:06:23 crc kubenswrapper[4672]: > Feb 17 16:06:23 crc kubenswrapper[4672]: I0217 16:06:23.419005 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/8318140f-d0de-4b99-b450-cc271fed4d84-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8318140f-d0de-4b99-b450-cc271fed4d84\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 16:06:23 crc kubenswrapper[4672]: I0217 16:06:23.419128 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8318140f-d0de-4b99-b450-cc271fed4d84-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8318140f-d0de-4b99-b450-cc271fed4d84\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 16:06:23 crc kubenswrapper[4672]: I0217 16:06:23.519980 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8318140f-d0de-4b99-b450-cc271fed4d84-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8318140f-d0de-4b99-b450-cc271fed4d84\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 16:06:23 crc kubenswrapper[4672]: I0217 16:06:23.520067 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8318140f-d0de-4b99-b450-cc271fed4d84-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8318140f-d0de-4b99-b450-cc271fed4d84\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 16:06:23 crc kubenswrapper[4672]: I0217 16:06:23.520190 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8318140f-d0de-4b99-b450-cc271fed4d84-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8318140f-d0de-4b99-b450-cc271fed4d84\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 16:06:23 crc kubenswrapper[4672]: I0217 16:06:23.551373 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/8318140f-d0de-4b99-b450-cc271fed4d84-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8318140f-d0de-4b99-b450-cc271fed4d84\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 16:06:23 crc kubenswrapper[4672]: I0217 16:06:23.640869 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 16:06:23 crc kubenswrapper[4672]: I0217 16:06:23.761648 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w2tc4" podUID="f07446c8-4550-461c-a53d-c1d4bd056cfd" containerName="registry-server" probeResult="failure" output=< Feb 17 16:06:23 crc kubenswrapper[4672]: timeout: failed to connect service ":50051" within 1s Feb 17 16:06:23 crc kubenswrapper[4672]: > Feb 17 16:06:23 crc kubenswrapper[4672]: I0217 16:06:23.947907 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 17 16:06:23 crc kubenswrapper[4672]: W0217 16:06:23.955071 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod8318140f_d0de_4b99_b450_cc271fed4d84.slice/crio-b335360d68c5b6a1b2adc41842c9982f7de556c7590fcd2f1fc24e184af55878 WatchSource:0}: Error finding container b335360d68c5b6a1b2adc41842c9982f7de556c7590fcd2f1fc24e184af55878: Status 404 returned error can't find the container with id b335360d68c5b6a1b2adc41842c9982f7de556c7590fcd2f1fc24e184af55878 Feb 17 16:06:24 crc kubenswrapper[4672]: I0217 16:06:24.201630 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dd64d99c8-bp27p" event={"ID":"f6b539d5-5bc7-4c27-96ed-4aed91e5d774","Type":"ContainerStarted","Data":"fcfd8e29ecfe03dcfa3c4f2cc8e0226bf63fe6553d40e0544a2af8035794375a"} Feb 17 16:06:24 crc kubenswrapper[4672]: I0217 16:06:24.202057 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-6dd64d99c8-bp27p" Feb 17 16:06:24 crc kubenswrapper[4672]: I0217 16:06:24.203158 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8318140f-d0de-4b99-b450-cc271fed4d84","Type":"ContainerStarted","Data":"b335360d68c5b6a1b2adc41842c9982f7de556c7590fcd2f1fc24e184af55878"} Feb 17 16:06:24 crc kubenswrapper[4672]: I0217 16:06:24.217147 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6dd64d99c8-bp27p" Feb 17 16:06:24 crc kubenswrapper[4672]: I0217 16:06:24.255068 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6dd64d99c8-bp27p" podStartSLOduration=8.255047244 podStartE2EDuration="8.255047244s" podCreationTimestamp="2026-02-17 16:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:06:24.220191042 +0000 UTC m=+192.974279764" watchObservedRunningTime="2026-02-17 16:06:24.255047244 +0000 UTC m=+193.009135976" Feb 17 16:06:25 crc kubenswrapper[4672]: I0217 16:06:25.210844 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8318140f-d0de-4b99-b450-cc271fed4d84","Type":"ContainerStarted","Data":"b226a8c2e3c3366dcb84915bc770309bff9885e9b757c2b74bf0f8389e63c2e5"} Feb 17 16:06:25 crc kubenswrapper[4672]: I0217 16:06:25.223763 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.223741756 podStartE2EDuration="2.223741756s" podCreationTimestamp="2026-02-17 16:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:06:25.222330768 +0000 UTC 
m=+193.976419500" watchObservedRunningTime="2026-02-17 16:06:25.223741756 +0000 UTC m=+193.977830488" Feb 17 16:06:26 crc kubenswrapper[4672]: I0217 16:06:26.227881 4672 generic.go:334] "Generic (PLEG): container finished" podID="8318140f-d0de-4b99-b450-cc271fed4d84" containerID="b226a8c2e3c3366dcb84915bc770309bff9885e9b757c2b74bf0f8389e63c2e5" exitCode=0 Feb 17 16:06:26 crc kubenswrapper[4672]: I0217 16:06:26.227991 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8318140f-d0de-4b99-b450-cc271fed4d84","Type":"ContainerDied","Data":"b226a8c2e3c3366dcb84915bc770309bff9885e9b757c2b74bf0f8389e63c2e5"} Feb 17 16:06:27 crc kubenswrapper[4672]: I0217 16:06:27.533915 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 16:06:27 crc kubenswrapper[4672]: I0217 16:06:27.569475 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:06:27 crc kubenswrapper[4672]: I0217 16:06:27.569545 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:06:27 crc kubenswrapper[4672]: I0217 16:06:27.682453 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8318140f-d0de-4b99-b450-cc271fed4d84-kubelet-dir\") pod \"8318140f-d0de-4b99-b450-cc271fed4d84\" (UID: \"8318140f-d0de-4b99-b450-cc271fed4d84\") " Feb 17 16:06:27 crc 
kubenswrapper[4672]: I0217 16:06:27.682576 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8318140f-d0de-4b99-b450-cc271fed4d84-kube-api-access\") pod \"8318140f-d0de-4b99-b450-cc271fed4d84\" (UID: \"8318140f-d0de-4b99-b450-cc271fed4d84\") " Feb 17 16:06:27 crc kubenswrapper[4672]: I0217 16:06:27.682573 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8318140f-d0de-4b99-b450-cc271fed4d84-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8318140f-d0de-4b99-b450-cc271fed4d84" (UID: "8318140f-d0de-4b99-b450-cc271fed4d84"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:06:27 crc kubenswrapper[4672]: I0217 16:06:27.687567 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8318140f-d0de-4b99-b450-cc271fed4d84-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8318140f-d0de-4b99-b450-cc271fed4d84" (UID: "8318140f-d0de-4b99-b450-cc271fed4d84"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:06:27 crc kubenswrapper[4672]: I0217 16:06:27.783674 4672 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8318140f-d0de-4b99-b450-cc271fed4d84-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:27 crc kubenswrapper[4672]: I0217 16:06:27.783715 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8318140f-d0de-4b99-b450-cc271fed4d84-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:28 crc kubenswrapper[4672]: I0217 16:06:28.243399 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8318140f-d0de-4b99-b450-cc271fed4d84","Type":"ContainerDied","Data":"b335360d68c5b6a1b2adc41842c9982f7de556c7590fcd2f1fc24e184af55878"} Feb 17 16:06:28 crc kubenswrapper[4672]: I0217 16:06:28.243724 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b335360d68c5b6a1b2adc41842c9982f7de556c7590fcd2f1fc24e184af55878" Feb 17 16:06:28 crc kubenswrapper[4672]: I0217 16:06:28.243504 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 16:06:29 crc kubenswrapper[4672]: I0217 16:06:29.178332 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vxnc7" Feb 17 16:06:29 crc kubenswrapper[4672]: I0217 16:06:29.178724 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vxnc7" Feb 17 16:06:29 crc kubenswrapper[4672]: I0217 16:06:29.247443 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vxnc7" Feb 17 16:06:29 crc kubenswrapper[4672]: I0217 16:06:29.284910 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vxnc7" Feb 17 16:06:29 crc kubenswrapper[4672]: I0217 16:06:29.658820 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wvksq" Feb 17 16:06:29 crc kubenswrapper[4672]: I0217 16:06:29.658925 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wvksq" Feb 17 16:06:29 crc kubenswrapper[4672]: I0217 16:06:29.672120 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2mj5d" Feb 17 16:06:29 crc kubenswrapper[4672]: I0217 16:06:29.718984 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2mj5d" Feb 17 16:06:29 crc kubenswrapper[4672]: I0217 16:06:29.720160 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wvksq" Feb 17 16:06:29 crc kubenswrapper[4672]: I0217 16:06:29.743469 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d4qrd" Feb 17 16:06:29 crc 
kubenswrapper[4672]: I0217 16:06:29.743859 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d4qrd" Feb 17 16:06:29 crc kubenswrapper[4672]: I0217 16:06:29.781364 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d4qrd" Feb 17 16:06:30 crc kubenswrapper[4672]: I0217 16:06:30.256369 4672 generic.go:334] "Generic (PLEG): container finished" podID="a14c1588-0007-41ea-b334-f2bc0b2a5587" containerID="704cfbb758c45783862959f5019977841907cfe4e407e0f92e2157824250218e" exitCode=0 Feb 17 16:06:30 crc kubenswrapper[4672]: I0217 16:06:30.256558 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-skvcq" event={"ID":"a14c1588-0007-41ea-b334-f2bc0b2a5587","Type":"ContainerDied","Data":"704cfbb758c45783862959f5019977841907cfe4e407e0f92e2157824250218e"} Feb 17 16:06:30 crc kubenswrapper[4672]: I0217 16:06:30.311595 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wvksq" Feb 17 16:06:30 crc kubenswrapper[4672]: I0217 16:06:30.324067 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d4qrd" Feb 17 16:06:30 crc kubenswrapper[4672]: I0217 16:06:30.735891 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 17 16:06:30 crc kubenswrapper[4672]: E0217 16:06:30.736337 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8318140f-d0de-4b99-b450-cc271fed4d84" containerName="pruner" Feb 17 16:06:30 crc kubenswrapper[4672]: I0217 16:06:30.736378 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="8318140f-d0de-4b99-b450-cc271fed4d84" containerName="pruner" Feb 17 16:06:30 crc kubenswrapper[4672]: I0217 16:06:30.736668 4672 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8318140f-d0de-4b99-b450-cc271fed4d84" containerName="pruner" Feb 17 16:06:30 crc kubenswrapper[4672]: I0217 16:06:30.738256 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 17 16:06:30 crc kubenswrapper[4672]: I0217 16:06:30.744756 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 17 16:06:30 crc kubenswrapper[4672]: I0217 16:06:30.745844 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 17 16:06:30 crc kubenswrapper[4672]: I0217 16:06:30.746481 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 17 16:06:30 crc kubenswrapper[4672]: I0217 16:06:30.930160 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/780d94df-fc74-4af2-9e51-eea226989b67-kubelet-dir\") pod \"installer-9-crc\" (UID: \"780d94df-fc74-4af2-9e51-eea226989b67\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 16:06:30 crc kubenswrapper[4672]: I0217 16:06:30.930225 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/780d94df-fc74-4af2-9e51-eea226989b67-var-lock\") pod \"installer-9-crc\" (UID: \"780d94df-fc74-4af2-9e51-eea226989b67\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 16:06:30 crc kubenswrapper[4672]: I0217 16:06:30.930285 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/780d94df-fc74-4af2-9e51-eea226989b67-kube-api-access\") pod \"installer-9-crc\" (UID: \"780d94df-fc74-4af2-9e51-eea226989b67\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 16:06:31 crc 
kubenswrapper[4672]: I0217 16:06:31.031688 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/780d94df-fc74-4af2-9e51-eea226989b67-kubelet-dir\") pod \"installer-9-crc\" (UID: \"780d94df-fc74-4af2-9e51-eea226989b67\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 16:06:31 crc kubenswrapper[4672]: I0217 16:06:31.031755 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/780d94df-fc74-4af2-9e51-eea226989b67-var-lock\") pod \"installer-9-crc\" (UID: \"780d94df-fc74-4af2-9e51-eea226989b67\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 16:06:31 crc kubenswrapper[4672]: I0217 16:06:31.031810 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/780d94df-fc74-4af2-9e51-eea226989b67-kube-api-access\") pod \"installer-9-crc\" (UID: \"780d94df-fc74-4af2-9e51-eea226989b67\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 16:06:31 crc kubenswrapper[4672]: I0217 16:06:31.031834 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/780d94df-fc74-4af2-9e51-eea226989b67-kubelet-dir\") pod \"installer-9-crc\" (UID: \"780d94df-fc74-4af2-9e51-eea226989b67\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 16:06:31 crc kubenswrapper[4672]: I0217 16:06:31.031941 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/780d94df-fc74-4af2-9e51-eea226989b67-var-lock\") pod \"installer-9-crc\" (UID: \"780d94df-fc74-4af2-9e51-eea226989b67\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 16:06:31 crc kubenswrapper[4672]: I0217 16:06:31.067984 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/780d94df-fc74-4af2-9e51-eea226989b67-kube-api-access\") pod \"installer-9-crc\" (UID: \"780d94df-fc74-4af2-9e51-eea226989b67\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 16:06:31 crc kubenswrapper[4672]: I0217 16:06:31.076854 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2mj5d"] Feb 17 16:06:31 crc kubenswrapper[4672]: I0217 16:06:31.085248 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 17 16:06:31 crc kubenswrapper[4672]: I0217 16:06:31.302062 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-skvcq" event={"ID":"a14c1588-0007-41ea-b334-f2bc0b2a5587","Type":"ContainerStarted","Data":"7ce58878e34ac656f80e31f15082e8609e2e303b124ec65d388209af45ef87e4"} Feb 17 16:06:31 crc kubenswrapper[4672]: I0217 16:06:31.302798 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2mj5d" podUID="2217a413-541b-46bc-9563-b382fb9f090d" containerName="registry-server" containerID="cri-o://1b1aedafd28439d4179113e7fd8db740f8916aa7ff207e8430cfd8bb650811e8" gracePeriod=2 Feb 17 16:06:31 crc kubenswrapper[4672]: I0217 16:06:31.311055 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l98cc" Feb 17 16:06:31 crc kubenswrapper[4672]: I0217 16:06:31.323348 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-skvcq" podStartSLOduration=2.101121957 podStartE2EDuration="50.32332994s" podCreationTimestamp="2026-02-17 16:05:41 +0000 UTC" firstStartedPulling="2026-02-17 16:05:42.482613385 +0000 UTC m=+151.236702117" lastFinishedPulling="2026-02-17 16:06:30.704821328 +0000 UTC m=+199.458910100" observedRunningTime="2026-02-17 16:06:31.321541947 +0000 UTC m=+200.075630679" 
watchObservedRunningTime="2026-02-17 16:06:31.32332994 +0000 UTC m=+200.077418672" Feb 17 16:06:31 crc kubenswrapper[4672]: I0217 16:06:31.363456 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l98cc" Feb 17 16:06:31 crc kubenswrapper[4672]: I0217 16:06:31.504977 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-skvcq" Feb 17 16:06:31 crc kubenswrapper[4672]: I0217 16:06:31.505039 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-skvcq" Feb 17 16:06:31 crc kubenswrapper[4672]: I0217 16:06:31.621837 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 17 16:06:31 crc kubenswrapper[4672]: W0217 16:06:31.626382 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod780d94df_fc74_4af2_9e51_eea226989b67.slice/crio-b9108b5b86df5573457ab3e0da8f8d66312212dea8226306862c05fa3e443302 WatchSource:0}: Error finding container b9108b5b86df5573457ab3e0da8f8d66312212dea8226306862c05fa3e443302: Status 404 returned error can't find the container with id b9108b5b86df5573457ab3e0da8f8d66312212dea8226306862c05fa3e443302 Feb 17 16:06:31 crc kubenswrapper[4672]: I0217 16:06:31.713337 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2mj5d" Feb 17 16:06:31 crc kubenswrapper[4672]: I0217 16:06:31.842499 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm2pf\" (UniqueName: \"kubernetes.io/projected/2217a413-541b-46bc-9563-b382fb9f090d-kube-api-access-zm2pf\") pod \"2217a413-541b-46bc-9563-b382fb9f090d\" (UID: \"2217a413-541b-46bc-9563-b382fb9f090d\") " Feb 17 16:06:31 crc kubenswrapper[4672]: I0217 16:06:31.842883 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2217a413-541b-46bc-9563-b382fb9f090d-utilities\") pod \"2217a413-541b-46bc-9563-b382fb9f090d\" (UID: \"2217a413-541b-46bc-9563-b382fb9f090d\") " Feb 17 16:06:31 crc kubenswrapper[4672]: I0217 16:06:31.842932 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2217a413-541b-46bc-9563-b382fb9f090d-catalog-content\") pod \"2217a413-541b-46bc-9563-b382fb9f090d\" (UID: \"2217a413-541b-46bc-9563-b382fb9f090d\") " Feb 17 16:06:31 crc kubenswrapper[4672]: I0217 16:06:31.844165 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2217a413-541b-46bc-9563-b382fb9f090d-utilities" (OuterVolumeSpecName: "utilities") pod "2217a413-541b-46bc-9563-b382fb9f090d" (UID: "2217a413-541b-46bc-9563-b382fb9f090d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:06:31 crc kubenswrapper[4672]: I0217 16:06:31.847860 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2217a413-541b-46bc-9563-b382fb9f090d-kube-api-access-zm2pf" (OuterVolumeSpecName: "kube-api-access-zm2pf") pod "2217a413-541b-46bc-9563-b382fb9f090d" (UID: "2217a413-541b-46bc-9563-b382fb9f090d"). InnerVolumeSpecName "kube-api-access-zm2pf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:06:31 crc kubenswrapper[4672]: I0217 16:06:31.901246 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2217a413-541b-46bc-9563-b382fb9f090d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2217a413-541b-46bc-9563-b382fb9f090d" (UID: "2217a413-541b-46bc-9563-b382fb9f090d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:06:31 crc kubenswrapper[4672]: I0217 16:06:31.944378 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2217a413-541b-46bc-9563-b382fb9f090d-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:31 crc kubenswrapper[4672]: I0217 16:06:31.944424 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2217a413-541b-46bc-9563-b382fb9f090d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:31 crc kubenswrapper[4672]: I0217 16:06:31.944437 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm2pf\" (UniqueName: \"kubernetes.io/projected/2217a413-541b-46bc-9563-b382fb9f090d-kube-api-access-zm2pf\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:32 crc kubenswrapper[4672]: I0217 16:06:32.075661 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d4qrd"] Feb 17 16:06:32 crc kubenswrapper[4672]: I0217 16:06:32.308107 4672 generic.go:334] "Generic (PLEG): container finished" podID="2217a413-541b-46bc-9563-b382fb9f090d" containerID="1b1aedafd28439d4179113e7fd8db740f8916aa7ff207e8430cfd8bb650811e8" exitCode=0 Feb 17 16:06:32 crc kubenswrapper[4672]: I0217 16:06:32.308162 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2mj5d" 
event={"ID":"2217a413-541b-46bc-9563-b382fb9f090d","Type":"ContainerDied","Data":"1b1aedafd28439d4179113e7fd8db740f8916aa7ff207e8430cfd8bb650811e8"} Feb 17 16:06:32 crc kubenswrapper[4672]: I0217 16:06:32.308187 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2mj5d" event={"ID":"2217a413-541b-46bc-9563-b382fb9f090d","Type":"ContainerDied","Data":"64b01acef81d24af7e432554023b6f6d34aec97a1210dfac755957445239e650"} Feb 17 16:06:32 crc kubenswrapper[4672]: I0217 16:06:32.308201 4672 scope.go:117] "RemoveContainer" containerID="1b1aedafd28439d4179113e7fd8db740f8916aa7ff207e8430cfd8bb650811e8" Feb 17 16:06:32 crc kubenswrapper[4672]: I0217 16:06:32.308287 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2mj5d" Feb 17 16:06:32 crc kubenswrapper[4672]: I0217 16:06:32.312543 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"780d94df-fc74-4af2-9e51-eea226989b67","Type":"ContainerStarted","Data":"cded28b4fbd62f51c69307964652daa8a49b1200374c2a406fd64143d1ce57b1"} Feb 17 16:06:32 crc kubenswrapper[4672]: I0217 16:06:32.312582 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"780d94df-fc74-4af2-9e51-eea226989b67","Type":"ContainerStarted","Data":"b9108b5b86df5573457ab3e0da8f8d66312212dea8226306862c05fa3e443302"} Feb 17 16:06:32 crc kubenswrapper[4672]: I0217 16:06:32.326832 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2mj5d"] Feb 17 16:06:32 crc kubenswrapper[4672]: I0217 16:06:32.328442 4672 scope.go:117] "RemoveContainer" containerID="04635c2703cb08cadff9f8cfb718faca87011b5d8dbe7a1e1aa3adbe3d99235b" Feb 17 16:06:32 crc kubenswrapper[4672]: I0217 16:06:32.330374 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/community-operators-2mj5d"] Feb 17 16:06:32 crc kubenswrapper[4672]: I0217 16:06:32.358129 4672 scope.go:117] "RemoveContainer" containerID="1d4166387646f9d0f5420811fdee06b68ba62ba08f69be9654edf224da14c7f7" Feb 17 16:06:32 crc kubenswrapper[4672]: I0217 16:06:32.371389 4672 scope.go:117] "RemoveContainer" containerID="1b1aedafd28439d4179113e7fd8db740f8916aa7ff207e8430cfd8bb650811e8" Feb 17 16:06:32 crc kubenswrapper[4672]: E0217 16:06:32.371903 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b1aedafd28439d4179113e7fd8db740f8916aa7ff207e8430cfd8bb650811e8\": container with ID starting with 1b1aedafd28439d4179113e7fd8db740f8916aa7ff207e8430cfd8bb650811e8 not found: ID does not exist" containerID="1b1aedafd28439d4179113e7fd8db740f8916aa7ff207e8430cfd8bb650811e8" Feb 17 16:06:32 crc kubenswrapper[4672]: I0217 16:06:32.371942 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b1aedafd28439d4179113e7fd8db740f8916aa7ff207e8430cfd8bb650811e8"} err="failed to get container status \"1b1aedafd28439d4179113e7fd8db740f8916aa7ff207e8430cfd8bb650811e8\": rpc error: code = NotFound desc = could not find container \"1b1aedafd28439d4179113e7fd8db740f8916aa7ff207e8430cfd8bb650811e8\": container with ID starting with 1b1aedafd28439d4179113e7fd8db740f8916aa7ff207e8430cfd8bb650811e8 not found: ID does not exist" Feb 17 16:06:32 crc kubenswrapper[4672]: I0217 16:06:32.371993 4672 scope.go:117] "RemoveContainer" containerID="04635c2703cb08cadff9f8cfb718faca87011b5d8dbe7a1e1aa3adbe3d99235b" Feb 17 16:06:32 crc kubenswrapper[4672]: E0217 16:06:32.372269 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04635c2703cb08cadff9f8cfb718faca87011b5d8dbe7a1e1aa3adbe3d99235b\": container with ID starting with 
04635c2703cb08cadff9f8cfb718faca87011b5d8dbe7a1e1aa3adbe3d99235b not found: ID does not exist" containerID="04635c2703cb08cadff9f8cfb718faca87011b5d8dbe7a1e1aa3adbe3d99235b" Feb 17 16:06:32 crc kubenswrapper[4672]: I0217 16:06:32.372303 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04635c2703cb08cadff9f8cfb718faca87011b5d8dbe7a1e1aa3adbe3d99235b"} err="failed to get container status \"04635c2703cb08cadff9f8cfb718faca87011b5d8dbe7a1e1aa3adbe3d99235b\": rpc error: code = NotFound desc = could not find container \"04635c2703cb08cadff9f8cfb718faca87011b5d8dbe7a1e1aa3adbe3d99235b\": container with ID starting with 04635c2703cb08cadff9f8cfb718faca87011b5d8dbe7a1e1aa3adbe3d99235b not found: ID does not exist" Feb 17 16:06:32 crc kubenswrapper[4672]: I0217 16:06:32.372345 4672 scope.go:117] "RemoveContainer" containerID="1d4166387646f9d0f5420811fdee06b68ba62ba08f69be9654edf224da14c7f7" Feb 17 16:06:32 crc kubenswrapper[4672]: E0217 16:06:32.372740 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d4166387646f9d0f5420811fdee06b68ba62ba08f69be9654edf224da14c7f7\": container with ID starting with 1d4166387646f9d0f5420811fdee06b68ba62ba08f69be9654edf224da14c7f7 not found: ID does not exist" containerID="1d4166387646f9d0f5420811fdee06b68ba62ba08f69be9654edf224da14c7f7" Feb 17 16:06:32 crc kubenswrapper[4672]: I0217 16:06:32.372763 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d4166387646f9d0f5420811fdee06b68ba62ba08f69be9654edf224da14c7f7"} err="failed to get container status \"1d4166387646f9d0f5420811fdee06b68ba62ba08f69be9654edf224da14c7f7\": rpc error: code = NotFound desc = could not find container \"1d4166387646f9d0f5420811fdee06b68ba62ba08f69be9654edf224da14c7f7\": container with ID starting with 1d4166387646f9d0f5420811fdee06b68ba62ba08f69be9654edf224da14c7f7 not found: ID does not 
exist" Feb 17 16:06:32 crc kubenswrapper[4672]: I0217 16:06:32.377047 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nd8vd" Feb 17 16:06:32 crc kubenswrapper[4672]: I0217 16:06:32.420120 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.420100319 podStartE2EDuration="2.420100319s" podCreationTimestamp="2026-02-17 16:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:06:32.348401677 +0000 UTC m=+201.102490419" watchObservedRunningTime="2026-02-17 16:06:32.420100319 +0000 UTC m=+201.174189051" Feb 17 16:06:32 crc kubenswrapper[4672]: I0217 16:06:32.439415 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nd8vd" Feb 17 16:06:32 crc kubenswrapper[4672]: I0217 16:06:32.544290 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-skvcq" podUID="a14c1588-0007-41ea-b334-f2bc0b2a5587" containerName="registry-server" probeResult="failure" output=< Feb 17 16:06:32 crc kubenswrapper[4672]: timeout: failed to connect service ":50051" within 1s Feb 17 16:06:32 crc kubenswrapper[4672]: > Feb 17 16:06:32 crc kubenswrapper[4672]: I0217 16:06:32.758980 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w2tc4" Feb 17 16:06:32 crc kubenswrapper[4672]: I0217 16:06:32.808092 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-w2tc4" Feb 17 16:06:33 crc kubenswrapper[4672]: I0217 16:06:33.323404 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d4qrd" podUID="fd92fc97-4e60-481b-8d9f-91642c614e48" 
containerName="registry-server" containerID="cri-o://ccbe95bd6e9d3d7bb93704f38f18b3ad5553ef92dce4c48cf8c355eb2ace721c" gracePeriod=2 Feb 17 16:06:33 crc kubenswrapper[4672]: I0217 16:06:33.798634 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d4qrd" Feb 17 16:06:33 crc kubenswrapper[4672]: I0217 16:06:33.953243 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2217a413-541b-46bc-9563-b382fb9f090d" path="/var/lib/kubelet/pods/2217a413-541b-46bc-9563-b382fb9f090d/volumes" Feb 17 16:06:33 crc kubenswrapper[4672]: I0217 16:06:33.970980 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd92fc97-4e60-481b-8d9f-91642c614e48-catalog-content\") pod \"fd92fc97-4e60-481b-8d9f-91642c614e48\" (UID: \"fd92fc97-4e60-481b-8d9f-91642c614e48\") " Feb 17 16:06:33 crc kubenswrapper[4672]: I0217 16:06:33.971105 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94q6r\" (UniqueName: \"kubernetes.io/projected/fd92fc97-4e60-481b-8d9f-91642c614e48-kube-api-access-94q6r\") pod \"fd92fc97-4e60-481b-8d9f-91642c614e48\" (UID: \"fd92fc97-4e60-481b-8d9f-91642c614e48\") " Feb 17 16:06:33 crc kubenswrapper[4672]: I0217 16:06:33.971161 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd92fc97-4e60-481b-8d9f-91642c614e48-utilities\") pod \"fd92fc97-4e60-481b-8d9f-91642c614e48\" (UID: \"fd92fc97-4e60-481b-8d9f-91642c614e48\") " Feb 17 16:06:33 crc kubenswrapper[4672]: I0217 16:06:33.973179 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd92fc97-4e60-481b-8d9f-91642c614e48-utilities" (OuterVolumeSpecName: "utilities") pod "fd92fc97-4e60-481b-8d9f-91642c614e48" (UID: "fd92fc97-4e60-481b-8d9f-91642c614e48"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:06:33 crc kubenswrapper[4672]: I0217 16:06:33.976473 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd92fc97-4e60-481b-8d9f-91642c614e48-kube-api-access-94q6r" (OuterVolumeSpecName: "kube-api-access-94q6r") pod "fd92fc97-4e60-481b-8d9f-91642c614e48" (UID: "fd92fc97-4e60-481b-8d9f-91642c614e48"). InnerVolumeSpecName "kube-api-access-94q6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:06:34 crc kubenswrapper[4672]: I0217 16:06:34.055046 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd92fc97-4e60-481b-8d9f-91642c614e48-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd92fc97-4e60-481b-8d9f-91642c614e48" (UID: "fd92fc97-4e60-481b-8d9f-91642c614e48"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:06:34 crc kubenswrapper[4672]: I0217 16:06:34.073187 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94q6r\" (UniqueName: \"kubernetes.io/projected/fd92fc97-4e60-481b-8d9f-91642c614e48-kube-api-access-94q6r\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:34 crc kubenswrapper[4672]: I0217 16:06:34.073234 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd92fc97-4e60-481b-8d9f-91642c614e48-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:34 crc kubenswrapper[4672]: I0217 16:06:34.073252 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd92fc97-4e60-481b-8d9f-91642c614e48-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:34 crc kubenswrapper[4672]: I0217 16:06:34.340534 4672 generic.go:334] "Generic (PLEG): container finished" podID="fd92fc97-4e60-481b-8d9f-91642c614e48" 
containerID="ccbe95bd6e9d3d7bb93704f38f18b3ad5553ef92dce4c48cf8c355eb2ace721c" exitCode=0 Feb 17 16:06:34 crc kubenswrapper[4672]: I0217 16:06:34.340601 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4qrd" event={"ID":"fd92fc97-4e60-481b-8d9f-91642c614e48","Type":"ContainerDied","Data":"ccbe95bd6e9d3d7bb93704f38f18b3ad5553ef92dce4c48cf8c355eb2ace721c"} Feb 17 16:06:34 crc kubenswrapper[4672]: I0217 16:06:34.340644 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4qrd" event={"ID":"fd92fc97-4e60-481b-8d9f-91642c614e48","Type":"ContainerDied","Data":"75b516eb1355d4dbab7495cd227252df9c45fc1de1a3bb6891c47e0456f39baa"} Feb 17 16:06:34 crc kubenswrapper[4672]: I0217 16:06:34.340675 4672 scope.go:117] "RemoveContainer" containerID="ccbe95bd6e9d3d7bb93704f38f18b3ad5553ef92dce4c48cf8c355eb2ace721c" Feb 17 16:06:34 crc kubenswrapper[4672]: I0217 16:06:34.341443 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d4qrd" Feb 17 16:06:34 crc kubenswrapper[4672]: I0217 16:06:34.376541 4672 scope.go:117] "RemoveContainer" containerID="ba68f0e8d7c2df7d9b63b1ea1f28812836887fec794051d3d309c25e1aa3177a" Feb 17 16:06:34 crc kubenswrapper[4672]: I0217 16:06:34.378168 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d4qrd"] Feb 17 16:06:34 crc kubenswrapper[4672]: I0217 16:06:34.385532 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d4qrd"] Feb 17 16:06:34 crc kubenswrapper[4672]: I0217 16:06:34.395771 4672 scope.go:117] "RemoveContainer" containerID="f06cfd169f60f3b2f78b261423cfd215914b3f1ab4870bcfe93428bd995a4804" Feb 17 16:06:34 crc kubenswrapper[4672]: I0217 16:06:34.424463 4672 scope.go:117] "RemoveContainer" containerID="ccbe95bd6e9d3d7bb93704f38f18b3ad5553ef92dce4c48cf8c355eb2ace721c" Feb 17 16:06:34 crc kubenswrapper[4672]: E0217 16:06:34.425022 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccbe95bd6e9d3d7bb93704f38f18b3ad5553ef92dce4c48cf8c355eb2ace721c\": container with ID starting with ccbe95bd6e9d3d7bb93704f38f18b3ad5553ef92dce4c48cf8c355eb2ace721c not found: ID does not exist" containerID="ccbe95bd6e9d3d7bb93704f38f18b3ad5553ef92dce4c48cf8c355eb2ace721c" Feb 17 16:06:34 crc kubenswrapper[4672]: I0217 16:06:34.425061 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccbe95bd6e9d3d7bb93704f38f18b3ad5553ef92dce4c48cf8c355eb2ace721c"} err="failed to get container status \"ccbe95bd6e9d3d7bb93704f38f18b3ad5553ef92dce4c48cf8c355eb2ace721c\": rpc error: code = NotFound desc = could not find container \"ccbe95bd6e9d3d7bb93704f38f18b3ad5553ef92dce4c48cf8c355eb2ace721c\": container with ID starting with ccbe95bd6e9d3d7bb93704f38f18b3ad5553ef92dce4c48cf8c355eb2ace721c not 
found: ID does not exist" Feb 17 16:06:34 crc kubenswrapper[4672]: I0217 16:06:34.425088 4672 scope.go:117] "RemoveContainer" containerID="ba68f0e8d7c2df7d9b63b1ea1f28812836887fec794051d3d309c25e1aa3177a" Feb 17 16:06:34 crc kubenswrapper[4672]: E0217 16:06:34.425391 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba68f0e8d7c2df7d9b63b1ea1f28812836887fec794051d3d309c25e1aa3177a\": container with ID starting with ba68f0e8d7c2df7d9b63b1ea1f28812836887fec794051d3d309c25e1aa3177a not found: ID does not exist" containerID="ba68f0e8d7c2df7d9b63b1ea1f28812836887fec794051d3d309c25e1aa3177a" Feb 17 16:06:34 crc kubenswrapper[4672]: I0217 16:06:34.425439 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba68f0e8d7c2df7d9b63b1ea1f28812836887fec794051d3d309c25e1aa3177a"} err="failed to get container status \"ba68f0e8d7c2df7d9b63b1ea1f28812836887fec794051d3d309c25e1aa3177a\": rpc error: code = NotFound desc = could not find container \"ba68f0e8d7c2df7d9b63b1ea1f28812836887fec794051d3d309c25e1aa3177a\": container with ID starting with ba68f0e8d7c2df7d9b63b1ea1f28812836887fec794051d3d309c25e1aa3177a not found: ID does not exist" Feb 17 16:06:34 crc kubenswrapper[4672]: I0217 16:06:34.425466 4672 scope.go:117] "RemoveContainer" containerID="f06cfd169f60f3b2f78b261423cfd215914b3f1ab4870bcfe93428bd995a4804" Feb 17 16:06:34 crc kubenswrapper[4672]: E0217 16:06:34.425896 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f06cfd169f60f3b2f78b261423cfd215914b3f1ab4870bcfe93428bd995a4804\": container with ID starting with f06cfd169f60f3b2f78b261423cfd215914b3f1ab4870bcfe93428bd995a4804 not found: ID does not exist" containerID="f06cfd169f60f3b2f78b261423cfd215914b3f1ab4870bcfe93428bd995a4804" Feb 17 16:06:34 crc kubenswrapper[4672]: I0217 16:06:34.425929 4672 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f06cfd169f60f3b2f78b261423cfd215914b3f1ab4870bcfe93428bd995a4804"} err="failed to get container status \"f06cfd169f60f3b2f78b261423cfd215914b3f1ab4870bcfe93428bd995a4804\": rpc error: code = NotFound desc = could not find container \"f06cfd169f60f3b2f78b261423cfd215914b3f1ab4870bcfe93428bd995a4804\": container with ID starting with f06cfd169f60f3b2f78b261423cfd215914b3f1ab4870bcfe93428bd995a4804 not found: ID does not exist" Feb 17 16:06:35 crc kubenswrapper[4672]: I0217 16:06:35.954758 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd92fc97-4e60-481b-8d9f-91642c614e48" path="/var/lib/kubelet/pods/fd92fc97-4e60-481b-8d9f-91642c614e48/volumes" Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.183354 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6dd64d99c8-bp27p"] Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.184350 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6dd64d99c8-bp27p" podUID="f6b539d5-5bc7-4c27-96ed-4aed91e5d774" containerName="controller-manager" containerID="cri-o://fcfd8e29ecfe03dcfa3c4f2cc8e0226bf63fe6553d40e0544a2af8035794375a" gracePeriod=30 Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.206620 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b47dbb559-gtn5n"] Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.206857 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-b47dbb559-gtn5n" podUID="941d737b-74a1-4c73-9b3f-9cd0f37cc501" containerName="route-controller-manager" containerID="cri-o://09fb292de176ad3c8335962c412d17681a73e61f5d06dca2bf2a06c76e6f8d37" gracePeriod=30 Feb 17 16:06:36 crc 
kubenswrapper[4672]: I0217 16:06:36.365006 4672 generic.go:334] "Generic (PLEG): container finished" podID="f6b539d5-5bc7-4c27-96ed-4aed91e5d774" containerID="fcfd8e29ecfe03dcfa3c4f2cc8e0226bf63fe6553d40e0544a2af8035794375a" exitCode=0 Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.365074 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dd64d99c8-bp27p" event={"ID":"f6b539d5-5bc7-4c27-96ed-4aed91e5d774","Type":"ContainerDied","Data":"fcfd8e29ecfe03dcfa3c4f2cc8e0226bf63fe6553d40e0544a2af8035794375a"} Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.365952 4672 generic.go:334] "Generic (PLEG): container finished" podID="941d737b-74a1-4c73-9b3f-9cd0f37cc501" containerID="09fb292de176ad3c8335962c412d17681a73e61f5d06dca2bf2a06c76e6f8d37" exitCode=0 Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.365982 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b47dbb559-gtn5n" event={"ID":"941d737b-74a1-4c73-9b3f-9cd0f37cc501","Type":"ContainerDied","Data":"09fb292de176ad3c8335962c412d17681a73e61f5d06dca2bf2a06c76e6f8d37"} Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.471456 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w2tc4"] Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.471701 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-w2tc4" podUID="f07446c8-4550-461c-a53d-c1d4bd056cfd" containerName="registry-server" containerID="cri-o://adf9ab48c7aac5a45eae3893686dd50b38eddb95f81ccf50d8ba0047ef736cae" gracePeriod=2 Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.765166 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b47dbb559-gtn5n" Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.835588 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/941d737b-74a1-4c73-9b3f-9cd0f37cc501-config\") pod \"941d737b-74a1-4c73-9b3f-9cd0f37cc501\" (UID: \"941d737b-74a1-4c73-9b3f-9cd0f37cc501\") " Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.835751 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggs9n\" (UniqueName: \"kubernetes.io/projected/941d737b-74a1-4c73-9b3f-9cd0f37cc501-kube-api-access-ggs9n\") pod \"941d737b-74a1-4c73-9b3f-9cd0f37cc501\" (UID: \"941d737b-74a1-4c73-9b3f-9cd0f37cc501\") " Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.835789 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/941d737b-74a1-4c73-9b3f-9cd0f37cc501-client-ca\") pod \"941d737b-74a1-4c73-9b3f-9cd0f37cc501\" (UID: \"941d737b-74a1-4c73-9b3f-9cd0f37cc501\") " Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.835829 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/941d737b-74a1-4c73-9b3f-9cd0f37cc501-serving-cert\") pod \"941d737b-74a1-4c73-9b3f-9cd0f37cc501\" (UID: \"941d737b-74a1-4c73-9b3f-9cd0f37cc501\") " Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.836612 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/941d737b-74a1-4c73-9b3f-9cd0f37cc501-config" (OuterVolumeSpecName: "config") pod "941d737b-74a1-4c73-9b3f-9cd0f37cc501" (UID: "941d737b-74a1-4c73-9b3f-9cd0f37cc501"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.837144 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/941d737b-74a1-4c73-9b3f-9cd0f37cc501-client-ca" (OuterVolumeSpecName: "client-ca") pod "941d737b-74a1-4c73-9b3f-9cd0f37cc501" (UID: "941d737b-74a1-4c73-9b3f-9cd0f37cc501"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.841319 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/941d737b-74a1-4c73-9b3f-9cd0f37cc501-kube-api-access-ggs9n" (OuterVolumeSpecName: "kube-api-access-ggs9n") pod "941d737b-74a1-4c73-9b3f-9cd0f37cc501" (UID: "941d737b-74a1-4c73-9b3f-9cd0f37cc501"). InnerVolumeSpecName "kube-api-access-ggs9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.843641 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/941d737b-74a1-4c73-9b3f-9cd0f37cc501-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "941d737b-74a1-4c73-9b3f-9cd0f37cc501" (UID: "941d737b-74a1-4c73-9b3f-9cd0f37cc501"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.851075 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6dd64d99c8-bp27p" Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.858637 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w2tc4" Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.936742 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6b539d5-5bc7-4c27-96ed-4aed91e5d774-serving-cert\") pod \"f6b539d5-5bc7-4c27-96ed-4aed91e5d774\" (UID: \"f6b539d5-5bc7-4c27-96ed-4aed91e5d774\") " Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.936811 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6b539d5-5bc7-4c27-96ed-4aed91e5d774-client-ca\") pod \"f6b539d5-5bc7-4c27-96ed-4aed91e5d774\" (UID: \"f6b539d5-5bc7-4c27-96ed-4aed91e5d774\") " Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.936844 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f07446c8-4550-461c-a53d-c1d4bd056cfd-utilities\") pod \"f07446c8-4550-461c-a53d-c1d4bd056cfd\" (UID: \"f07446c8-4550-461c-a53d-c1d4bd056cfd\") " Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.936881 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfjfs\" (UniqueName: \"kubernetes.io/projected/f07446c8-4550-461c-a53d-c1d4bd056cfd-kube-api-access-rfjfs\") pod \"f07446c8-4550-461c-a53d-c1d4bd056cfd\" (UID: \"f07446c8-4550-461c-a53d-c1d4bd056cfd\") " Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.936918 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6b539d5-5bc7-4c27-96ed-4aed91e5d774-config\") pod \"f6b539d5-5bc7-4c27-96ed-4aed91e5d774\" (UID: \"f6b539d5-5bc7-4c27-96ed-4aed91e5d774\") " Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.937012 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6vpp\" 
(UniqueName: \"kubernetes.io/projected/f6b539d5-5bc7-4c27-96ed-4aed91e5d774-kube-api-access-z6vpp\") pod \"f6b539d5-5bc7-4c27-96ed-4aed91e5d774\" (UID: \"f6b539d5-5bc7-4c27-96ed-4aed91e5d774\") " Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.937034 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f07446c8-4550-461c-a53d-c1d4bd056cfd-catalog-content\") pod \"f07446c8-4550-461c-a53d-c1d4bd056cfd\" (UID: \"f07446c8-4550-461c-a53d-c1d4bd056cfd\") " Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.937060 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6b539d5-5bc7-4c27-96ed-4aed91e5d774-proxy-ca-bundles\") pod \"f6b539d5-5bc7-4c27-96ed-4aed91e5d774\" (UID: \"f6b539d5-5bc7-4c27-96ed-4aed91e5d774\") " Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.937368 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6b539d5-5bc7-4c27-96ed-4aed91e5d774-client-ca" (OuterVolumeSpecName: "client-ca") pod "f6b539d5-5bc7-4c27-96ed-4aed91e5d774" (UID: "f6b539d5-5bc7-4c27-96ed-4aed91e5d774"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.937847 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6b539d5-5bc7-4c27-96ed-4aed91e5d774-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f6b539d5-5bc7-4c27-96ed-4aed91e5d774" (UID: "f6b539d5-5bc7-4c27-96ed-4aed91e5d774"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.937959 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6b539d5-5bc7-4c27-96ed-4aed91e5d774-config" (OuterVolumeSpecName: "config") pod "f6b539d5-5bc7-4c27-96ed-4aed91e5d774" (UID: "f6b539d5-5bc7-4c27-96ed-4aed91e5d774"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.938704 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f07446c8-4550-461c-a53d-c1d4bd056cfd-utilities" (OuterVolumeSpecName: "utilities") pod "f07446c8-4550-461c-a53d-c1d4bd056cfd" (UID: "f07446c8-4550-461c-a53d-c1d4bd056cfd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.938922 4672 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6b539d5-5bc7-4c27-96ed-4aed91e5d774-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.938955 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f07446c8-4550-461c-a53d-c1d4bd056cfd-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.938974 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6b539d5-5bc7-4c27-96ed-4aed91e5d774-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.938991 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggs9n\" (UniqueName: \"kubernetes.io/projected/941d737b-74a1-4c73-9b3f-9cd0f37cc501-kube-api-access-ggs9n\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 
16:06:36.939009 4672 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/941d737b-74a1-4c73-9b3f-9cd0f37cc501-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.939024 4672 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/941d737b-74a1-4c73-9b3f-9cd0f37cc501-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.939039 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/941d737b-74a1-4c73-9b3f-9cd0f37cc501-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.939052 4672 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6b539d5-5bc7-4c27-96ed-4aed91e5d774-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.939694 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6b539d5-5bc7-4c27-96ed-4aed91e5d774-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f6b539d5-5bc7-4c27-96ed-4aed91e5d774" (UID: "f6b539d5-5bc7-4c27-96ed-4aed91e5d774"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.939810 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6b539d5-5bc7-4c27-96ed-4aed91e5d774-kube-api-access-z6vpp" (OuterVolumeSpecName: "kube-api-access-z6vpp") pod "f6b539d5-5bc7-4c27-96ed-4aed91e5d774" (UID: "f6b539d5-5bc7-4c27-96ed-4aed91e5d774"). InnerVolumeSpecName "kube-api-access-z6vpp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:06:36 crc kubenswrapper[4672]: I0217 16:06:36.939917 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f07446c8-4550-461c-a53d-c1d4bd056cfd-kube-api-access-rfjfs" (OuterVolumeSpecName: "kube-api-access-rfjfs") pod "f07446c8-4550-461c-a53d-c1d4bd056cfd" (UID: "f07446c8-4550-461c-a53d-c1d4bd056cfd"). InnerVolumeSpecName "kube-api-access-rfjfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.039956 4672 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6b539d5-5bc7-4c27-96ed-4aed91e5d774-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.039992 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfjfs\" (UniqueName: \"kubernetes.io/projected/f07446c8-4550-461c-a53d-c1d4bd056cfd-kube-api-access-rfjfs\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.040004 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6vpp\" (UniqueName: \"kubernetes.io/projected/f6b539d5-5bc7-4c27-96ed-4aed91e5d774-kube-api-access-z6vpp\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.044193 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f07446c8-4550-461c-a53d-c1d4bd056cfd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f07446c8-4550-461c-a53d-c1d4bd056cfd" (UID: "f07446c8-4550-461c-a53d-c1d4bd056cfd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.141691 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f07446c8-4550-461c-a53d-c1d4bd056cfd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.240252 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-69fd6df768-dhgdt"] Feb 17 16:06:37 crc kubenswrapper[4672]: E0217 16:06:37.240719 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd92fc97-4e60-481b-8d9f-91642c614e48" containerName="extract-utilities" Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.240759 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd92fc97-4e60-481b-8d9f-91642c614e48" containerName="extract-utilities" Feb 17 16:06:37 crc kubenswrapper[4672]: E0217 16:06:37.240787 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2217a413-541b-46bc-9563-b382fb9f090d" containerName="extract-content" Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.240806 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="2217a413-541b-46bc-9563-b382fb9f090d" containerName="extract-content" Feb 17 16:06:37 crc kubenswrapper[4672]: E0217 16:06:37.240835 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f07446c8-4550-461c-a53d-c1d4bd056cfd" containerName="registry-server" Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.240852 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="f07446c8-4550-461c-a53d-c1d4bd056cfd" containerName="registry-server" Feb 17 16:06:37 crc kubenswrapper[4672]: E0217 16:06:37.240879 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6b539d5-5bc7-4c27-96ed-4aed91e5d774" containerName="controller-manager" Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.240895 4672 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f6b539d5-5bc7-4c27-96ed-4aed91e5d774" containerName="controller-manager" Feb 17 16:06:37 crc kubenswrapper[4672]: E0217 16:06:37.240921 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2217a413-541b-46bc-9563-b382fb9f090d" containerName="registry-server" Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.240973 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="2217a413-541b-46bc-9563-b382fb9f090d" containerName="registry-server" Feb 17 16:06:37 crc kubenswrapper[4672]: E0217 16:06:37.240998 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2217a413-541b-46bc-9563-b382fb9f090d" containerName="extract-utilities" Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.241016 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="2217a413-541b-46bc-9563-b382fb9f090d" containerName="extract-utilities" Feb 17 16:06:37 crc kubenswrapper[4672]: E0217 16:06:37.241042 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f07446c8-4550-461c-a53d-c1d4bd056cfd" containerName="extract-content" Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.241058 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="f07446c8-4550-461c-a53d-c1d4bd056cfd" containerName="extract-content" Feb 17 16:06:37 crc kubenswrapper[4672]: E0217 16:06:37.241086 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f07446c8-4550-461c-a53d-c1d4bd056cfd" containerName="extract-utilities" Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.241102 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="f07446c8-4550-461c-a53d-c1d4bd056cfd" containerName="extract-utilities" Feb 17 16:06:37 crc kubenswrapper[4672]: E0217 16:06:37.241124 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="941d737b-74a1-4c73-9b3f-9cd0f37cc501" containerName="route-controller-manager" Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.241140 
4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="941d737b-74a1-4c73-9b3f-9cd0f37cc501" containerName="route-controller-manager" Feb 17 16:06:37 crc kubenswrapper[4672]: E0217 16:06:37.241165 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd92fc97-4e60-481b-8d9f-91642c614e48" containerName="extract-content" Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.241181 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd92fc97-4e60-481b-8d9f-91642c614e48" containerName="extract-content" Feb 17 16:06:37 crc kubenswrapper[4672]: E0217 16:06:37.241199 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd92fc97-4e60-481b-8d9f-91642c614e48" containerName="registry-server" Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.241217 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd92fc97-4e60-481b-8d9f-91642c614e48" containerName="registry-server" Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.241459 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="2217a413-541b-46bc-9563-b382fb9f090d" containerName="registry-server" Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.241489 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="941d737b-74a1-4c73-9b3f-9cd0f37cc501" containerName="route-controller-manager" Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.241549 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6b539d5-5bc7-4c27-96ed-4aed91e5d774" containerName="controller-manager" Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.241570 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd92fc97-4e60-481b-8d9f-91642c614e48" containerName="registry-server" Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.241595 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="f07446c8-4550-461c-a53d-c1d4bd056cfd" containerName="registry-server" Feb 17 16:06:37 crc kubenswrapper[4672]: 
I0217 16:06:37.242299 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69fd6df768-dhgdt" Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.250322 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69fd6df768-dhgdt"] Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.345157 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/751224af-7f09-421f-9337-8e187bb9abe9-serving-cert\") pod \"controller-manager-69fd6df768-dhgdt\" (UID: \"751224af-7f09-421f-9337-8e187bb9abe9\") " pod="openshift-controller-manager/controller-manager-69fd6df768-dhgdt" Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.345235 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/751224af-7f09-421f-9337-8e187bb9abe9-config\") pod \"controller-manager-69fd6df768-dhgdt\" (UID: \"751224af-7f09-421f-9337-8e187bb9abe9\") " pod="openshift-controller-manager/controller-manager-69fd6df768-dhgdt" Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.345263 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjhzk\" (UniqueName: \"kubernetes.io/projected/751224af-7f09-421f-9337-8e187bb9abe9-kube-api-access-qjhzk\") pod \"controller-manager-69fd6df768-dhgdt\" (UID: \"751224af-7f09-421f-9337-8e187bb9abe9\") " pod="openshift-controller-manager/controller-manager-69fd6df768-dhgdt" Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.345287 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/751224af-7f09-421f-9337-8e187bb9abe9-proxy-ca-bundles\") pod \"controller-manager-69fd6df768-dhgdt\" 
(UID: \"751224af-7f09-421f-9337-8e187bb9abe9\") " pod="openshift-controller-manager/controller-manager-69fd6df768-dhgdt" Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.345317 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/751224af-7f09-421f-9337-8e187bb9abe9-client-ca\") pod \"controller-manager-69fd6df768-dhgdt\" (UID: \"751224af-7f09-421f-9337-8e187bb9abe9\") " pod="openshift-controller-manager/controller-manager-69fd6df768-dhgdt" Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.374353 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dd64d99c8-bp27p" event={"ID":"f6b539d5-5bc7-4c27-96ed-4aed91e5d774","Type":"ContainerDied","Data":"eb91da4dff21aef87af4dd0e12327ff2956423dd8ee08b1403c6f15c73468bfe"} Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.374389 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6dd64d99c8-bp27p" Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.374413 4672 scope.go:117] "RemoveContainer" containerID="fcfd8e29ecfe03dcfa3c4f2cc8e0226bf63fe6553d40e0544a2af8035794375a" Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.377834 4672 generic.go:334] "Generic (PLEG): container finished" podID="f07446c8-4550-461c-a53d-c1d4bd056cfd" containerID="adf9ab48c7aac5a45eae3893686dd50b38eddb95f81ccf50d8ba0047ef736cae" exitCode=0 Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.377878 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2tc4" event={"ID":"f07446c8-4550-461c-a53d-c1d4bd056cfd","Type":"ContainerDied","Data":"adf9ab48c7aac5a45eae3893686dd50b38eddb95f81ccf50d8ba0047ef736cae"} Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.377896 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2tc4" event={"ID":"f07446c8-4550-461c-a53d-c1d4bd056cfd","Type":"ContainerDied","Data":"4a3e8eb279b65b0f3d3831dee5a7d978d2ccd8b6b76e8bb03d61b4e989a018a6"} Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.377955 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w2tc4" Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.383591 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b47dbb559-gtn5n" Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.388631 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b47dbb559-gtn5n" event={"ID":"941d737b-74a1-4c73-9b3f-9cd0f37cc501","Type":"ContainerDied","Data":"41360c64d5e6ef1799691f490e28b2b7459bfd86b7fe9e7a9529d7788fe15db9"} Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.402631 4672 scope.go:117] "RemoveContainer" containerID="adf9ab48c7aac5a45eae3893686dd50b38eddb95f81ccf50d8ba0047ef736cae" Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.416040 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w2tc4"] Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.432807 4672 scope.go:117] "RemoveContainer" containerID="787ee26f626e77d7d4fe8cee28a7804b45b7b92cf79e95e9a7e6338b11cbcbd0" Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.446105 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/751224af-7f09-421f-9337-8e187bb9abe9-client-ca\") pod \"controller-manager-69fd6df768-dhgdt\" (UID: \"751224af-7f09-421f-9337-8e187bb9abe9\") " pod="openshift-controller-manager/controller-manager-69fd6df768-dhgdt" Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.446173 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/751224af-7f09-421f-9337-8e187bb9abe9-serving-cert\") pod \"controller-manager-69fd6df768-dhgdt\" (UID: \"751224af-7f09-421f-9337-8e187bb9abe9\") " pod="openshift-controller-manager/controller-manager-69fd6df768-dhgdt" Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.446253 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/751224af-7f09-421f-9337-8e187bb9abe9-config\") pod \"controller-manager-69fd6df768-dhgdt\" (UID: \"751224af-7f09-421f-9337-8e187bb9abe9\") " pod="openshift-controller-manager/controller-manager-69fd6df768-dhgdt" Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.446295 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjhzk\" (UniqueName: \"kubernetes.io/projected/751224af-7f09-421f-9337-8e187bb9abe9-kube-api-access-qjhzk\") pod \"controller-manager-69fd6df768-dhgdt\" (UID: \"751224af-7f09-421f-9337-8e187bb9abe9\") " pod="openshift-controller-manager/controller-manager-69fd6df768-dhgdt" Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.446335 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/751224af-7f09-421f-9337-8e187bb9abe9-proxy-ca-bundles\") pod \"controller-manager-69fd6df768-dhgdt\" (UID: \"751224af-7f09-421f-9337-8e187bb9abe9\") " pod="openshift-controller-manager/controller-manager-69fd6df768-dhgdt" Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.447596 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-w2tc4"] Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.448240 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/751224af-7f09-421f-9337-8e187bb9abe9-proxy-ca-bundles\") pod \"controller-manager-69fd6df768-dhgdt\" (UID: \"751224af-7f09-421f-9337-8e187bb9abe9\") " pod="openshift-controller-manager/controller-manager-69fd6df768-dhgdt" Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.449752 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/751224af-7f09-421f-9337-8e187bb9abe9-config\") pod \"controller-manager-69fd6df768-dhgdt\" (UID: 
\"751224af-7f09-421f-9337-8e187bb9abe9\") " pod="openshift-controller-manager/controller-manager-69fd6df768-dhgdt"
Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.449980 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/751224af-7f09-421f-9337-8e187bb9abe9-client-ca\") pod \"controller-manager-69fd6df768-dhgdt\" (UID: \"751224af-7f09-421f-9337-8e187bb9abe9\") " pod="openshift-controller-manager/controller-manager-69fd6df768-dhgdt"
Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.455451 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/751224af-7f09-421f-9337-8e187bb9abe9-serving-cert\") pod \"controller-manager-69fd6df768-dhgdt\" (UID: \"751224af-7f09-421f-9337-8e187bb9abe9\") " pod="openshift-controller-manager/controller-manager-69fd6df768-dhgdt"
Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.467694 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6dd64d99c8-bp27p"]
Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.468876 4672 scope.go:117] "RemoveContainer" containerID="0ed979bfe4c406ced027f2ca892de09b714baac76fcb42ef29cb4f136d208feb"
Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.478865 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6dd64d99c8-bp27p"]
Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.488831 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b47dbb559-gtn5n"]
Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.491626 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjhzk\" (UniqueName: \"kubernetes.io/projected/751224af-7f09-421f-9337-8e187bb9abe9-kube-api-access-qjhzk\") pod \"controller-manager-69fd6df768-dhgdt\" (UID: \"751224af-7f09-421f-9337-8e187bb9abe9\") " pod="openshift-controller-manager/controller-manager-69fd6df768-dhgdt"
Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.494490 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b47dbb559-gtn5n"]
Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.554329 4672 scope.go:117] "RemoveContainer" containerID="adf9ab48c7aac5a45eae3893686dd50b38eddb95f81ccf50d8ba0047ef736cae"
Feb 17 16:06:37 crc kubenswrapper[4672]: E0217 16:06:37.554972 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adf9ab48c7aac5a45eae3893686dd50b38eddb95f81ccf50d8ba0047ef736cae\": container with ID starting with adf9ab48c7aac5a45eae3893686dd50b38eddb95f81ccf50d8ba0047ef736cae not found: ID does not exist" containerID="adf9ab48c7aac5a45eae3893686dd50b38eddb95f81ccf50d8ba0047ef736cae"
Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.555057 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adf9ab48c7aac5a45eae3893686dd50b38eddb95f81ccf50d8ba0047ef736cae"} err="failed to get container status \"adf9ab48c7aac5a45eae3893686dd50b38eddb95f81ccf50d8ba0047ef736cae\": rpc error: code = NotFound desc = could not find container \"adf9ab48c7aac5a45eae3893686dd50b38eddb95f81ccf50d8ba0047ef736cae\": container with ID starting with adf9ab48c7aac5a45eae3893686dd50b38eddb95f81ccf50d8ba0047ef736cae not found: ID does not exist"
Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.555118 4672 scope.go:117] "RemoveContainer" containerID="787ee26f626e77d7d4fe8cee28a7804b45b7b92cf79e95e9a7e6338b11cbcbd0"
Feb 17 16:06:37 crc kubenswrapper[4672]: E0217 16:06:37.555582 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"787ee26f626e77d7d4fe8cee28a7804b45b7b92cf79e95e9a7e6338b11cbcbd0\": container with ID starting with 787ee26f626e77d7d4fe8cee28a7804b45b7b92cf79e95e9a7e6338b11cbcbd0 not found: ID does not exist" containerID="787ee26f626e77d7d4fe8cee28a7804b45b7b92cf79e95e9a7e6338b11cbcbd0"
Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.555646 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"787ee26f626e77d7d4fe8cee28a7804b45b7b92cf79e95e9a7e6338b11cbcbd0"} err="failed to get container status \"787ee26f626e77d7d4fe8cee28a7804b45b7b92cf79e95e9a7e6338b11cbcbd0\": rpc error: code = NotFound desc = could not find container \"787ee26f626e77d7d4fe8cee28a7804b45b7b92cf79e95e9a7e6338b11cbcbd0\": container with ID starting with 787ee26f626e77d7d4fe8cee28a7804b45b7b92cf79e95e9a7e6338b11cbcbd0 not found: ID does not exist"
Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.555688 4672 scope.go:117] "RemoveContainer" containerID="0ed979bfe4c406ced027f2ca892de09b714baac76fcb42ef29cb4f136d208feb"
Feb 17 16:06:37 crc kubenswrapper[4672]: E0217 16:06:37.556104 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ed979bfe4c406ced027f2ca892de09b714baac76fcb42ef29cb4f136d208feb\": container with ID starting with 0ed979bfe4c406ced027f2ca892de09b714baac76fcb42ef29cb4f136d208feb not found: ID does not exist" containerID="0ed979bfe4c406ced027f2ca892de09b714baac76fcb42ef29cb4f136d208feb"
Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.556157 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ed979bfe4c406ced027f2ca892de09b714baac76fcb42ef29cb4f136d208feb"} err="failed to get container status \"0ed979bfe4c406ced027f2ca892de09b714baac76fcb42ef29cb4f136d208feb\": rpc error: code = NotFound desc = could not find container \"0ed979bfe4c406ced027f2ca892de09b714baac76fcb42ef29cb4f136d208feb\": container with ID starting with 0ed979bfe4c406ced027f2ca892de09b714baac76fcb42ef29cb4f136d208feb not found: ID does not exist"
Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.556195 4672 scope.go:117] "RemoveContainer" containerID="09fb292de176ad3c8335962c412d17681a73e61f5d06dca2bf2a06c76e6f8d37"
Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.571050 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69fd6df768-dhgdt"
Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.955681 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="941d737b-74a1-4c73-9b3f-9cd0f37cc501" path="/var/lib/kubelet/pods/941d737b-74a1-4c73-9b3f-9cd0f37cc501/volumes"
Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.956747 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f07446c8-4550-461c-a53d-c1d4bd056cfd" path="/var/lib/kubelet/pods/f07446c8-4550-461c-a53d-c1d4bd056cfd/volumes"
Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.957654 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6b539d5-5bc7-4c27-96ed-4aed91e5d774" path="/var/lib/kubelet/pods/f6b539d5-5bc7-4c27-96ed-4aed91e5d774/volumes"
Feb 17 16:06:37 crc kubenswrapper[4672]: I0217 16:06:37.998501 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69fd6df768-dhgdt"]
Feb 17 16:06:38 crc kubenswrapper[4672]: W0217 16:06:38.005288 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod751224af_7f09_421f_9337_8e187bb9abe9.slice/crio-042c1aa2cd006822a08417a43f7be61172fb2cbcaeaa97b579d3ae8a3fc97ee3 WatchSource:0}: Error finding container 042c1aa2cd006822a08417a43f7be61172fb2cbcaeaa97b579d3ae8a3fc97ee3: Status 404 returned error can't find the container with id 042c1aa2cd006822a08417a43f7be61172fb2cbcaeaa97b579d3ae8a3fc97ee3
Feb 17 16:06:38 crc kubenswrapper[4672]: I0217 16:06:38.236750 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d4cbb6688-dpbt9"]
Feb 17 16:06:38 crc kubenswrapper[4672]: I0217 16:06:38.237669 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d4cbb6688-dpbt9"
Feb 17 16:06:38 crc kubenswrapper[4672]: I0217 16:06:38.242748 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 17 16:06:38 crc kubenswrapper[4672]: I0217 16:06:38.242874 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 17 16:06:38 crc kubenswrapper[4672]: I0217 16:06:38.243180 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 17 16:06:38 crc kubenswrapper[4672]: I0217 16:06:38.243484 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 17 16:06:38 crc kubenswrapper[4672]: I0217 16:06:38.243953 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 17 16:06:38 crc kubenswrapper[4672]: I0217 16:06:38.244302 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 17 16:06:38 crc kubenswrapper[4672]: I0217 16:06:38.261839 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d4cbb6688-dpbt9"]
Feb 17 16:06:38 crc kubenswrapper[4672]: I0217 16:06:38.358709 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr4bk\" (UniqueName: \"kubernetes.io/projected/61826563-80f3-473d-a345-16a690367132-kube-api-access-xr4bk\") pod \"route-controller-manager-6d4cbb6688-dpbt9\" (UID: \"61826563-80f3-473d-a345-16a690367132\") " pod="openshift-route-controller-manager/route-controller-manager-6d4cbb6688-dpbt9"
Feb 17 16:06:38 crc kubenswrapper[4672]: I0217 16:06:38.359154 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61826563-80f3-473d-a345-16a690367132-client-ca\") pod \"route-controller-manager-6d4cbb6688-dpbt9\" (UID: \"61826563-80f3-473d-a345-16a690367132\") " pod="openshift-route-controller-manager/route-controller-manager-6d4cbb6688-dpbt9"
Feb 17 16:06:38 crc kubenswrapper[4672]: I0217 16:06:38.359210 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61826563-80f3-473d-a345-16a690367132-config\") pod \"route-controller-manager-6d4cbb6688-dpbt9\" (UID: \"61826563-80f3-473d-a345-16a690367132\") " pod="openshift-route-controller-manager/route-controller-manager-6d4cbb6688-dpbt9"
Feb 17 16:06:38 crc kubenswrapper[4672]: I0217 16:06:38.359318 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61826563-80f3-473d-a345-16a690367132-serving-cert\") pod \"route-controller-manager-6d4cbb6688-dpbt9\" (UID: \"61826563-80f3-473d-a345-16a690367132\") " pod="openshift-route-controller-manager/route-controller-manager-6d4cbb6688-dpbt9"
Feb 17 16:06:38 crc kubenswrapper[4672]: I0217 16:06:38.393590 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69fd6df768-dhgdt" event={"ID":"751224af-7f09-421f-9337-8e187bb9abe9","Type":"ContainerStarted","Data":"f729ded4c61dd3c6f1db00e8c8e11fb4ba2bcd412c0025cc433f943647f2734a"}
Feb 17 16:06:38 crc kubenswrapper[4672]: I0217 16:06:38.393643 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69fd6df768-dhgdt" event={"ID":"751224af-7f09-421f-9337-8e187bb9abe9","Type":"ContainerStarted","Data":"042c1aa2cd006822a08417a43f7be61172fb2cbcaeaa97b579d3ae8a3fc97ee3"}
Feb 17 16:06:38 crc kubenswrapper[4672]: I0217 16:06:38.394032 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-69fd6df768-dhgdt"
Feb 17 16:06:38 crc kubenswrapper[4672]: I0217 16:06:38.417909 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-69fd6df768-dhgdt" podStartSLOduration=2.417891816 podStartE2EDuration="2.417891816s" podCreationTimestamp="2026-02-17 16:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:06:38.417562088 +0000 UTC m=+207.171650860" watchObservedRunningTime="2026-02-17 16:06:38.417891816 +0000 UTC m=+207.171980548"
Feb 17 16:06:38 crc kubenswrapper[4672]: I0217 16:06:38.428528 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-69fd6df768-dhgdt"
Feb 17 16:06:38 crc kubenswrapper[4672]: I0217 16:06:38.460967 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr4bk\" (UniqueName: \"kubernetes.io/projected/61826563-80f3-473d-a345-16a690367132-kube-api-access-xr4bk\") pod \"route-controller-manager-6d4cbb6688-dpbt9\" (UID: \"61826563-80f3-473d-a345-16a690367132\") " pod="openshift-route-controller-manager/route-controller-manager-6d4cbb6688-dpbt9"
Feb 17 16:06:38 crc kubenswrapper[4672]: I0217 16:06:38.461019 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61826563-80f3-473d-a345-16a690367132-client-ca\") pod \"route-controller-manager-6d4cbb6688-dpbt9\" (UID: \"61826563-80f3-473d-a345-16a690367132\") " pod="openshift-route-controller-manager/route-controller-manager-6d4cbb6688-dpbt9"
Feb 17 16:06:38 crc kubenswrapper[4672]: I0217 16:06:38.461055 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61826563-80f3-473d-a345-16a690367132-config\") pod \"route-controller-manager-6d4cbb6688-dpbt9\" (UID: \"61826563-80f3-473d-a345-16a690367132\") " pod="openshift-route-controller-manager/route-controller-manager-6d4cbb6688-dpbt9"
Feb 17 16:06:38 crc kubenswrapper[4672]: I0217 16:06:38.461097 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61826563-80f3-473d-a345-16a690367132-serving-cert\") pod \"route-controller-manager-6d4cbb6688-dpbt9\" (UID: \"61826563-80f3-473d-a345-16a690367132\") " pod="openshift-route-controller-manager/route-controller-manager-6d4cbb6688-dpbt9"
Feb 17 16:06:38 crc kubenswrapper[4672]: I0217 16:06:38.461994 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61826563-80f3-473d-a345-16a690367132-client-ca\") pod \"route-controller-manager-6d4cbb6688-dpbt9\" (UID: \"61826563-80f3-473d-a345-16a690367132\") " pod="openshift-route-controller-manager/route-controller-manager-6d4cbb6688-dpbt9"
Feb 17 16:06:38 crc kubenswrapper[4672]: I0217 16:06:38.463316 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61826563-80f3-473d-a345-16a690367132-config\") pod \"route-controller-manager-6d4cbb6688-dpbt9\" (UID: \"61826563-80f3-473d-a345-16a690367132\") " pod="openshift-route-controller-manager/route-controller-manager-6d4cbb6688-dpbt9"
Feb 17 16:06:38 crc kubenswrapper[4672]: I0217 16:06:38.469559 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61826563-80f3-473d-a345-16a690367132-serving-cert\") pod \"route-controller-manager-6d4cbb6688-dpbt9\" (UID: \"61826563-80f3-473d-a345-16a690367132\") " pod="openshift-route-controller-manager/route-controller-manager-6d4cbb6688-dpbt9"
Feb 17 16:06:38 crc kubenswrapper[4672]: I0217 16:06:38.484117 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr4bk\" (UniqueName: \"kubernetes.io/projected/61826563-80f3-473d-a345-16a690367132-kube-api-access-xr4bk\") pod \"route-controller-manager-6d4cbb6688-dpbt9\" (UID: \"61826563-80f3-473d-a345-16a690367132\") " pod="openshift-route-controller-manager/route-controller-manager-6d4cbb6688-dpbt9"
Feb 17 16:06:38 crc kubenswrapper[4672]: I0217 16:06:38.553187 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d4cbb6688-dpbt9"
Feb 17 16:06:38 crc kubenswrapper[4672]: I0217 16:06:38.996978 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d4cbb6688-dpbt9"]
Feb 17 16:06:39 crc kubenswrapper[4672]: W0217 16:06:39.001679 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61826563_80f3_473d_a345_16a690367132.slice/crio-bc745b4f71d013a88cea96258340a57fe72d934309bfe150c6a4660a69cc1ab9 WatchSource:0}: Error finding container bc745b4f71d013a88cea96258340a57fe72d934309bfe150c6a4660a69cc1ab9: Status 404 returned error can't find the container with id bc745b4f71d013a88cea96258340a57fe72d934309bfe150c6a4660a69cc1ab9
Feb 17 16:06:39 crc kubenswrapper[4672]: I0217 16:06:39.402622 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d4cbb6688-dpbt9" event={"ID":"61826563-80f3-473d-a345-16a690367132","Type":"ContainerStarted","Data":"7f9761102c7c2aee3d48efc9cac4cb3e2506a3579bdf454be9df8d7a05d7b7c4"}
Feb 17 16:06:39 crc kubenswrapper[4672]: I0217 16:06:39.402661 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d4cbb6688-dpbt9" event={"ID":"61826563-80f3-473d-a345-16a690367132","Type":"ContainerStarted","Data":"bc745b4f71d013a88cea96258340a57fe72d934309bfe150c6a4660a69cc1ab9"}
Feb 17 16:06:39 crc kubenswrapper[4672]: I0217 16:06:39.403014 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6d4cbb6688-dpbt9"
Feb 17 16:06:39 crc kubenswrapper[4672]: I0217 16:06:39.432410 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6d4cbb6688-dpbt9" podStartSLOduration=3.4323822440000002 podStartE2EDuration="3.432382244s" podCreationTimestamp="2026-02-17 16:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:06:39.42811097 +0000 UTC m=+208.182199702" watchObservedRunningTime="2026-02-17 16:06:39.432382244 +0000 UTC m=+208.186471016"
Feb 17 16:06:39 crc kubenswrapper[4672]: I0217 16:06:39.884071 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6d4cbb6688-dpbt9"
Feb 17 16:06:41 crc kubenswrapper[4672]: I0217 16:06:41.558865 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-skvcq"
Feb 17 16:06:41 crc kubenswrapper[4672]: I0217 16:06:41.630569 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-skvcq"
Feb 17 16:06:43 crc kubenswrapper[4672]: I0217 16:06:43.484228 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-skvcq"]
Feb 17 16:06:43 crc kubenswrapper[4672]: I0217 16:06:43.484827 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-skvcq" podUID="a14c1588-0007-41ea-b334-f2bc0b2a5587" containerName="registry-server" containerID="cri-o://7ce58878e34ac656f80e31f15082e8609e2e303b124ec65d388209af45ef87e4" gracePeriod=2
Feb 17 16:06:44 crc kubenswrapper[4672]: I0217 16:06:44.049588 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-skvcq"
Feb 17 16:06:44 crc kubenswrapper[4672]: I0217 16:06:44.145435 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a14c1588-0007-41ea-b334-f2bc0b2a5587-catalog-content\") pod \"a14c1588-0007-41ea-b334-f2bc0b2a5587\" (UID: \"a14c1588-0007-41ea-b334-f2bc0b2a5587\") "
Feb 17 16:06:44 crc kubenswrapper[4672]: I0217 16:06:44.145550 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a14c1588-0007-41ea-b334-f2bc0b2a5587-utilities\") pod \"a14c1588-0007-41ea-b334-f2bc0b2a5587\" (UID: \"a14c1588-0007-41ea-b334-f2bc0b2a5587\") "
Feb 17 16:06:44 crc kubenswrapper[4672]: I0217 16:06:44.145629 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cvrz\" (UniqueName: \"kubernetes.io/projected/a14c1588-0007-41ea-b334-f2bc0b2a5587-kube-api-access-7cvrz\") pod \"a14c1588-0007-41ea-b334-f2bc0b2a5587\" (UID: \"a14c1588-0007-41ea-b334-f2bc0b2a5587\") "
Feb 17 16:06:44 crc kubenswrapper[4672]: I0217 16:06:44.146567 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a14c1588-0007-41ea-b334-f2bc0b2a5587-utilities" (OuterVolumeSpecName: "utilities") pod "a14c1588-0007-41ea-b334-f2bc0b2a5587" (UID: "a14c1588-0007-41ea-b334-f2bc0b2a5587"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:06:44 crc kubenswrapper[4672]: I0217 16:06:44.164904 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a14c1588-0007-41ea-b334-f2bc0b2a5587-kube-api-access-7cvrz" (OuterVolumeSpecName: "kube-api-access-7cvrz") pod "a14c1588-0007-41ea-b334-f2bc0b2a5587" (UID: "a14c1588-0007-41ea-b334-f2bc0b2a5587"). InnerVolumeSpecName "kube-api-access-7cvrz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:06:44 crc kubenswrapper[4672]: I0217 16:06:44.194888 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a14c1588-0007-41ea-b334-f2bc0b2a5587-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a14c1588-0007-41ea-b334-f2bc0b2a5587" (UID: "a14c1588-0007-41ea-b334-f2bc0b2a5587"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:06:44 crc kubenswrapper[4672]: I0217 16:06:44.247548 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a14c1588-0007-41ea-b334-f2bc0b2a5587-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 16:06:44 crc kubenswrapper[4672]: I0217 16:06:44.247579 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cvrz\" (UniqueName: \"kubernetes.io/projected/a14c1588-0007-41ea-b334-f2bc0b2a5587-kube-api-access-7cvrz\") on node \"crc\" DevicePath \"\""
Feb 17 16:06:44 crc kubenswrapper[4672]: I0217 16:06:44.247592 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a14c1588-0007-41ea-b334-f2bc0b2a5587-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 16:06:44 crc kubenswrapper[4672]: I0217 16:06:44.433887 4672 generic.go:334] "Generic (PLEG): container finished" podID="a14c1588-0007-41ea-b334-f2bc0b2a5587" containerID="7ce58878e34ac656f80e31f15082e8609e2e303b124ec65d388209af45ef87e4" exitCode=0
Feb 17 16:06:44 crc kubenswrapper[4672]: I0217 16:06:44.433939 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-skvcq" event={"ID":"a14c1588-0007-41ea-b334-f2bc0b2a5587","Type":"ContainerDied","Data":"7ce58878e34ac656f80e31f15082e8609e2e303b124ec65d388209af45ef87e4"}
Feb 17 16:06:44 crc kubenswrapper[4672]: I0217 16:06:44.433977 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-skvcq" event={"ID":"a14c1588-0007-41ea-b334-f2bc0b2a5587","Type":"ContainerDied","Data":"236a4a560b59b58b6757c4b4134eeee2f064f8083b7c5b2e0465ae37aa29a93c"}
Feb 17 16:06:44 crc kubenswrapper[4672]: I0217 16:06:44.433999 4672 scope.go:117] "RemoveContainer" containerID="7ce58878e34ac656f80e31f15082e8609e2e303b124ec65d388209af45ef87e4"
Feb 17 16:06:44 crc kubenswrapper[4672]: I0217 16:06:44.434010 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-skvcq"
Feb 17 16:06:44 crc kubenswrapper[4672]: I0217 16:06:44.449197 4672 scope.go:117] "RemoveContainer" containerID="704cfbb758c45783862959f5019977841907cfe4e407e0f92e2157824250218e"
Feb 17 16:06:44 crc kubenswrapper[4672]: I0217 16:06:44.469370 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-skvcq"]
Feb 17 16:06:44 crc kubenswrapper[4672]: I0217 16:06:44.474446 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-skvcq"]
Feb 17 16:06:44 crc kubenswrapper[4672]: I0217 16:06:44.480922 4672 scope.go:117] "RemoveContainer" containerID="aac792017eb8d9dee5a920abc92512584f448bd2ba2d8261f47e6c6062805856"
Feb 17 16:06:44 crc kubenswrapper[4672]: I0217 16:06:44.504695 4672 scope.go:117] "RemoveContainer" containerID="7ce58878e34ac656f80e31f15082e8609e2e303b124ec65d388209af45ef87e4"
Feb 17 16:06:44 crc kubenswrapper[4672]: E0217 16:06:44.505726 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ce58878e34ac656f80e31f15082e8609e2e303b124ec65d388209af45ef87e4\": container with ID starting with 7ce58878e34ac656f80e31f15082e8609e2e303b124ec65d388209af45ef87e4 not found: ID does not exist" containerID="7ce58878e34ac656f80e31f15082e8609e2e303b124ec65d388209af45ef87e4"
Feb 17 16:06:44 crc kubenswrapper[4672]: I0217 16:06:44.505777 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ce58878e34ac656f80e31f15082e8609e2e303b124ec65d388209af45ef87e4"} err="failed to get container status \"7ce58878e34ac656f80e31f15082e8609e2e303b124ec65d388209af45ef87e4\": rpc error: code = NotFound desc = could not find container \"7ce58878e34ac656f80e31f15082e8609e2e303b124ec65d388209af45ef87e4\": container with ID starting with 7ce58878e34ac656f80e31f15082e8609e2e303b124ec65d388209af45ef87e4 not found: ID does not exist"
Feb 17 16:06:44 crc kubenswrapper[4672]: I0217 16:06:44.505811 4672 scope.go:117] "RemoveContainer" containerID="704cfbb758c45783862959f5019977841907cfe4e407e0f92e2157824250218e"
Feb 17 16:06:44 crc kubenswrapper[4672]: E0217 16:06:44.506144 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"704cfbb758c45783862959f5019977841907cfe4e407e0f92e2157824250218e\": container with ID starting with 704cfbb758c45783862959f5019977841907cfe4e407e0f92e2157824250218e not found: ID does not exist" containerID="704cfbb758c45783862959f5019977841907cfe4e407e0f92e2157824250218e"
Feb 17 16:06:44 crc kubenswrapper[4672]: I0217 16:06:44.506208 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"704cfbb758c45783862959f5019977841907cfe4e407e0f92e2157824250218e"} err="failed to get container status \"704cfbb758c45783862959f5019977841907cfe4e407e0f92e2157824250218e\": rpc error: code = NotFound desc = could not find container \"704cfbb758c45783862959f5019977841907cfe4e407e0f92e2157824250218e\": container with ID starting with 704cfbb758c45783862959f5019977841907cfe4e407e0f92e2157824250218e not found: ID does not exist"
Feb 17 16:06:44 crc kubenswrapper[4672]: I0217 16:06:44.506247 4672 scope.go:117] "RemoveContainer" containerID="aac792017eb8d9dee5a920abc92512584f448bd2ba2d8261f47e6c6062805856"
Feb 17 16:06:44 crc kubenswrapper[4672]: E0217 16:06:44.506849 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aac792017eb8d9dee5a920abc92512584f448bd2ba2d8261f47e6c6062805856\": container with ID starting with aac792017eb8d9dee5a920abc92512584f448bd2ba2d8261f47e6c6062805856 not found: ID does not exist" containerID="aac792017eb8d9dee5a920abc92512584f448bd2ba2d8261f47e6c6062805856"
Feb 17 16:06:44 crc kubenswrapper[4672]: I0217 16:06:44.506878 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aac792017eb8d9dee5a920abc92512584f448bd2ba2d8261f47e6c6062805856"} err="failed to get container status \"aac792017eb8d9dee5a920abc92512584f448bd2ba2d8261f47e6c6062805856\": rpc error: code = NotFound desc = could not find container \"aac792017eb8d9dee5a920abc92512584f448bd2ba2d8261f47e6c6062805856\": container with ID starting with aac792017eb8d9dee5a920abc92512584f448bd2ba2d8261f47e6c6062805856 not found: ID does not exist"
Feb 17 16:06:45 crc kubenswrapper[4672]: I0217 16:06:45.950292 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a14c1588-0007-41ea-b334-f2bc0b2a5587" path="/var/lib/kubelet/pods/a14c1588-0007-41ea-b334-f2bc0b2a5587/volumes"
Feb 17 16:06:46 crc kubenswrapper[4672]: I0217 16:06:46.993731 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-fds9q" podUID="59e82a1f-2c6a-4938-9696-ffe2eac280ce" containerName="oauth-openshift" containerID="cri-o://8469d573b653a8806c853a1173e0645c56ba099dcf83caa84671c857c933e1b9" gracePeriod=15
Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.455143 4672 generic.go:334] "Generic (PLEG): container finished" podID="59e82a1f-2c6a-4938-9696-ffe2eac280ce" containerID="8469d573b653a8806c853a1173e0645c56ba099dcf83caa84671c857c933e1b9" exitCode=0
Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.455194 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fds9q" event={"ID":"59e82a1f-2c6a-4938-9696-ffe2eac280ce","Type":"ContainerDied","Data":"8469d573b653a8806c853a1173e0645c56ba099dcf83caa84671c857c933e1b9"}
Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.455223 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fds9q" event={"ID":"59e82a1f-2c6a-4938-9696-ffe2eac280ce","Type":"ContainerDied","Data":"218653ffd6e6f1e67a7dbd7e3a9ac0ccf75b109821e487b67c07515f389610ad"}
Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.455240 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="218653ffd6e6f1e67a7dbd7e3a9ac0ccf75b109821e487b67c07515f389610ad"
Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.498365 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fds9q"
Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.597094 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-cliconfig\") pod \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") "
Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.597178 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-session\") pod \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") "
Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.597224 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtvnk\" (UniqueName: \"kubernetes.io/projected/59e82a1f-2c6a-4938-9696-ffe2eac280ce-kube-api-access-wtvnk\") pod \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") "
Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.597285 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/59e82a1f-2c6a-4938-9696-ffe2eac280ce-audit-policies\") pod \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") "
Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.597335 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-user-template-error\") pod \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") "
Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.597390 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-trusted-ca-bundle\") pod \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") "
Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.597408 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-service-ca\") pod \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") "
Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.597437 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-user-template-login\") pod \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") "
Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.597466 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-user-idp-0-file-data\") pod \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") "
Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.597527 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-user-template-provider-selection\") pod \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") "
Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.597562 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/59e82a1f-2c6a-4938-9696-ffe2eac280ce-audit-dir\") pod \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") "
Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.597583 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-serving-cert\") pod \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") "
Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.597679 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-router-certs\") pod \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") "
Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.597714 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-ocp-branding-template\") pod \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\" (UID: \"59e82a1f-2c6a-4938-9696-ffe2eac280ce\") "
Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.598389 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59e82a1f-2c6a-4938-9696-ffe2eac280ce-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "59e82a1f-2c6a-4938-9696-ffe2eac280ce" (UID: "59e82a1f-2c6a-4938-9696-ffe2eac280ce"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.598689 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "59e82a1f-2c6a-4938-9696-ffe2eac280ce" (UID: "59e82a1f-2c6a-4938-9696-ffe2eac280ce"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.598747 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59e82a1f-2c6a-4938-9696-ffe2eac280ce-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "59e82a1f-2c6a-4938-9696-ffe2eac280ce" (UID: "59e82a1f-2c6a-4938-9696-ffe2eac280ce"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.599420 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "59e82a1f-2c6a-4938-9696-ffe2eac280ce" (UID: "59e82a1f-2c6a-4938-9696-ffe2eac280ce"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.600078 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "59e82a1f-2c6a-4938-9696-ffe2eac280ce" (UID: "59e82a1f-2c6a-4938-9696-ffe2eac280ce"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.602700 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59e82a1f-2c6a-4938-9696-ffe2eac280ce-kube-api-access-wtvnk" (OuterVolumeSpecName: "kube-api-access-wtvnk") pod "59e82a1f-2c6a-4938-9696-ffe2eac280ce" (UID: "59e82a1f-2c6a-4938-9696-ffe2eac280ce"). InnerVolumeSpecName "kube-api-access-wtvnk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.602885 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "59e82a1f-2c6a-4938-9696-ffe2eac280ce" (UID: "59e82a1f-2c6a-4938-9696-ffe2eac280ce"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.606625 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "59e82a1f-2c6a-4938-9696-ffe2eac280ce" (UID: "59e82a1f-2c6a-4938-9696-ffe2eac280ce"). InnerVolumeSpecName "v4-0-config-user-template-error".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.607016 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "59e82a1f-2c6a-4938-9696-ffe2eac280ce" (UID: "59e82a1f-2c6a-4938-9696-ffe2eac280ce"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.607335 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "59e82a1f-2c6a-4938-9696-ffe2eac280ce" (UID: "59e82a1f-2c6a-4938-9696-ffe2eac280ce"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.607625 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "59e82a1f-2c6a-4938-9696-ffe2eac280ce" (UID: "59e82a1f-2c6a-4938-9696-ffe2eac280ce"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.607819 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "59e82a1f-2c6a-4938-9696-ffe2eac280ce" (UID: "59e82a1f-2c6a-4938-9696-ffe2eac280ce"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.607922 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "59e82a1f-2c6a-4938-9696-ffe2eac280ce" (UID: "59e82a1f-2c6a-4938-9696-ffe2eac280ce"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.608123 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "59e82a1f-2c6a-4938-9696-ffe2eac280ce" (UID: "59e82a1f-2c6a-4938-9696-ffe2eac280ce"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.699532 4672 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.699597 4672 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.699619 4672 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 
16:06:47.699637 4672 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.699658 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtvnk\" (UniqueName: \"kubernetes.io/projected/59e82a1f-2c6a-4938-9696-ffe2eac280ce-kube-api-access-wtvnk\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.699676 4672 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/59e82a1f-2c6a-4938-9696-ffe2eac280ce-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.699695 4672 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.699712 4672 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.699729 4672 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.699747 4672 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" 
Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.699766 4672 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.699788 4672 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.699808 4672 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/59e82a1f-2c6a-4938-9696-ffe2eac280ce-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:47 crc kubenswrapper[4672]: I0217 16:06:47.699825 4672 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/59e82a1f-2c6a-4938-9696-ffe2eac280ce-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:06:48 crc kubenswrapper[4672]: I0217 16:06:48.461397 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fds9q" Feb 17 16:06:48 crc kubenswrapper[4672]: I0217 16:06:48.486863 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fds9q"] Feb 17 16:06:48 crc kubenswrapper[4672]: I0217 16:06:48.492238 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fds9q"] Feb 17 16:06:49 crc kubenswrapper[4672]: I0217 16:06:49.957307 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59e82a1f-2c6a-4938-9696-ffe2eac280ce" path="/var/lib/kubelet/pods/59e82a1f-2c6a-4938-9696-ffe2eac280ce/volumes" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.242860 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-84fc7bd96f-849hx"] Feb 17 16:06:55 crc kubenswrapper[4672]: E0217 16:06:55.243082 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a14c1588-0007-41ea-b334-f2bc0b2a5587" containerName="registry-server" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.243096 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="a14c1588-0007-41ea-b334-f2bc0b2a5587" containerName="registry-server" Feb 17 16:06:55 crc kubenswrapper[4672]: E0217 16:06:55.243109 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a14c1588-0007-41ea-b334-f2bc0b2a5587" containerName="extract-content" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.243116 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="a14c1588-0007-41ea-b334-f2bc0b2a5587" containerName="extract-content" Feb 17 16:06:55 crc kubenswrapper[4672]: E0217 16:06:55.243123 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a14c1588-0007-41ea-b334-f2bc0b2a5587" containerName="extract-utilities" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.243131 4672 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a14c1588-0007-41ea-b334-f2bc0b2a5587" containerName="extract-utilities" Feb 17 16:06:55 crc kubenswrapper[4672]: E0217 16:06:55.243152 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e82a1f-2c6a-4938-9696-ffe2eac280ce" containerName="oauth-openshift" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.243159 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e82a1f-2c6a-4938-9696-ffe2eac280ce" containerName="oauth-openshift" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.243287 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="a14c1588-0007-41ea-b334-f2bc0b2a5587" containerName="registry-server" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.243302 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="59e82a1f-2c6a-4938-9696-ffe2eac280ce" containerName="oauth-openshift" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.243733 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.258032 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.258459 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.258697 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.258847 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.258954 4672 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.259103 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.269557 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.269620 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.269772 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.270161 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.270221 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.270345 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.287315 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.287792 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.288990 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-84fc7bd96f-849hx"] Feb 17 16:06:55 crc kubenswrapper[4672]: 
I0217 16:06:55.294075 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.297022 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-audit-policies\") pod \"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.297251 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.297411 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.297634 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98tkp\" (UniqueName: \"kubernetes.io/projected/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-kube-api-access-98tkp\") pod \"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx" Feb 17 16:06:55 
crc kubenswrapper[4672]: I0217 16:06:55.297818 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.298012 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-v4-0-config-user-template-error\") pod \"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.298180 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-v4-0-config-user-template-login\") pod \"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.298342 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-audit-dir\") pod \"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.298515 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" 
(UniqueName: \"kubernetes.io/secret/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-v4-0-config-system-router-certs\") pod \"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.298807 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.298989 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.299286 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-v4-0-config-system-session\") pod \"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.299455 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.299664 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-v4-0-config-system-service-ca\") pod \"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.401470 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-v4-0-config-user-template-error\") pod \"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.401567 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-v4-0-config-user-template-login\") pod \"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.401606 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-audit-dir\") pod \"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.401646 4672 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-v4-0-config-system-router-certs\") pod \"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.401687 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.401730 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.401774 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-v4-0-config-system-session\") pod \"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.401804 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.401855 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-v4-0-config-system-service-ca\") pod \"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.401909 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-audit-policies\") pod \"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.401948 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.401980 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx" Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.402018 4672 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98tkp\" (UniqueName: \"kubernetes.io/projected/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-kube-api-access-98tkp\") pod \"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx"
Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.402061 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx"
Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.403213 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-audit-dir\") pod \"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx"
Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.403653 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-audit-policies\") pod \"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx"
Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.404811 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx"
Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.405342 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-v4-0-config-system-service-ca\") pod \"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx"
Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.405740 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx"
Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.410378 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-v4-0-config-user-template-error\") pod \"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx"
Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.410804 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx"
Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.411065 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-v4-0-config-system-session\") pod \"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx"
Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.411492 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-v4-0-config-system-router-certs\") pod \"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx"
Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.414379 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx"
Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.414722 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx"
Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.415651 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-v4-0-config-user-template-login\") pod \"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx"
Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.415817 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx"
Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.424049 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98tkp\" (UniqueName: \"kubernetes.io/projected/c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6-kube-api-access-98tkp\") pod \"oauth-openshift-84fc7bd96f-849hx\" (UID: \"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6\") " pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx"
Feb 17 16:06:55 crc kubenswrapper[4672]: I0217 16:06:55.590011 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx"
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.052495 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-84fc7bd96f-849hx"]
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.194541 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-69fd6df768-dhgdt"]
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.194791 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-69fd6df768-dhgdt" podUID="751224af-7f09-421f-9337-8e187bb9abe9" containerName="controller-manager" containerID="cri-o://f729ded4c61dd3c6f1db00e8c8e11fb4ba2bcd412c0025cc433f943647f2734a" gracePeriod=30
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.300152 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d4cbb6688-dpbt9"]
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.300389 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6d4cbb6688-dpbt9" podUID="61826563-80f3-473d-a345-16a690367132" containerName="route-controller-manager" containerID="cri-o://7f9761102c7c2aee3d48efc9cac4cb3e2506a3579bdf454be9df8d7a05d7b7c4" gracePeriod=30
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.515402 4672 generic.go:334] "Generic (PLEG): container finished" podID="751224af-7f09-421f-9337-8e187bb9abe9" containerID="f729ded4c61dd3c6f1db00e8c8e11fb4ba2bcd412c0025cc433f943647f2734a" exitCode=0
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.515459 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69fd6df768-dhgdt" event={"ID":"751224af-7f09-421f-9337-8e187bb9abe9","Type":"ContainerDied","Data":"f729ded4c61dd3c6f1db00e8c8e11fb4ba2bcd412c0025cc433f943647f2734a"}
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.518177 4672 generic.go:334] "Generic (PLEG): container finished" podID="61826563-80f3-473d-a345-16a690367132" containerID="7f9761102c7c2aee3d48efc9cac4cb3e2506a3579bdf454be9df8d7a05d7b7c4" exitCode=0
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.518236 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d4cbb6688-dpbt9" event={"ID":"61826563-80f3-473d-a345-16a690367132","Type":"ContainerDied","Data":"7f9761102c7c2aee3d48efc9cac4cb3e2506a3579bdf454be9df8d7a05d7b7c4"}
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.523400 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx" event={"ID":"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6","Type":"ContainerStarted","Data":"da6c0b93effdd320cc2783b2acb880d776a443381aa7366674628e4699147c1e"}
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.523434 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx" event={"ID":"c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6","Type":"ContainerStarted","Data":"072bf2a6278bfddd67f174f321cc4822fd1b7ad749138b7abc2bf3d2091ee2e4"}
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.525906 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx"
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.531275 4672 patch_prober.go:28] interesting pod/oauth-openshift-84fc7bd96f-849hx container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.63:6443/healthz\": dial tcp 10.217.0.63:6443: connect: connection refused" start-of-body=
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.531317 4672 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx" podUID="c22a8a37-cbdd-4aed-8bc1-39d57dc5a4c6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.63:6443/healthz\": dial tcp 10.217.0.63:6443: connect: connection refused"
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.546771 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx" podStartSLOduration=34.5467543 podStartE2EDuration="34.5467543s" podCreationTimestamp="2026-02-17 16:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:06:56.542852495 +0000 UTC m=+225.296941227" watchObservedRunningTime="2026-02-17 16:06:56.5467543 +0000 UTC m=+225.300843032"
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.763875 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d4cbb6688-dpbt9"
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.767766 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69fd6df768-dhgdt"
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.819393 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjhzk\" (UniqueName: \"kubernetes.io/projected/751224af-7f09-421f-9337-8e187bb9abe9-kube-api-access-qjhzk\") pod \"751224af-7f09-421f-9337-8e187bb9abe9\" (UID: \"751224af-7f09-421f-9337-8e187bb9abe9\") "
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.819444 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/751224af-7f09-421f-9337-8e187bb9abe9-client-ca\") pod \"751224af-7f09-421f-9337-8e187bb9abe9\" (UID: \"751224af-7f09-421f-9337-8e187bb9abe9\") "
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.819472 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61826563-80f3-473d-a345-16a690367132-config\") pod \"61826563-80f3-473d-a345-16a690367132\" (UID: \"61826563-80f3-473d-a345-16a690367132\") "
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.819494 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61826563-80f3-473d-a345-16a690367132-client-ca\") pod \"61826563-80f3-473d-a345-16a690367132\" (UID: \"61826563-80f3-473d-a345-16a690367132\") "
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.819596 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/751224af-7f09-421f-9337-8e187bb9abe9-serving-cert\") pod \"751224af-7f09-421f-9337-8e187bb9abe9\" (UID: \"751224af-7f09-421f-9337-8e187bb9abe9\") "
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.819612 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/751224af-7f09-421f-9337-8e187bb9abe9-proxy-ca-bundles\") pod \"751224af-7f09-421f-9337-8e187bb9abe9\" (UID: \"751224af-7f09-421f-9337-8e187bb9abe9\") "
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.819638 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/751224af-7f09-421f-9337-8e187bb9abe9-config\") pod \"751224af-7f09-421f-9337-8e187bb9abe9\" (UID: \"751224af-7f09-421f-9337-8e187bb9abe9\") "
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.819654 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xr4bk\" (UniqueName: \"kubernetes.io/projected/61826563-80f3-473d-a345-16a690367132-kube-api-access-xr4bk\") pod \"61826563-80f3-473d-a345-16a690367132\" (UID: \"61826563-80f3-473d-a345-16a690367132\") "
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.819671 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61826563-80f3-473d-a345-16a690367132-serving-cert\") pod \"61826563-80f3-473d-a345-16a690367132\" (UID: \"61826563-80f3-473d-a345-16a690367132\") "
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.822363 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/751224af-7f09-421f-9337-8e187bb9abe9-client-ca" (OuterVolumeSpecName: "client-ca") pod "751224af-7f09-421f-9337-8e187bb9abe9" (UID: "751224af-7f09-421f-9337-8e187bb9abe9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.822782 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/751224af-7f09-421f-9337-8e187bb9abe9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "751224af-7f09-421f-9337-8e187bb9abe9" (UID: "751224af-7f09-421f-9337-8e187bb9abe9"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.822877 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61826563-80f3-473d-a345-16a690367132-config" (OuterVolumeSpecName: "config") pod "61826563-80f3-473d-a345-16a690367132" (UID: "61826563-80f3-473d-a345-16a690367132"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.822997 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/751224af-7f09-421f-9337-8e187bb9abe9-config" (OuterVolumeSpecName: "config") pod "751224af-7f09-421f-9337-8e187bb9abe9" (UID: "751224af-7f09-421f-9337-8e187bb9abe9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.826096 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61826563-80f3-473d-a345-16a690367132-client-ca" (OuterVolumeSpecName: "client-ca") pod "61826563-80f3-473d-a345-16a690367132" (UID: "61826563-80f3-473d-a345-16a690367132"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.826237 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/751224af-7f09-421f-9337-8e187bb9abe9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "751224af-7f09-421f-9337-8e187bb9abe9" (UID: "751224af-7f09-421f-9337-8e187bb9abe9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.826309 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61826563-80f3-473d-a345-16a690367132-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "61826563-80f3-473d-a345-16a690367132" (UID: "61826563-80f3-473d-a345-16a690367132"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.826346 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61826563-80f3-473d-a345-16a690367132-kube-api-access-xr4bk" (OuterVolumeSpecName: "kube-api-access-xr4bk") pod "61826563-80f3-473d-a345-16a690367132" (UID: "61826563-80f3-473d-a345-16a690367132"). InnerVolumeSpecName "kube-api-access-xr4bk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.828896 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/751224af-7f09-421f-9337-8e187bb9abe9-kube-api-access-qjhzk" (OuterVolumeSpecName: "kube-api-access-qjhzk") pod "751224af-7f09-421f-9337-8e187bb9abe9" (UID: "751224af-7f09-421f-9337-8e187bb9abe9"). InnerVolumeSpecName "kube-api-access-qjhzk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.920929 4672 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/751224af-7f09-421f-9337-8e187bb9abe9-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.920968 4672 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/751224af-7f09-421f-9337-8e187bb9abe9-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.920981 4672 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61826563-80f3-473d-a345-16a690367132-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.920989 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/751224af-7f09-421f-9337-8e187bb9abe9-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.920998 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xr4bk\" (UniqueName: \"kubernetes.io/projected/61826563-80f3-473d-a345-16a690367132-kube-api-access-xr4bk\") on node \"crc\" DevicePath \"\""
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.921010 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjhzk\" (UniqueName: \"kubernetes.io/projected/751224af-7f09-421f-9337-8e187bb9abe9-kube-api-access-qjhzk\") on node \"crc\" DevicePath \"\""
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.921019 4672 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/751224af-7f09-421f-9337-8e187bb9abe9-client-ca\") on node \"crc\" DevicePath \"\""
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.921027 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61826563-80f3-473d-a345-16a690367132-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:06:56 crc kubenswrapper[4672]: I0217 16:06:56.921035 4672 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61826563-80f3-473d-a345-16a690367132-client-ca\") on node \"crc\" DevicePath \"\""
Feb 17 16:06:57 crc kubenswrapper[4672]: I0217 16:06:57.243551 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-79c585c586-hvcvb"]
Feb 17 16:06:57 crc kubenswrapper[4672]: E0217 16:06:57.258229 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61826563-80f3-473d-a345-16a690367132" containerName="route-controller-manager"
Feb 17 16:06:57 crc kubenswrapper[4672]: I0217 16:06:57.258635 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="61826563-80f3-473d-a345-16a690367132" containerName="route-controller-manager"
Feb 17 16:06:57 crc kubenswrapper[4672]: E0217 16:06:57.258685 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="751224af-7f09-421f-9337-8e187bb9abe9" containerName="controller-manager"
Feb 17 16:06:57 crc kubenswrapper[4672]: I0217 16:06:57.258696 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="751224af-7f09-421f-9337-8e187bb9abe9" containerName="controller-manager"
Feb 17 16:06:57 crc kubenswrapper[4672]: I0217 16:06:57.260948 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="61826563-80f3-473d-a345-16a690367132" containerName="route-controller-manager"
Feb 17 16:06:57 crc kubenswrapper[4672]: I0217 16:06:57.261027 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="751224af-7f09-421f-9337-8e187bb9abe9" containerName="controller-manager"
Feb 17 16:06:57 crc kubenswrapper[4672]: I0217 16:06:57.261746 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-79c585c586-hvcvb"]
Feb 17 16:06:57 crc kubenswrapper[4672]: I0217 16:06:57.261903 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-79c585c586-hvcvb"
Feb 17 16:06:57 crc kubenswrapper[4672]: I0217 16:06:57.324883 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0151b258-04f1-4275-9069-f7ed844a7296-client-ca\") pod \"controller-manager-79c585c586-hvcvb\" (UID: \"0151b258-04f1-4275-9069-f7ed844a7296\") " pod="openshift-controller-manager/controller-manager-79c585c586-hvcvb"
Feb 17 16:06:57 crc kubenswrapper[4672]: I0217 16:06:57.324947 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rc6r\" (UniqueName: \"kubernetes.io/projected/0151b258-04f1-4275-9069-f7ed844a7296-kube-api-access-4rc6r\") pod \"controller-manager-79c585c586-hvcvb\" (UID: \"0151b258-04f1-4275-9069-f7ed844a7296\") " pod="openshift-controller-manager/controller-manager-79c585c586-hvcvb"
Feb 17 16:06:57 crc kubenswrapper[4672]: I0217 16:06:57.324977 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0151b258-04f1-4275-9069-f7ed844a7296-serving-cert\") pod \"controller-manager-79c585c586-hvcvb\" (UID: \"0151b258-04f1-4275-9069-f7ed844a7296\") " pod="openshift-controller-manager/controller-manager-79c585c586-hvcvb"
Feb 17 16:06:57 crc kubenswrapper[4672]: I0217 16:06:57.325056 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0151b258-04f1-4275-9069-f7ed844a7296-proxy-ca-bundles\") pod \"controller-manager-79c585c586-hvcvb\" (UID: \"0151b258-04f1-4275-9069-f7ed844a7296\") " pod="openshift-controller-manager/controller-manager-79c585c586-hvcvb"
Feb 17 16:06:57 crc kubenswrapper[4672]: I0217 16:06:57.325111 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0151b258-04f1-4275-9069-f7ed844a7296-config\") pod \"controller-manager-79c585c586-hvcvb\" (UID: \"0151b258-04f1-4275-9069-f7ed844a7296\") " pod="openshift-controller-manager/controller-manager-79c585c586-hvcvb"
Feb 17 16:06:57 crc kubenswrapper[4672]: I0217 16:06:57.426553 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0151b258-04f1-4275-9069-f7ed844a7296-client-ca\") pod \"controller-manager-79c585c586-hvcvb\" (UID: \"0151b258-04f1-4275-9069-f7ed844a7296\") " pod="openshift-controller-manager/controller-manager-79c585c586-hvcvb"
Feb 17 16:06:57 crc kubenswrapper[4672]: I0217 16:06:57.426633 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rc6r\" (UniqueName: \"kubernetes.io/projected/0151b258-04f1-4275-9069-f7ed844a7296-kube-api-access-4rc6r\") pod \"controller-manager-79c585c586-hvcvb\" (UID: \"0151b258-04f1-4275-9069-f7ed844a7296\") " pod="openshift-controller-manager/controller-manager-79c585c586-hvcvb"
Feb 17 16:06:57 crc kubenswrapper[4672]: I0217 16:06:57.426672 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0151b258-04f1-4275-9069-f7ed844a7296-serving-cert\") pod \"controller-manager-79c585c586-hvcvb\" (UID: \"0151b258-04f1-4275-9069-f7ed844a7296\") " pod="openshift-controller-manager/controller-manager-79c585c586-hvcvb"
Feb 17 16:06:57 crc kubenswrapper[4672]: I0217 16:06:57.426717 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0151b258-04f1-4275-9069-f7ed844a7296-proxy-ca-bundles\") pod \"controller-manager-79c585c586-hvcvb\" (UID: \"0151b258-04f1-4275-9069-f7ed844a7296\") " pod="openshift-controller-manager/controller-manager-79c585c586-hvcvb"
Feb 17 16:06:57 crc kubenswrapper[4672]: I0217 16:06:57.426755 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0151b258-04f1-4275-9069-f7ed844a7296-config\") pod \"controller-manager-79c585c586-hvcvb\" (UID: \"0151b258-04f1-4275-9069-f7ed844a7296\") " pod="openshift-controller-manager/controller-manager-79c585c586-hvcvb"
Feb 17 16:06:57 crc kubenswrapper[4672]: I0217 16:06:57.429200 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0151b258-04f1-4275-9069-f7ed844a7296-config\") pod \"controller-manager-79c585c586-hvcvb\" (UID: \"0151b258-04f1-4275-9069-f7ed844a7296\") " pod="openshift-controller-manager/controller-manager-79c585c586-hvcvb"
Feb 17 16:06:57 crc kubenswrapper[4672]: I0217 16:06:57.429920 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0151b258-04f1-4275-9069-f7ed844a7296-client-ca\") pod \"controller-manager-79c585c586-hvcvb\" (UID: \"0151b258-04f1-4275-9069-f7ed844a7296\") " pod="openshift-controller-manager/controller-manager-79c585c586-hvcvb"
Feb 17 16:06:57 crc kubenswrapper[4672]: I0217 16:06:57.432503 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0151b258-04f1-4275-9069-f7ed844a7296-proxy-ca-bundles\") pod \"controller-manager-79c585c586-hvcvb\" (UID: \"0151b258-04f1-4275-9069-f7ed844a7296\") " pod="openshift-controller-manager/controller-manager-79c585c586-hvcvb"
Feb 17 16:06:57 crc kubenswrapper[4672]: I0217 16:06:57.434185 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0151b258-04f1-4275-9069-f7ed844a7296-serving-cert\") pod \"controller-manager-79c585c586-hvcvb\" (UID: \"0151b258-04f1-4275-9069-f7ed844a7296\") " pod="openshift-controller-manager/controller-manager-79c585c586-hvcvb"
Feb 17 16:06:57 crc kubenswrapper[4672]: I0217 16:06:57.445468 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rc6r\" (UniqueName: \"kubernetes.io/projected/0151b258-04f1-4275-9069-f7ed844a7296-kube-api-access-4rc6r\") pod \"controller-manager-79c585c586-hvcvb\" (UID: \"0151b258-04f1-4275-9069-f7ed844a7296\") " pod="openshift-controller-manager/controller-manager-79c585c586-hvcvb"
Feb 17 16:06:57 crc kubenswrapper[4672]: I0217 16:06:57.530358 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d4cbb6688-dpbt9" event={"ID":"61826563-80f3-473d-a345-16a690367132","Type":"ContainerDied","Data":"bc745b4f71d013a88cea96258340a57fe72d934309bfe150c6a4660a69cc1ab9"}
Feb 17 16:06:57 crc kubenswrapper[4672]: I0217 16:06:57.530382 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d4cbb6688-dpbt9"
Feb 17 16:06:57 crc kubenswrapper[4672]: I0217 16:06:57.530445 4672 scope.go:117] "RemoveContainer" containerID="7f9761102c7c2aee3d48efc9cac4cb3e2506a3579bdf454be9df8d7a05d7b7c4"
Feb 17 16:06:57 crc kubenswrapper[4672]: I0217 16:06:57.532725 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69fd6df768-dhgdt" event={"ID":"751224af-7f09-421f-9337-8e187bb9abe9","Type":"ContainerDied","Data":"042c1aa2cd006822a08417a43f7be61172fb2cbcaeaa97b579d3ae8a3fc97ee3"}
Feb 17 16:06:57 crc kubenswrapper[4672]: I0217 16:06:57.532941 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69fd6df768-dhgdt"
Feb 17 16:06:57 crc kubenswrapper[4672]: I0217 16:06:57.538283 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-84fc7bd96f-849hx"
Feb 17 16:06:57 crc kubenswrapper[4672]: I0217 16:06:57.558951 4672 scope.go:117] "RemoveContainer" containerID="f729ded4c61dd3c6f1db00e8c8e11fb4ba2bcd412c0025cc433f943647f2734a"
Feb 17 16:06:57 crc kubenswrapper[4672]: I0217 16:06:57.566601 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 16:06:57 crc kubenswrapper[4672]: I0217 16:06:57.566780 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 16:06:57 crc kubenswrapper[4672]: I0217 16:06:57.566856 4672 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs"
Feb 17 16:06:57 crc kubenswrapper[4672]: I0217 16:06:57.567979 4672 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"796310e24dd456ebe7e3886fd47d09ecf942ee5939fc71da9839c3d89b4a45e1"} pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 17 16:06:57 crc kubenswrapper[4672]: I0217 16:06:57.568181 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" containerID="cri-o://796310e24dd456ebe7e3886fd47d09ecf942ee5939fc71da9839c3d89b4a45e1" gracePeriod=600
Feb 17 16:06:57 crc kubenswrapper[4672]: I0217 16:06:57.584066 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-79c585c586-hvcvb"
Feb 17 16:06:57 crc kubenswrapper[4672]: I0217 16:06:57.599839 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d4cbb6688-dpbt9"]
Feb 17 16:06:57 crc kubenswrapper[4672]: I0217 16:06:57.612352 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d4cbb6688-dpbt9"]
Feb 17 16:06:57 crc kubenswrapper[4672]: I0217 16:06:57.622147 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-69fd6df768-dhgdt"]
Feb 17 16:06:57 crc kubenswrapper[4672]: I0217 16:06:57.625576 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-69fd6df768-dhgdt"]
Feb 17 16:06:57 crc kubenswrapper[4672]: I0217 16:06:57.951487 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61826563-80f3-473d-a345-16a690367132" path="/var/lib/kubelet/pods/61826563-80f3-473d-a345-16a690367132/volumes"
Feb 17 16:06:57 crc kubenswrapper[4672]: I0217 16:06:57.952002 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="751224af-7f09-421f-9337-8e187bb9abe9" path="/var/lib/kubelet/pods/751224af-7f09-421f-9337-8e187bb9abe9/volumes"
Feb 17 16:06:58 crc kubenswrapper[4672]: I0217 16:06:58.108718 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-79c585c586-hvcvb"]
Feb 17 16:06:58 crc kubenswrapper[4672]: W0217 16:06:58.118185 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0151b258_04f1_4275_9069_f7ed844a7296.slice/crio-c7d418d72f6b7a482168327ef441582b2639e24c2e419a6d3905e0207f1bf833 WatchSource:0}: Error finding container c7d418d72f6b7a482168327ef441582b2639e24c2e419a6d3905e0207f1bf833: Status 404 returned error can't find the container with id c7d418d72f6b7a482168327ef441582b2639e24c2e419a6d3905e0207f1bf833
Feb 17 16:06:58 crc kubenswrapper[4672]: I0217 16:06:58.245920 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cdc776c58-stshg"]
Feb 17 16:06:58 crc kubenswrapper[4672]: I0217 16:06:58.246742 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cdc776c58-stshg"
Feb 17 16:06:58 crc kubenswrapper[4672]: I0217 16:06:58.248986 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 17 16:06:58 crc kubenswrapper[4672]: I0217 16:06:58.249880 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 17 16:06:58 crc kubenswrapper[4672]: I0217 16:06:58.252229 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 17 16:06:58 crc kubenswrapper[4672]: I0217 16:06:58.252554 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 17 16:06:58 crc kubenswrapper[4672]: I0217 16:06:58.252729 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 17 16:06:58 crc kubenswrapper[4672]: I0217 16:06:58.252947 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 17 16:06:58 crc kubenswrapper[4672]: I0217 16:06:58.270896 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cdc776c58-stshg"]
Feb 17 16:06:58 crc kubenswrapper[4672]: I0217 16:06:58.338450 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nr4k\" (UniqueName: \"kubernetes.io/projected/0b3e324d-0dd9-4d76-a74e-afefdbb2d75a-kube-api-access-6nr4k\") pod \"route-controller-manager-6cdc776c58-stshg\" (UID: \"0b3e324d-0dd9-4d76-a74e-afefdbb2d75a\") " pod="openshift-route-controller-manager/route-controller-manager-6cdc776c58-stshg"
Feb 17 16:06:58 crc kubenswrapper[4672]: I0217 16:06:58.338505 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b3e324d-0dd9-4d76-a74e-afefdbb2d75a-serving-cert\") pod \"route-controller-manager-6cdc776c58-stshg\" (UID: \"0b3e324d-0dd9-4d76-a74e-afefdbb2d75a\") " pod="openshift-route-controller-manager/route-controller-manager-6cdc776c58-stshg"
Feb 17 16:06:58 crc kubenswrapper[4672]: I0217 16:06:58.338614 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b3e324d-0dd9-4d76-a74e-afefdbb2d75a-config\") pod \"route-controller-manager-6cdc776c58-stshg\" (UID: \"0b3e324d-0dd9-4d76-a74e-afefdbb2d75a\") " pod="openshift-route-controller-manager/route-controller-manager-6cdc776c58-stshg"
Feb 17 16:06:58 crc kubenswrapper[4672]: I0217 16:06:58.338685 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b3e324d-0dd9-4d76-a74e-afefdbb2d75a-client-ca\") pod \"route-controller-manager-6cdc776c58-stshg\" (UID: \"0b3e324d-0dd9-4d76-a74e-afefdbb2d75a\") " pod="openshift-route-controller-manager/route-controller-manager-6cdc776c58-stshg"
Feb 17 16:06:58 crc kubenswrapper[4672]: I0217 16:06:58.439681 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nr4k\" (UniqueName: \"kubernetes.io/projected/0b3e324d-0dd9-4d76-a74e-afefdbb2d75a-kube-api-access-6nr4k\") pod \"route-controller-manager-6cdc776c58-stshg\" (UID: \"0b3e324d-0dd9-4d76-a74e-afefdbb2d75a\") " pod="openshift-route-controller-manager/route-controller-manager-6cdc776c58-stshg"
Feb 17 16:06:58 crc kubenswrapper[4672]: I0217 16:06:58.439732 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b3e324d-0dd9-4d76-a74e-afefdbb2d75a-serving-cert\") pod \"route-controller-manager-6cdc776c58-stshg\" (UID: \"0b3e324d-0dd9-4d76-a74e-afefdbb2d75a\") " pod="openshift-route-controller-manager/route-controller-manager-6cdc776c58-stshg"
Feb 17 16:06:58 crc kubenswrapper[4672]: I0217 16:06:58.439767 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b3e324d-0dd9-4d76-a74e-afefdbb2d75a-config\") pod \"route-controller-manager-6cdc776c58-stshg\" (UID: \"0b3e324d-0dd9-4d76-a74e-afefdbb2d75a\") " pod="openshift-route-controller-manager/route-controller-manager-6cdc776c58-stshg"
Feb 17 16:06:58 crc kubenswrapper[4672]: I0217 16:06:58.439807 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b3e324d-0dd9-4d76-a74e-afefdbb2d75a-client-ca\") pod \"route-controller-manager-6cdc776c58-stshg\" (UID: \"0b3e324d-0dd9-4d76-a74e-afefdbb2d75a\") " pod="openshift-route-controller-manager/route-controller-manager-6cdc776c58-stshg"
Feb 17 16:06:58 crc kubenswrapper[4672]: I0217 16:06:58.440582 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName:
\"kubernetes.io/configmap/0b3e324d-0dd9-4d76-a74e-afefdbb2d75a-client-ca\") pod \"route-controller-manager-6cdc776c58-stshg\" (UID: \"0b3e324d-0dd9-4d76-a74e-afefdbb2d75a\") " pod="openshift-route-controller-manager/route-controller-manager-6cdc776c58-stshg" Feb 17 16:06:58 crc kubenswrapper[4672]: I0217 16:06:58.440973 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b3e324d-0dd9-4d76-a74e-afefdbb2d75a-config\") pod \"route-controller-manager-6cdc776c58-stshg\" (UID: \"0b3e324d-0dd9-4d76-a74e-afefdbb2d75a\") " pod="openshift-route-controller-manager/route-controller-manager-6cdc776c58-stshg" Feb 17 16:06:58 crc kubenswrapper[4672]: I0217 16:06:58.445941 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b3e324d-0dd9-4d76-a74e-afefdbb2d75a-serving-cert\") pod \"route-controller-manager-6cdc776c58-stshg\" (UID: \"0b3e324d-0dd9-4d76-a74e-afefdbb2d75a\") " pod="openshift-route-controller-manager/route-controller-manager-6cdc776c58-stshg" Feb 17 16:06:58 crc kubenswrapper[4672]: I0217 16:06:58.455683 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nr4k\" (UniqueName: \"kubernetes.io/projected/0b3e324d-0dd9-4d76-a74e-afefdbb2d75a-kube-api-access-6nr4k\") pod \"route-controller-manager-6cdc776c58-stshg\" (UID: \"0b3e324d-0dd9-4d76-a74e-afefdbb2d75a\") " pod="openshift-route-controller-manager/route-controller-manager-6cdc776c58-stshg" Feb 17 16:06:58 crc kubenswrapper[4672]: I0217 16:06:58.545496 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79c585c586-hvcvb" event={"ID":"0151b258-04f1-4275-9069-f7ed844a7296","Type":"ContainerStarted","Data":"3c42d3ce0f24e152fe72dc878c7d65802fc2972c03d8cb6a6658632516038eac"} Feb 17 16:06:58 crc kubenswrapper[4672]: I0217 16:06:58.545551 4672 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-controller-manager/controller-manager-79c585c586-hvcvb" event={"ID":"0151b258-04f1-4275-9069-f7ed844a7296","Type":"ContainerStarted","Data":"c7d418d72f6b7a482168327ef441582b2639e24c2e419a6d3905e0207f1bf833"} Feb 17 16:06:58 crc kubenswrapper[4672]: I0217 16:06:58.545713 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-79c585c586-hvcvb" Feb 17 16:06:58 crc kubenswrapper[4672]: I0217 16:06:58.548642 4672 generic.go:334] "Generic (PLEG): container finished" podID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerID="796310e24dd456ebe7e3886fd47d09ecf942ee5939fc71da9839c3d89b4a45e1" exitCode=0 Feb 17 16:06:58 crc kubenswrapper[4672]: I0217 16:06:58.548805 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerDied","Data":"796310e24dd456ebe7e3886fd47d09ecf942ee5939fc71da9839c3d89b4a45e1"} Feb 17 16:06:58 crc kubenswrapper[4672]: I0217 16:06:58.548838 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerStarted","Data":"0f3ebbc55d351841753f9bbb525ff0055c2fbedda4c7326b4b7118110b3bdaef"} Feb 17 16:06:58 crc kubenswrapper[4672]: I0217 16:06:58.550842 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-79c585c586-hvcvb" Feb 17 16:06:58 crc kubenswrapper[4672]: I0217 16:06:58.564425 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-79c585c586-hvcvb" podStartSLOduration=2.564409289 podStartE2EDuration="2.564409289s" podCreationTimestamp="2026-02-17 16:06:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-17 16:06:58.561254062 +0000 UTC m=+227.315342794" watchObservedRunningTime="2026-02-17 16:06:58.564409289 +0000 UTC m=+227.318498021" Feb 17 16:06:58 crc kubenswrapper[4672]: I0217 16:06:58.571730 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cdc776c58-stshg" Feb 17 16:06:59 crc kubenswrapper[4672]: I0217 16:06:59.005449 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cdc776c58-stshg"] Feb 17 16:06:59 crc kubenswrapper[4672]: I0217 16:06:59.554177 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cdc776c58-stshg" event={"ID":"0b3e324d-0dd9-4d76-a74e-afefdbb2d75a","Type":"ContainerStarted","Data":"bc87525cd2c075921b728fe181ff7a95e67e480ce60feae02bbaf0902cbb6f22"} Feb 17 16:06:59 crc kubenswrapper[4672]: I0217 16:06:59.554445 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cdc776c58-stshg" event={"ID":"0b3e324d-0dd9-4d76-a74e-afefdbb2d75a","Type":"ContainerStarted","Data":"c9fb1690481786ec940ec5b9ed31d6318bc9f8e1a04406b8b302804290505dfb"} Feb 17 16:06:59 crc kubenswrapper[4672]: I0217 16:06:59.575594 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6cdc776c58-stshg" podStartSLOduration=3.575579425 podStartE2EDuration="3.575579425s" podCreationTimestamp="2026-02-17 16:06:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:06:59.572492649 +0000 UTC m=+228.326581391" watchObservedRunningTime="2026-02-17 16:06:59.575579425 +0000 UTC m=+228.329668157" Feb 17 16:07:00 crc kubenswrapper[4672]: I0217 16:07:00.561193 4672 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6cdc776c58-stshg" Feb 17 16:07:00 crc kubenswrapper[4672]: I0217 16:07:00.571412 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6cdc776c58-stshg" Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.659717 4672 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.661564 4672 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.661797 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.661955 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb" gracePeriod=15 Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.662023 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3" gracePeriod=15 Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.662152 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec" 
gracePeriod=15 Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.662153 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917" gracePeriod=15 Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.662206 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018" gracePeriod=15 Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.663929 4672 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 16:07:09 crc kubenswrapper[4672]: E0217 16:07:09.664299 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.664320 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 17 16:07:09 crc kubenswrapper[4672]: E0217 16:07:09.664340 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.664352 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 17 16:07:09 crc kubenswrapper[4672]: E0217 16:07:09.664369 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.664382 4672 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 17 16:07:09 crc kubenswrapper[4672]: E0217 16:07:09.664399 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.664412 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 16:07:09 crc kubenswrapper[4672]: E0217 16:07:09.664434 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.664447 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 16:07:09 crc kubenswrapper[4672]: E0217 16:07:09.664460 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.664472 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 17 16:07:09 crc kubenswrapper[4672]: E0217 16:07:09.664489 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.664501 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.664691 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-insecure-readyz" Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.664718 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.664744 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.664761 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.664775 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.664790 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.848753 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.848806 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.848922 4672 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.849002 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.849035 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.849083 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.849290 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:07:09 crc 
kubenswrapper[4672]: I0217 16:07:09.849401 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.951577 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.951694 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.951773 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.951839 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.951922 4672 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.952029 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.952102 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.952223 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.952446 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.952631 4672 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.952729 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.952814 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.952903 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.952989 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.953071 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:07:09 crc kubenswrapper[4672]: I0217 16:07:09.953156 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 16:07:10 crc kubenswrapper[4672]: I0217 16:07:10.630249 4672 generic.go:334] "Generic (PLEG): container finished" podID="780d94df-fc74-4af2-9e51-eea226989b67" containerID="cded28b4fbd62f51c69307964652daa8a49b1200374c2a406fd64143d1ce57b1" exitCode=0 Feb 17 16:07:10 crc kubenswrapper[4672]: I0217 16:07:10.630368 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"780d94df-fc74-4af2-9e51-eea226989b67","Type":"ContainerDied","Data":"cded28b4fbd62f51c69307964652daa8a49b1200374c2a406fd64143d1ce57b1"} Feb 17 16:07:10 crc kubenswrapper[4672]: I0217 16:07:10.631459 4672 status_manager.go:851] "Failed to get status for pod" podUID="780d94df-fc74-4af2-9e51-eea226989b67" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Feb 17 16:07:10 crc kubenswrapper[4672]: I0217 16:07:10.634013 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 17 16:07:10 crc kubenswrapper[4672]: I0217 16:07:10.636119 4672 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 16:07:10 crc kubenswrapper[4672]: I0217 16:07:10.637390 4672 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3" exitCode=0 Feb 17 16:07:10 crc kubenswrapper[4672]: I0217 16:07:10.637407 4672 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018" exitCode=0 Feb 17 16:07:10 crc kubenswrapper[4672]: I0217 16:07:10.637414 4672 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917" exitCode=0 Feb 17 16:07:10 crc kubenswrapper[4672]: I0217 16:07:10.637441 4672 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec" exitCode=2 Feb 17 16:07:10 crc kubenswrapper[4672]: I0217 16:07:10.637469 4672 scope.go:117] "RemoveContainer" containerID="6de36d5b3807dac16c0d6ac4a8cabbfc5648199a30e8334205f4167114b198f8" Feb 17 16:07:11 crc kubenswrapper[4672]: I0217 16:07:11.650053 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 16:07:11 crc kubenswrapper[4672]: I0217 16:07:11.953406 4672 status_manager.go:851] "Failed to get status for pod" podUID="780d94df-fc74-4af2-9e51-eea226989b67" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 
16:07:12.117399 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.118271 4672 status_manager.go:851] "Failed to get status for pod" podUID="780d94df-fc74-4af2-9e51-eea226989b67" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" Feb 17 16:07:12 crc kubenswrapper[4672]: E0217 16:07:12.176990 4672 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" Feb 17 16:07:12 crc kubenswrapper[4672]: E0217 16:07:12.177664 4672 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" Feb 17 16:07:12 crc kubenswrapper[4672]: E0217 16:07:12.178116 4672 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" Feb 17 16:07:12 crc kubenswrapper[4672]: E0217 16:07:12.178652 4672 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" Feb 17 16:07:12 crc kubenswrapper[4672]: E0217 16:07:12.179156 4672 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" 
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.179218 4672 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Feb 17 16:07:12 crc kubenswrapper[4672]: E0217 16:07:12.179678 4672 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="200ms"
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.284941 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/780d94df-fc74-4af2-9e51-eea226989b67-kube-api-access\") pod \"780d94df-fc74-4af2-9e51-eea226989b67\" (UID: \"780d94df-fc74-4af2-9e51-eea226989b67\") "
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.284998 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/780d94df-fc74-4af2-9e51-eea226989b67-var-lock\") pod \"780d94df-fc74-4af2-9e51-eea226989b67\" (UID: \"780d94df-fc74-4af2-9e51-eea226989b67\") "
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.285020 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/780d94df-fc74-4af2-9e51-eea226989b67-kubelet-dir\") pod \"780d94df-fc74-4af2-9e51-eea226989b67\" (UID: \"780d94df-fc74-4af2-9e51-eea226989b67\") "
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.285148 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/780d94df-fc74-4af2-9e51-eea226989b67-var-lock" (OuterVolumeSpecName: "var-lock") pod "780d94df-fc74-4af2-9e51-eea226989b67" (UID: "780d94df-fc74-4af2-9e51-eea226989b67"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.285215 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/780d94df-fc74-4af2-9e51-eea226989b67-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "780d94df-fc74-4af2-9e51-eea226989b67" (UID: "780d94df-fc74-4af2-9e51-eea226989b67"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.285397 4672 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/780d94df-fc74-4af2-9e51-eea226989b67-var-lock\") on node \"crc\" DevicePath \"\""
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.285423 4672 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/780d94df-fc74-4af2-9e51-eea226989b67-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.293735 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/780d94df-fc74-4af2-9e51-eea226989b67-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "780d94df-fc74-4af2-9e51-eea226989b67" (UID: "780d94df-fc74-4af2-9e51-eea226989b67"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:07:12 crc kubenswrapper[4672]: E0217 16:07:12.381822 4672 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="400ms"
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.386893 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/780d94df-fc74-4af2-9e51-eea226989b67-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.550644 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.552105 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.552999 4672 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.553577 4672 status_manager.go:851] "Failed to get status for pod" podUID="780d94df-fc74-4af2-9e51-eea226989b67" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.664893 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"780d94df-fc74-4af2-9e51-eea226989b67","Type":"ContainerDied","Data":"b9108b5b86df5573457ab3e0da8f8d66312212dea8226306862c05fa3e443302"}
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.664961 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9108b5b86df5573457ab3e0da8f8d66312212dea8226306862c05fa3e443302"
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.664916 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.671047 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.672192 4672 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb" exitCode=0
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.672292 4672 scope.go:117] "RemoveContainer" containerID="552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3"
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.672434 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.688610 4672 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.689015 4672 status_manager.go:851] "Failed to get status for pod" podUID="780d94df-fc74-4af2-9e51-eea226989b67" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.691719 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.691762 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.691786 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.692193 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.692229 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.692249 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.702916 4672 scope.go:117] "RemoveContainer" containerID="28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018"
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.730734 4672 scope.go:117] "RemoveContainer" containerID="6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917"
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.760176 4672 scope.go:117] "RemoveContainer" containerID="5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec"
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.782378 4672 scope.go:117] "RemoveContainer" containerID="6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb"
Feb 17 16:07:12 crc kubenswrapper[4672]: E0217 16:07:12.783127 4672 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="800ms"
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.792902 4672 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\""
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.792937 4672 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.792948 4672 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\""
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.802750 4672 scope.go:117] "RemoveContainer" containerID="5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea"
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.822116 4672 scope.go:117] "RemoveContainer" containerID="552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3"
Feb 17 16:07:12 crc kubenswrapper[4672]: E0217 16:07:12.823275 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\": container with ID starting with 552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3 not found: ID does not exist" containerID="552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3"
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.823332 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3"} err="failed to get container status \"552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\": rpc error: code = NotFound desc = could not find container \"552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3\": container with ID starting with 552c916fd9fd9d7d782b6c5903befb38b4b6438d0cb37eb923f285b2b853e3d3 not found: ID does not exist"
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.823366 4672 scope.go:117] "RemoveContainer" containerID="28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018"
Feb 17 16:07:12 crc kubenswrapper[4672]: E0217 16:07:12.825395 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\": container with ID starting with 28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018 not found: ID does not exist" containerID="28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018"
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.825443 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018"} err="failed to get container status \"28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\": rpc error: code = NotFound desc = could not find container \"28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018\": container with ID starting with 28a3406bc885e3deae81b16a3c8c3ef0956d318402ff875229739d9017cd6018 not found: ID does not exist"
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.825477 4672 scope.go:117] "RemoveContainer" containerID="6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917"
Feb 17 16:07:12 crc kubenswrapper[4672]: E0217 16:07:12.826085 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\": container with ID starting with 6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917 not found: ID does not exist" containerID="6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917"
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.826140 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917"} err="failed to get container status \"6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\": rpc error: code = NotFound desc = could not find container \"6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917\": container with ID starting with 6994bebb153cae3ae7824a7cb75e457471e3d3f2b3a158b9f195b98f51462917 not found: ID does not exist"
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.826175 4672 scope.go:117] "RemoveContainer" containerID="5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec"
Feb 17 16:07:12 crc kubenswrapper[4672]: E0217 16:07:12.826872 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\": container with ID starting with 5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec not found: ID does not exist" containerID="5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec"
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.826997 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec"} err="failed to get container status \"5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\": rpc error: code = NotFound desc = could not find container \"5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec\": container with ID starting with 5a402ed9035819b014f7b216c0e35c5d80fc782f8b4914862ad208e36089c3ec not found: ID does not exist"
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.827083 4672 scope.go:117] "RemoveContainer" containerID="6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb"
Feb 17 16:07:12 crc kubenswrapper[4672]: E0217 16:07:12.827697 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\": container with ID starting with 6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb not found: ID does not exist" containerID="6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb"
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.827774 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb"} err="failed to get container status \"6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\": rpc error: code = NotFound desc = could not find container \"6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb\": container with ID starting with 6a0fd41fcdcea91e547dd354457f7acd62dbd836a5399266b9aae522b43a80fb not found: ID does not exist"
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.827852 4672 scope.go:117] "RemoveContainer" containerID="5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea"
Feb 17 16:07:12 crc kubenswrapper[4672]: E0217 16:07:12.829429 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\": container with ID starting with 5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea not found: ID does not exist" containerID="5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea"
Feb 17 16:07:12 crc kubenswrapper[4672]: I0217 16:07:12.829469 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea"} err="failed to get container status \"5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\": rpc error: code = NotFound desc = could not find container \"5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea\": container with ID starting with 5c13b5a2b7e689f36ea8d902d652e3921fbf18e689eafe3cc323cfc8dc080bea not found: ID does not exist"
Feb 17 16:07:13 crc kubenswrapper[4672]: I0217 16:07:13.032358 4672 status_manager.go:851] "Failed to get status for pod" podUID="780d94df-fc74-4af2-9e51-eea226989b67" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Feb 17 16:07:13 crc kubenswrapper[4672]: I0217 16:07:13.034232 4672 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Feb 17 16:07:13 crc kubenswrapper[4672]: E0217 16:07:13.072379 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:07:13Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:07:13Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:07:13Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T16:07:13Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused"
Feb 17 16:07:13 crc kubenswrapper[4672]: E0217 16:07:13.073349 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused"
Feb 17 16:07:13 crc kubenswrapper[4672]: E0217 16:07:13.074005 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused"
Feb 17 16:07:13 crc kubenswrapper[4672]: E0217 16:07:13.074897 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused"
Feb 17 16:07:13 crc kubenswrapper[4672]: E0217 16:07:13.075453 4672 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused"
Feb 17 16:07:13 crc kubenswrapper[4672]: E0217 16:07:13.075472 4672 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 17 16:07:13 crc kubenswrapper[4672]: E0217 16:07:13.584732 4672 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="1.6s"
Feb 17 16:07:13 crc kubenswrapper[4672]: I0217 16:07:13.958401 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Feb 17 16:07:14 crc kubenswrapper[4672]: E0217 16:07:14.709698 4672 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.46:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 17 16:07:14 crc kubenswrapper[4672]: I0217 16:07:14.710357 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 17 16:07:14 crc kubenswrapper[4672]: E0217 16:07:14.758306 4672 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.46:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18951460e0989a86 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 16:07:14.757671558 +0000 UTC m=+243.511760330,LastTimestamp:2026-02-17 16:07:14.757671558 +0000 UTC m=+243.511760330,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 17 16:07:15 crc kubenswrapper[4672]: E0217 16:07:15.186364 4672 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="3.2s"
Feb 17 16:07:15 crc kubenswrapper[4672]: I0217 16:07:15.694876 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"51332d541c79f67b5ba39409c7b49be121c9853ff78e772152e970ed10c697d1"}
Feb 17 16:07:15 crc kubenswrapper[4672]: I0217 16:07:15.694945 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"81728c04bf12eaca34e57b842be9939a88f5f5ff0a1838f97fd68a9df274eb57"}
Feb 17 16:07:15 crc kubenswrapper[4672]: E0217 16:07:15.695952 4672 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.46:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 17 16:07:15 crc kubenswrapper[4672]: I0217 16:07:15.696088 4672 status_manager.go:851] "Failed to get status for pod" podUID="780d94df-fc74-4af2-9e51-eea226989b67" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Feb 17 16:07:18 crc kubenswrapper[4672]: E0217 16:07:18.387380 4672 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.46:6443: connect: connection refused" interval="6.4s"
Feb 17 16:07:19 crc kubenswrapper[4672]: E0217 16:07:19.654504 4672 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.46:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18951460e0989a86 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 16:07:14.757671558 +0000 UTC m=+243.511760330,LastTimestamp:2026-02-17 16:07:14.757671558 +0000 UTC m=+243.511760330,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 17 16:07:21 crc kubenswrapper[4672]: I0217 16:07:21.944696 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 16:07:21 crc kubenswrapper[4672]: I0217 16:07:21.949926 4672 status_manager.go:851] "Failed to get status for pod" podUID="780d94df-fc74-4af2-9e51-eea226989b67" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Feb 17 16:07:21 crc kubenswrapper[4672]: I0217 16:07:21.951971 4672 status_manager.go:851] "Failed to get status for pod" podUID="780d94df-fc74-4af2-9e51-eea226989b67" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Feb 17 16:07:21 crc kubenswrapper[4672]: I0217 16:07:21.973373 4672 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d539581-cd17-46b9-8668-271c89565030"
Feb 17 16:07:21 crc kubenswrapper[4672]: I0217 16:07:21.973451 4672 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d539581-cd17-46b9-8668-271c89565030"
Feb 17 16:07:21 crc kubenswrapper[4672]: E0217 16:07:21.974390 4672 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 16:07:21 crc kubenswrapper[4672]: I0217 16:07:21.975227 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 16:07:22 crc kubenswrapper[4672]: W0217 16:07:22.017138 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-31206f3421542ba48e0933e18ab2651cf4b74679b78c0e07a93f6b8649639720 WatchSource:0}: Error finding container 31206f3421542ba48e0933e18ab2651cf4b74679b78c0e07a93f6b8649639720: Status 404 returned error can't find the container with id 31206f3421542ba48e0933e18ab2651cf4b74679b78c0e07a93f6b8649639720
Feb 17 16:07:22 crc kubenswrapper[4672]: I0217 16:07:22.747048 4672 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="d0f3c0a40fa1e65777085d488075e6767ad36ba13a51545ef5dd9254bd4c8adf" exitCode=0
Feb 17 16:07:22 crc kubenswrapper[4672]: I0217 16:07:22.747122 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"d0f3c0a40fa1e65777085d488075e6767ad36ba13a51545ef5dd9254bd4c8adf"}
Feb 17 16:07:22 crc kubenswrapper[4672]: I0217 16:07:22.747469 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"31206f3421542ba48e0933e18ab2651cf4b74679b78c0e07a93f6b8649639720"}
Feb 17 16:07:22 crc kubenswrapper[4672]: I0217 16:07:22.747788 4672 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d539581-cd17-46b9-8668-271c89565030"
Feb 17 16:07:22 crc kubenswrapper[4672]: I0217 16:07:22.747804 4672 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d539581-cd17-46b9-8668-271c89565030"
Feb 17 16:07:22 crc kubenswrapper[4672]: E0217 16:07:22.748316 4672 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.46:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 16:07:22 crc kubenswrapper[4672]: I0217 16:07:22.748534 4672 status_manager.go:851] "Failed to get status for pod" podUID="780d94df-fc74-4af2-9e51-eea226989b67" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Feb 17 16:07:22 crc kubenswrapper[4672]: I0217 16:07:22.754064 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Feb 17 16:07:22 crc kubenswrapper[4672]: I0217 16:07:22.754146 4672 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4" exitCode=1
Feb 17 16:07:22 crc kubenswrapper[4672]: I0217 16:07:22.754191 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4"}
Feb 17 16:07:22 crc kubenswrapper[4672]: I0217 16:07:22.754934 4672 scope.go:117] "RemoveContainer" containerID="f292f2e73fb7fdb227dfca137324c9da4cf1fafc530f0ea9a5f3058fff16c0b4"
Feb 17 16:07:22 crc kubenswrapper[4672]: I0217 16:07:22.755371 4672 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Feb 17 16:07:22 crc kubenswrapper[4672]: I0217 16:07:22.755994 4672 status_manager.go:851] "Failed to get status for pod" podUID="780d94df-fc74-4af2-9e51-eea226989b67" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.46:6443: connect: connection refused"
Feb 17 16:07:23 crc kubenswrapper[4672]: I0217 16:07:23.482743 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 16:07:23 crc kubenswrapper[4672]: I0217 16:07:23.766279 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9c59f3190304776ee928e7d5fcf51680bda355eb9de516b99b6580f5ac64dbad"}
Feb 17 16:07:23 crc kubenswrapper[4672]: I0217 16:07:23.766352 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3d5311b0f1a0507cd237181932de8f0ff6032ed97c8bd245134c184ff4461aaf"}
Feb 17 16:07:23 crc kubenswrapper[4672]: I0217 16:07:23.766383 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c04c3b3ccf417dc1458e976e3f09eb681f95a1343fbb605d38c8826e70c0b661"}
Feb 17 16:07:23 crc kubenswrapper[4672]: I0217 16:07:23.770677 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Feb 17 16:07:23 crc kubenswrapper[4672]: I0217 16:07:23.770760 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a6eb4538469ff97fa87d1ef0d320df3b00d17e289a21f99b082dcb8734e9c13e"}
Feb 17 16:07:24 crc kubenswrapper[4672]: I0217 16:07:24.779744 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cbcf737cef0862a69856a3bfb97bafc61eb2294ac94a1d6e182a0d26d09d5aee"}
Feb 17 16:07:24 crc kubenswrapper[4672]: I0217 16:07:24.780090 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9b41e6256d3ad7cf3f7d9e7d5302a9697195cffdee391b00546747c086c55903"}
Feb 17 16:07:24 crc kubenswrapper[4672]: I0217 16:07:24.780013 4672 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d539581-cd17-46b9-8668-271c89565030"
Feb 17 16:07:24 crc kubenswrapper[4672]: I0217 16:07:24.780114 4672 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d539581-cd17-46b9-8668-271c89565030"
Feb 17 16:07:26 crc kubenswrapper[4672]: I0217 16:07:26.009678 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 16:07:26 crc kubenswrapper[4672]: I0217 16:07:26.014051 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 16:07:26 crc kubenswrapper[4672]: I0217 16:07:26.794864 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 16:07:26 crc kubenswrapper[4672]: I0217 16:07:26.976256 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 16:07:26 crc kubenswrapper[4672]: I0217 16:07:26.976335 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 16:07:26 crc kubenswrapper[4672]: I0217 16:07:26.986005 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 16:07:29 crc kubenswrapper[4672]: I0217 16:07:29.873327 4672 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 16:07:30 crc kubenswrapper[4672]: I0217 16:07:30.820360 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 16:07:30 crc kubenswrapper[4672]: I0217 16:07:30.820359 4672 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d539581-cd17-46b9-8668-271c89565030"
Feb 17 16:07:30 crc kubenswrapper[4672]: I0217 16:07:30.820803 4672 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d539581-cd17-46b9-8668-271c89565030"
Feb 17 16:07:30 crc kubenswrapper[4672]: I0217 16:07:30.827741 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness"
status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:07:31 crc kubenswrapper[4672]: I0217 16:07:31.828121 4672 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d539581-cd17-46b9-8668-271c89565030" Feb 17 16:07:31 crc kubenswrapper[4672]: I0217 16:07:31.828169 4672 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d539581-cd17-46b9-8668-271c89565030" Feb 17 16:07:31 crc kubenswrapper[4672]: I0217 16:07:31.983677 4672 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="ca90c706-6852-4cd2-adbf-eafc72fd0930" Feb 17 16:07:32 crc kubenswrapper[4672]: I0217 16:07:32.835178 4672 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d539581-cd17-46b9-8668-271c89565030" Feb 17 16:07:32 crc kubenswrapper[4672]: I0217 16:07:32.835223 4672 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d539581-cd17-46b9-8668-271c89565030" Feb 17 16:07:32 crc kubenswrapper[4672]: I0217 16:07:32.838842 4672 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="ca90c706-6852-4cd2-adbf-eafc72fd0930" Feb 17 16:07:33 crc kubenswrapper[4672]: I0217 16:07:33.490862 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 16:07:39 crc kubenswrapper[4672]: I0217 16:07:39.778790 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 17 16:07:40 crc kubenswrapper[4672]: I0217 16:07:40.656677 4672 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 17 16:07:40 crc kubenswrapper[4672]: I0217 16:07:40.860675 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 17 16:07:40 crc kubenswrapper[4672]: I0217 16:07:40.895752 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 17 16:07:40 crc kubenswrapper[4672]: I0217 16:07:40.913871 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 17 16:07:41 crc kubenswrapper[4672]: I0217 16:07:41.207616 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 17 16:07:41 crc kubenswrapper[4672]: I0217 16:07:41.446255 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 17 16:07:41 crc kubenswrapper[4672]: I0217 16:07:41.625493 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 17 16:07:41 crc kubenswrapper[4672]: I0217 16:07:41.708891 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 17 16:07:41 crc kubenswrapper[4672]: I0217 16:07:41.732186 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 17 16:07:41 crc kubenswrapper[4672]: I0217 16:07:41.792238 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 17 16:07:41 crc kubenswrapper[4672]: I0217 16:07:41.924169 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 17 16:07:42 crc 
kubenswrapper[4672]: I0217 16:07:42.031618 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 17 16:07:42 crc kubenswrapper[4672]: I0217 16:07:42.067524 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 17 16:07:42 crc kubenswrapper[4672]: I0217 16:07:42.115372 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 17 16:07:42 crc kubenswrapper[4672]: I0217 16:07:42.194939 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 17 16:07:42 crc kubenswrapper[4672]: I0217 16:07:42.211633 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 17 16:07:42 crc kubenswrapper[4672]: I0217 16:07:42.272290 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 17 16:07:42 crc kubenswrapper[4672]: I0217 16:07:42.368179 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 17 16:07:42 crc kubenswrapper[4672]: I0217 16:07:42.617482 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 17 16:07:43 crc kubenswrapper[4672]: I0217 16:07:43.207157 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 17 16:07:43 crc kubenswrapper[4672]: I0217 16:07:43.222443 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 17 16:07:43 crc kubenswrapper[4672]: I0217 16:07:43.253566 4672 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-network-operator"/"metrics-tls" Feb 17 16:07:43 crc kubenswrapper[4672]: I0217 16:07:43.255605 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 17 16:07:43 crc kubenswrapper[4672]: I0217 16:07:43.279240 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 17 16:07:43 crc kubenswrapper[4672]: I0217 16:07:43.291398 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 17 16:07:43 crc kubenswrapper[4672]: I0217 16:07:43.353158 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 17 16:07:43 crc kubenswrapper[4672]: I0217 16:07:43.363198 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 17 16:07:43 crc kubenswrapper[4672]: I0217 16:07:43.424375 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 17 16:07:43 crc kubenswrapper[4672]: I0217 16:07:43.520746 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 17 16:07:43 crc kubenswrapper[4672]: I0217 16:07:43.529812 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 17 16:07:43 crc kubenswrapper[4672]: I0217 16:07:43.565062 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 17 16:07:43 crc kubenswrapper[4672]: I0217 16:07:43.610714 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 17 16:07:43 crc 
kubenswrapper[4672]: I0217 16:07:43.761569 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 17 16:07:43 crc kubenswrapper[4672]: I0217 16:07:43.815303 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 17 16:07:43 crc kubenswrapper[4672]: I0217 16:07:43.945581 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 17 16:07:43 crc kubenswrapper[4672]: I0217 16:07:43.946340 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 17 16:07:43 crc kubenswrapper[4672]: I0217 16:07:43.968426 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 17 16:07:44 crc kubenswrapper[4672]: I0217 16:07:44.018767 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 17 16:07:44 crc kubenswrapper[4672]: I0217 16:07:44.092035 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 17 16:07:44 crc kubenswrapper[4672]: I0217 16:07:44.301586 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 17 16:07:44 crc kubenswrapper[4672]: I0217 16:07:44.334283 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 17 16:07:44 crc kubenswrapper[4672]: I0217 16:07:44.430295 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 17 16:07:44 crc kubenswrapper[4672]: I0217 16:07:44.435578 4672 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 17 16:07:44 crc kubenswrapper[4672]: I0217 16:07:44.490784 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 17 16:07:44 crc kubenswrapper[4672]: I0217 16:07:44.563954 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 17 16:07:44 crc kubenswrapper[4672]: I0217 16:07:44.596874 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 17 16:07:44 crc kubenswrapper[4672]: I0217 16:07:44.598686 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 17 16:07:44 crc kubenswrapper[4672]: I0217 16:07:44.682917 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 17 16:07:44 crc kubenswrapper[4672]: I0217 16:07:44.686316 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 17 16:07:44 crc kubenswrapper[4672]: I0217 16:07:44.803863 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 17 16:07:44 crc kubenswrapper[4672]: I0217 16:07:44.841806 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 17 16:07:44 crc kubenswrapper[4672]: I0217 16:07:44.897924 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 17 16:07:44 crc kubenswrapper[4672]: I0217 16:07:44.936973 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 17 16:07:44 crc kubenswrapper[4672]: I0217 16:07:44.939162 
4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 17 16:07:44 crc kubenswrapper[4672]: I0217 16:07:44.962589 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 17 16:07:45 crc kubenswrapper[4672]: I0217 16:07:45.316625 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 17 16:07:45 crc kubenswrapper[4672]: I0217 16:07:45.322585 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 17 16:07:45 crc kubenswrapper[4672]: I0217 16:07:45.394637 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 17 16:07:45 crc kubenswrapper[4672]: I0217 16:07:45.410197 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 17 16:07:45 crc kubenswrapper[4672]: I0217 16:07:45.440166 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 17 16:07:45 crc kubenswrapper[4672]: I0217 16:07:45.488039 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 17 16:07:45 crc kubenswrapper[4672]: I0217 16:07:45.572079 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 17 16:07:45 crc kubenswrapper[4672]: I0217 16:07:45.627674 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 17 16:07:45 crc kubenswrapper[4672]: I0217 16:07:45.690826 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 17 16:07:45 crc kubenswrapper[4672]: I0217 16:07:45.696576 
4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 17 16:07:45 crc kubenswrapper[4672]: I0217 16:07:45.723085 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 17 16:07:45 crc kubenswrapper[4672]: I0217 16:07:45.753183 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 17 16:07:45 crc kubenswrapper[4672]: I0217 16:07:45.814675 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 17 16:07:45 crc kubenswrapper[4672]: I0217 16:07:45.834907 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 17 16:07:45 crc kubenswrapper[4672]: I0217 16:07:45.859007 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 17 16:07:45 crc kubenswrapper[4672]: I0217 16:07:45.903290 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 17 16:07:45 crc kubenswrapper[4672]: I0217 16:07:45.955840 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 17 16:07:45 crc kubenswrapper[4672]: I0217 16:07:45.984668 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 17 16:07:46 crc kubenswrapper[4672]: I0217 16:07:46.037258 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 17 16:07:46 crc kubenswrapper[4672]: I0217 16:07:46.064594 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 17 16:07:46 
crc kubenswrapper[4672]: I0217 16:07:46.301042 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 17 16:07:46 crc kubenswrapper[4672]: I0217 16:07:46.383364 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 17 16:07:46 crc kubenswrapper[4672]: I0217 16:07:46.420358 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 17 16:07:46 crc kubenswrapper[4672]: I0217 16:07:46.567771 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 17 16:07:46 crc kubenswrapper[4672]: I0217 16:07:46.617058 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 17 16:07:46 crc kubenswrapper[4672]: I0217 16:07:46.625001 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 17 16:07:46 crc kubenswrapper[4672]: I0217 16:07:46.626222 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 17 16:07:46 crc kubenswrapper[4672]: I0217 16:07:46.721989 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 17 16:07:46 crc kubenswrapper[4672]: I0217 16:07:46.777026 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 17 16:07:46 crc kubenswrapper[4672]: I0217 16:07:46.808961 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 16:07:46 crc kubenswrapper[4672]: I0217 16:07:46.841717 4672 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"kube-root-ca.crt" Feb 17 16:07:46 crc kubenswrapper[4672]: I0217 16:07:46.968606 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 17 16:07:46 crc kubenswrapper[4672]: I0217 16:07:46.974706 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 17 16:07:47 crc kubenswrapper[4672]: I0217 16:07:47.009973 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 17 16:07:47 crc kubenswrapper[4672]: I0217 16:07:47.077817 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 17 16:07:47 crc kubenswrapper[4672]: I0217 16:07:47.160206 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 17 16:07:47 crc kubenswrapper[4672]: I0217 16:07:47.275839 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 17 16:07:47 crc kubenswrapper[4672]: I0217 16:07:47.431197 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 17 16:07:47 crc kubenswrapper[4672]: I0217 16:07:47.460292 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 17 16:07:47 crc kubenswrapper[4672]: I0217 16:07:47.493554 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 17 16:07:47 crc kubenswrapper[4672]: I0217 16:07:47.584275 4672 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 17 16:07:47 crc kubenswrapper[4672]: I0217 16:07:47.591952 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 16:07:47 crc kubenswrapper[4672]: I0217 16:07:47.691086 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 17 16:07:47 crc kubenswrapper[4672]: I0217 16:07:47.723334 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 17 16:07:47 crc kubenswrapper[4672]: I0217 16:07:47.769567 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 17 16:07:47 crc kubenswrapper[4672]: I0217 16:07:47.771473 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 17 16:07:47 crc kubenswrapper[4672]: I0217 16:07:47.883998 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 17 16:07:47 crc kubenswrapper[4672]: I0217 16:07:47.889619 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 16:07:48 crc kubenswrapper[4672]: I0217 16:07:48.006369 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 17 16:07:48 crc kubenswrapper[4672]: I0217 16:07:48.043093 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 17 16:07:48 crc kubenswrapper[4672]: I0217 16:07:48.053758 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 17 16:07:48 crc kubenswrapper[4672]: I0217 16:07:48.079308 4672 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 17 16:07:48 crc kubenswrapper[4672]: I0217 16:07:48.081199 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 17 16:07:48 crc kubenswrapper[4672]: I0217 16:07:48.149860 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 17 16:07:48 crc kubenswrapper[4672]: I0217 16:07:48.222141 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 17 16:07:48 crc kubenswrapper[4672]: I0217 16:07:48.258680 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 17 16:07:48 crc kubenswrapper[4672]: I0217 16:07:48.438728 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 17 16:07:48 crc kubenswrapper[4672]: I0217 16:07:48.479124 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 17 16:07:48 crc kubenswrapper[4672]: I0217 16:07:48.558795 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 17 16:07:48 crc kubenswrapper[4672]: I0217 16:07:48.585145 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 17 16:07:48 crc kubenswrapper[4672]: I0217 16:07:48.697629 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 17 16:07:48 crc kubenswrapper[4672]: I0217 16:07:48.720047 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 17 16:07:48 crc kubenswrapper[4672]: I0217 16:07:48.783767 4672 reflector.go:368] Caches 
populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 17 16:07:48 crc kubenswrapper[4672]: I0217 16:07:48.816072 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 17 16:07:48 crc kubenswrapper[4672]: I0217 16:07:48.823606 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 17 16:07:48 crc kubenswrapper[4672]: I0217 16:07:48.916686 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 17 16:07:48 crc kubenswrapper[4672]: I0217 16:07:48.980737 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 17 16:07:49 crc kubenswrapper[4672]: I0217 16:07:49.031336 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 17 16:07:49 crc kubenswrapper[4672]: I0217 16:07:49.084750 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 17 16:07:49 crc kubenswrapper[4672]: I0217 16:07:49.096371 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 17 16:07:49 crc kubenswrapper[4672]: I0217 16:07:49.105631 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 17 16:07:49 crc kubenswrapper[4672]: I0217 16:07:49.106448 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 17 16:07:49 crc kubenswrapper[4672]: I0217 16:07:49.120142 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 17 16:07:49 crc kubenswrapper[4672]: I0217 16:07:49.122253 4672 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 17 16:07:49 crc kubenswrapper[4672]: I0217 16:07:49.199433 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 17 16:07:49 crc kubenswrapper[4672]: I0217 16:07:49.201200 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 17 16:07:49 crc kubenswrapper[4672]: I0217 16:07:49.239008 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 17 16:07:49 crc kubenswrapper[4672]: I0217 16:07:49.269260 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 17 16:07:49 crc kubenswrapper[4672]: I0217 16:07:49.287892 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 17 16:07:49 crc kubenswrapper[4672]: I0217 16:07:49.351241 4672 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 17 16:07:49 crc kubenswrapper[4672]: I0217 16:07:49.355442 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 16:07:49 crc kubenswrapper[4672]: I0217 16:07:49.355500 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 16:07:49 crc kubenswrapper[4672]: I0217 16:07:49.363317 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 16:07:49 crc kubenswrapper[4672]: I0217 16:07:49.384680 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podStartSLOduration=20.384665479 podStartE2EDuration="20.384665479s" podCreationTimestamp="2026-02-17 16:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:07:49.383554279 +0000 UTC m=+278.137643091" watchObservedRunningTime="2026-02-17 16:07:49.384665479 +0000 UTC m=+278.138754211" Feb 17 16:07:49 crc kubenswrapper[4672]: I0217 16:07:49.419366 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 17 16:07:49 crc kubenswrapper[4672]: I0217 16:07:49.485056 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 17 16:07:49 crc kubenswrapper[4672]: I0217 16:07:49.504051 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 17 16:07:49 crc kubenswrapper[4672]: I0217 16:07:49.528572 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 17 16:07:49 crc kubenswrapper[4672]: I0217 16:07:49.618567 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 17 16:07:49 crc kubenswrapper[4672]: I0217 16:07:49.633955 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 17 16:07:49 crc kubenswrapper[4672]: I0217 16:07:49.641161 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 17 16:07:49 crc kubenswrapper[4672]: I0217 16:07:49.647006 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 17 16:07:49 crc kubenswrapper[4672]: I0217 16:07:49.699088 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 17 
16:07:49 crc kubenswrapper[4672]: I0217 16:07:49.705955 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 17 16:07:49 crc kubenswrapper[4672]: I0217 16:07:49.736359 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 17 16:07:49 crc kubenswrapper[4672]: I0217 16:07:49.773872 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 17 16:07:49 crc kubenswrapper[4672]: I0217 16:07:49.844933 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 17 16:07:49 crc kubenswrapper[4672]: I0217 16:07:49.859385 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 17 16:07:49 crc kubenswrapper[4672]: I0217 16:07:49.900755 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 17 16:07:49 crc kubenswrapper[4672]: I0217 16:07:49.925208 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 17 16:07:49 crc kubenswrapper[4672]: I0217 16:07:49.957473 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 17 16:07:50 crc kubenswrapper[4672]: I0217 16:07:50.006795 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 17 16:07:50 crc kubenswrapper[4672]: I0217 16:07:50.028810 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 17 16:07:50 crc kubenswrapper[4672]: I0217 16:07:50.071266 4672 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 17 16:07:50 crc kubenswrapper[4672]: I0217 16:07:50.072481 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 17 16:07:50 crc kubenswrapper[4672]: I0217 16:07:50.089192 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 17 16:07:50 crc kubenswrapper[4672]: I0217 16:07:50.222301 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 17 16:07:50 crc kubenswrapper[4672]: I0217 16:07:50.242889 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 17 16:07:50 crc kubenswrapper[4672]: I0217 16:07:50.259561 4672 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 17 16:07:50 crc kubenswrapper[4672]: I0217 16:07:50.294914 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 17 16:07:50 crc kubenswrapper[4672]: I0217 16:07:50.338893 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 17 16:07:50 crc kubenswrapper[4672]: I0217 16:07:50.358256 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 17 16:07:50 crc kubenswrapper[4672]: I0217 16:07:50.406790 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 17 16:07:50 crc kubenswrapper[4672]: I0217 16:07:50.451173 4672 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 17 16:07:50 crc kubenswrapper[4672]: I0217 16:07:50.478688 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 17 16:07:50 crc kubenswrapper[4672]: I0217 16:07:50.520579 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 17 16:07:50 crc kubenswrapper[4672]: I0217 16:07:50.626263 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 17 16:07:50 crc kubenswrapper[4672]: I0217 16:07:50.676182 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 17 16:07:50 crc kubenswrapper[4672]: I0217 16:07:50.746417 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 17 16:07:50 crc kubenswrapper[4672]: I0217 16:07:50.818200 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 17 16:07:50 crc kubenswrapper[4672]: I0217 16:07:50.821895 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 17 16:07:50 crc kubenswrapper[4672]: I0217 16:07:50.878711 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 17 16:07:50 crc kubenswrapper[4672]: I0217 16:07:50.930754 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 17 16:07:50 crc kubenswrapper[4672]: I0217 16:07:50.991675 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 17 16:07:51 crc kubenswrapper[4672]: I0217 16:07:51.013656 4672 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 17 16:07:51 crc kubenswrapper[4672]: I0217 16:07:51.036699 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 17 16:07:51 crc kubenswrapper[4672]: I0217 16:07:51.147805 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 16:07:51 crc kubenswrapper[4672]: I0217 16:07:51.255447 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 17 16:07:51 crc kubenswrapper[4672]: I0217 16:07:51.407644 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 17 16:07:51 crc kubenswrapper[4672]: I0217 16:07:51.479091 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 16:07:51 crc kubenswrapper[4672]: I0217 16:07:51.550912 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 17 16:07:51 crc kubenswrapper[4672]: I0217 16:07:51.590330 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 17 16:07:51 crc kubenswrapper[4672]: I0217 16:07:51.635120 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 17 16:07:51 crc kubenswrapper[4672]: I0217 16:07:51.712834 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 17 16:07:51 crc kubenswrapper[4672]: I0217 16:07:51.729701 4672 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 17 16:07:51 crc kubenswrapper[4672]: I0217 16:07:51.741698 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 17 16:07:51 crc kubenswrapper[4672]: I0217 16:07:51.804786 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 17 16:07:51 crc kubenswrapper[4672]: I0217 16:07:51.865127 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 17 16:07:51 crc kubenswrapper[4672]: I0217 16:07:51.921469 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 17 16:07:52 crc kubenswrapper[4672]: I0217 16:07:52.133917 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 17 16:07:52 crc kubenswrapper[4672]: I0217 16:07:52.278693 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 17 16:07:52 crc kubenswrapper[4672]: I0217 16:07:52.292394 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 17 16:07:52 crc kubenswrapper[4672]: I0217 16:07:52.305631 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 17 16:07:52 crc kubenswrapper[4672]: I0217 16:07:52.461467 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 17 16:07:52 crc kubenswrapper[4672]: I0217 16:07:52.544972 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 17 16:07:52 crc kubenswrapper[4672]: I0217 16:07:52.553448 4672 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 17 16:07:52 crc kubenswrapper[4672]: I0217 16:07:52.566607 4672 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 17 16:07:52 crc kubenswrapper[4672]: I0217 16:07:52.607426 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 17 16:07:52 crc kubenswrapper[4672]: I0217 16:07:52.608884 4672 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 17 16:07:52 crc kubenswrapper[4672]: I0217 16:07:52.609259 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://51332d541c79f67b5ba39409c7b49be121c9853ff78e772152e970ed10c697d1" gracePeriod=5 Feb 17 16:07:52 crc kubenswrapper[4672]: I0217 16:07:52.722897 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 17 16:07:52 crc kubenswrapper[4672]: I0217 16:07:52.728641 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 17 16:07:52 crc kubenswrapper[4672]: I0217 16:07:52.740780 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 17 16:07:52 crc kubenswrapper[4672]: I0217 16:07:52.745593 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 17 16:07:52 crc kubenswrapper[4672]: I0217 16:07:52.844089 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 17 16:07:52 crc kubenswrapper[4672]: I0217 
16:07:52.918719 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 17 16:07:52 crc kubenswrapper[4672]: I0217 16:07:52.927981 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 17 16:07:52 crc kubenswrapper[4672]: I0217 16:07:52.934714 4672 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 17 16:07:53 crc kubenswrapper[4672]: I0217 16:07:53.127451 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 17 16:07:53 crc kubenswrapper[4672]: I0217 16:07:53.127936 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 17 16:07:53 crc kubenswrapper[4672]: I0217 16:07:53.187140 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 17 16:07:53 crc kubenswrapper[4672]: I0217 16:07:53.191535 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 16:07:53 crc kubenswrapper[4672]: I0217 16:07:53.232112 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 17 16:07:53 crc kubenswrapper[4672]: I0217 16:07:53.240726 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 17 16:07:53 crc kubenswrapper[4672]: I0217 16:07:53.458127 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 17 16:07:53 crc kubenswrapper[4672]: I0217 16:07:53.493031 4672 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 17 16:07:53 crc kubenswrapper[4672]: I0217 16:07:53.581170 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 17 16:07:53 crc kubenswrapper[4672]: I0217 16:07:53.587057 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 17 16:07:53 crc kubenswrapper[4672]: I0217 16:07:53.764921 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 17 16:07:53 crc kubenswrapper[4672]: I0217 16:07:53.873548 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 17 16:07:53 crc kubenswrapper[4672]: I0217 16:07:53.914430 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 17 16:07:53 crc kubenswrapper[4672]: I0217 16:07:53.960367 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 17 16:07:54 crc kubenswrapper[4672]: I0217 16:07:54.014469 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 17 16:07:54 crc kubenswrapper[4672]: I0217 16:07:54.068501 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 17 16:07:54 crc kubenswrapper[4672]: I0217 16:07:54.082428 4672 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 17 16:07:54 crc kubenswrapper[4672]: I0217 16:07:54.160575 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 17 16:07:54 crc kubenswrapper[4672]: I0217 16:07:54.272832 4672 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 17 16:07:54 crc kubenswrapper[4672]: I0217 16:07:54.336882 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 17 16:07:54 crc kubenswrapper[4672]: I0217 16:07:54.396919 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 17 16:07:54 crc kubenswrapper[4672]: I0217 16:07:54.559877 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 17 16:07:54 crc kubenswrapper[4672]: I0217 16:07:54.639591 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 17 16:07:54 crc kubenswrapper[4672]: I0217 16:07:54.680624 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 17 16:07:54 crc kubenswrapper[4672]: I0217 16:07:54.897993 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 17 16:07:54 crc kubenswrapper[4672]: I0217 16:07:54.950355 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 17 16:07:55 crc kubenswrapper[4672]: I0217 16:07:55.015149 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 17 16:07:55 crc kubenswrapper[4672]: I0217 16:07:55.127185 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 17 16:07:55 crc kubenswrapper[4672]: I0217 16:07:55.141315 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 
16:07:55 crc kubenswrapper[4672]: I0217 16:07:55.143971 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 17 16:07:55 crc kubenswrapper[4672]: I0217 16:07:55.213454 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 17 16:07:55 crc kubenswrapper[4672]: I0217 16:07:55.297813 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 17 16:07:55 crc kubenswrapper[4672]: I0217 16:07:55.370030 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 17 16:07:55 crc kubenswrapper[4672]: I0217 16:07:55.575393 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 17 16:07:56 crc kubenswrapper[4672]: I0217 16:07:56.010756 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 17 16:07:56 crc kubenswrapper[4672]: I0217 16:07:56.253056 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 17 16:07:56 crc kubenswrapper[4672]: I0217 16:07:56.275443 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 17 16:07:56 crc kubenswrapper[4672]: I0217 16:07:56.482345 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 17 16:07:56 crc kubenswrapper[4672]: I0217 16:07:56.496495 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 17 16:07:57 crc kubenswrapper[4672]: I0217 
16:07:57.074915 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 17 16:07:57 crc kubenswrapper[4672]: I0217 16:07:57.299255 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 17 16:07:58 crc kubenswrapper[4672]: I0217 16:07:58.020923 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 17 16:07:58 crc kubenswrapper[4672]: I0217 16:07:58.021008 4672 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="51332d541c79f67b5ba39409c7b49be121c9853ff78e772152e970ed10c697d1" exitCode=137 Feb 17 16:07:58 crc kubenswrapper[4672]: I0217 16:07:58.078846 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 17 16:07:58 crc kubenswrapper[4672]: I0217 16:07:58.205388 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 17 16:07:58 crc kubenswrapper[4672]: I0217 16:07:58.205544 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 16:07:58 crc kubenswrapper[4672]: I0217 16:07:58.318429 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 16:07:58 crc kubenswrapper[4672]: I0217 16:07:58.318781 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 16:07:58 crc kubenswrapper[4672]: I0217 16:07:58.318622 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:07:58 crc kubenswrapper[4672]: I0217 16:07:58.318882 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:07:58 crc kubenswrapper[4672]: I0217 16:07:58.318898 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 16:07:58 crc kubenswrapper[4672]: I0217 16:07:58.318991 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 16:07:58 crc kubenswrapper[4672]: I0217 16:07:58.319020 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 16:07:58 crc kubenswrapper[4672]: I0217 16:07:58.319126 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:07:58 crc kubenswrapper[4672]: I0217 16:07:58.319404 4672 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:58 crc kubenswrapper[4672]: I0217 16:07:58.319438 4672 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:58 crc kubenswrapper[4672]: I0217 16:07:58.319457 4672 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:58 crc kubenswrapper[4672]: I0217 16:07:58.319671 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:07:58 crc kubenswrapper[4672]: I0217 16:07:58.335384 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:07:58 crc kubenswrapper[4672]: I0217 16:07:58.420414 4672 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:58 crc kubenswrapper[4672]: I0217 16:07:58.420468 4672 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:58 crc kubenswrapper[4672]: I0217 16:07:58.620173 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 17 16:07:58 crc kubenswrapper[4672]: I0217 16:07:58.787825 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 17 16:07:59 crc kubenswrapper[4672]: I0217 16:07:59.030202 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 17 16:07:59 crc kubenswrapper[4672]: I0217 16:07:59.030281 4672 scope.go:117] "RemoveContainer" containerID="51332d541c79f67b5ba39409c7b49be121c9853ff78e772152e970ed10c697d1" Feb 17 16:07:59 crc kubenswrapper[4672]: I0217 16:07:59.030589 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 16:07:59 crc kubenswrapper[4672]: I0217 16:07:59.951228 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 17 16:08:10 crc kubenswrapper[4672]: I0217 16:08:10.095216 4672 generic.go:334] "Generic (PLEG): container finished" podID="2ed3c87a-d599-4e91-92ce-377ddef564da" containerID="994f5beba1593c7a76312740bff3f4e0fd815bb7e935f7bd0b28b9387dabdf02" exitCode=0 Feb 17 16:08:10 crc kubenswrapper[4672]: I0217 16:08:10.095291 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bk22j" event={"ID":"2ed3c87a-d599-4e91-92ce-377ddef564da","Type":"ContainerDied","Data":"994f5beba1593c7a76312740bff3f4e0fd815bb7e935f7bd0b28b9387dabdf02"} Feb 17 16:08:10 crc kubenswrapper[4672]: I0217 16:08:10.096475 4672 scope.go:117] "RemoveContainer" containerID="994f5beba1593c7a76312740bff3f4e0fd815bb7e935f7bd0b28b9387dabdf02" Feb 17 16:08:11 crc kubenswrapper[4672]: I0217 16:08:11.105480 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bk22j" event={"ID":"2ed3c87a-d599-4e91-92ce-377ddef564da","Type":"ContainerStarted","Data":"5c517c6300f36edca4bf3993e5bffd5e0255035b4e89533ff2f910b3db662b36"} Feb 17 16:08:11 crc kubenswrapper[4672]: I0217 16:08:11.106200 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-bk22j" Feb 17 16:08:11 crc kubenswrapper[4672]: I0217 16:08:11.109705 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-bk22j" Feb 17 16:08:11 crc kubenswrapper[4672]: I0217 16:08:11.729422 4672 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start 
using new credentials Feb 17 16:08:23 crc kubenswrapper[4672]: I0217 16:08:23.851802 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-rxbxr"] Feb 17 16:08:23 crc kubenswrapper[4672]: E0217 16:08:23.852225 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="780d94df-fc74-4af2-9e51-eea226989b67" containerName="installer" Feb 17 16:08:23 crc kubenswrapper[4672]: I0217 16:08:23.852238 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="780d94df-fc74-4af2-9e51-eea226989b67" containerName="installer" Feb 17 16:08:23 crc kubenswrapper[4672]: E0217 16:08:23.852255 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 17 16:08:23 crc kubenswrapper[4672]: I0217 16:08:23.852260 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 17 16:08:23 crc kubenswrapper[4672]: I0217 16:08:23.852359 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 17 16:08:23 crc kubenswrapper[4672]: I0217 16:08:23.852368 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="780d94df-fc74-4af2-9e51-eea226989b67" containerName="installer" Feb 17 16:08:23 crc kubenswrapper[4672]: I0217 16:08:23.852753 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-rxbxr" Feb 17 16:08:23 crc kubenswrapper[4672]: I0217 16:08:23.873978 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-rxbxr"] Feb 17 16:08:24 crc kubenswrapper[4672]: I0217 16:08:24.053702 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fec0040a-2190-42da-a2cf-98f00449cad5-registry-certificates\") pod \"image-registry-66df7c8f76-rxbxr\" (UID: \"fec0040a-2190-42da-a2cf-98f00449cad5\") " pod="openshift-image-registry/image-registry-66df7c8f76-rxbxr" Feb 17 16:08:24 crc kubenswrapper[4672]: I0217 16:08:24.053763 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fec0040a-2190-42da-a2cf-98f00449cad5-trusted-ca\") pod \"image-registry-66df7c8f76-rxbxr\" (UID: \"fec0040a-2190-42da-a2cf-98f00449cad5\") " pod="openshift-image-registry/image-registry-66df7c8f76-rxbxr" Feb 17 16:08:24 crc kubenswrapper[4672]: I0217 16:08:24.053914 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fec0040a-2190-42da-a2cf-98f00449cad5-installation-pull-secrets\") pod \"image-registry-66df7c8f76-rxbxr\" (UID: \"fec0040a-2190-42da-a2cf-98f00449cad5\") " pod="openshift-image-registry/image-registry-66df7c8f76-rxbxr" Feb 17 16:08:24 crc kubenswrapper[4672]: I0217 16:08:24.054028 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fec0040a-2190-42da-a2cf-98f00449cad5-bound-sa-token\") pod \"image-registry-66df7c8f76-rxbxr\" (UID: \"fec0040a-2190-42da-a2cf-98f00449cad5\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-rxbxr" Feb 17 16:08:24 crc kubenswrapper[4672]: I0217 16:08:24.054075 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-rxbxr\" (UID: \"fec0040a-2190-42da-a2cf-98f00449cad5\") " pod="openshift-image-registry/image-registry-66df7c8f76-rxbxr" Feb 17 16:08:24 crc kubenswrapper[4672]: I0217 16:08:24.054132 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmgtg\" (UniqueName: \"kubernetes.io/projected/fec0040a-2190-42da-a2cf-98f00449cad5-kube-api-access-wmgtg\") pod \"image-registry-66df7c8f76-rxbxr\" (UID: \"fec0040a-2190-42da-a2cf-98f00449cad5\") " pod="openshift-image-registry/image-registry-66df7c8f76-rxbxr" Feb 17 16:08:24 crc kubenswrapper[4672]: I0217 16:08:24.054156 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fec0040a-2190-42da-a2cf-98f00449cad5-ca-trust-extracted\") pod \"image-registry-66df7c8f76-rxbxr\" (UID: \"fec0040a-2190-42da-a2cf-98f00449cad5\") " pod="openshift-image-registry/image-registry-66df7c8f76-rxbxr" Feb 17 16:08:24 crc kubenswrapper[4672]: I0217 16:08:24.054171 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fec0040a-2190-42da-a2cf-98f00449cad5-registry-tls\") pod \"image-registry-66df7c8f76-rxbxr\" (UID: \"fec0040a-2190-42da-a2cf-98f00449cad5\") " pod="openshift-image-registry/image-registry-66df7c8f76-rxbxr" Feb 17 16:08:24 crc kubenswrapper[4672]: I0217 16:08:24.077644 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-rxbxr\" (UID: \"fec0040a-2190-42da-a2cf-98f00449cad5\") " pod="openshift-image-registry/image-registry-66df7c8f76-rxbxr" Feb 17 16:08:24 crc kubenswrapper[4672]: I0217 16:08:24.155175 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fec0040a-2190-42da-a2cf-98f00449cad5-registry-tls\") pod \"image-registry-66df7c8f76-rxbxr\" (UID: \"fec0040a-2190-42da-a2cf-98f00449cad5\") " pod="openshift-image-registry/image-registry-66df7c8f76-rxbxr" Feb 17 16:08:24 crc kubenswrapper[4672]: I0217 16:08:24.155267 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fec0040a-2190-42da-a2cf-98f00449cad5-registry-certificates\") pod \"image-registry-66df7c8f76-rxbxr\" (UID: \"fec0040a-2190-42da-a2cf-98f00449cad5\") " pod="openshift-image-registry/image-registry-66df7c8f76-rxbxr" Feb 17 16:08:24 crc kubenswrapper[4672]: I0217 16:08:24.155304 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fec0040a-2190-42da-a2cf-98f00449cad5-trusted-ca\") pod \"image-registry-66df7c8f76-rxbxr\" (UID: \"fec0040a-2190-42da-a2cf-98f00449cad5\") " pod="openshift-image-registry/image-registry-66df7c8f76-rxbxr" Feb 17 16:08:24 crc kubenswrapper[4672]: I0217 16:08:24.155320 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fec0040a-2190-42da-a2cf-98f00449cad5-installation-pull-secrets\") pod \"image-registry-66df7c8f76-rxbxr\" (UID: \"fec0040a-2190-42da-a2cf-98f00449cad5\") " pod="openshift-image-registry/image-registry-66df7c8f76-rxbxr" Feb 17 16:08:24 crc 
kubenswrapper[4672]: I0217 16:08:24.155345 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fec0040a-2190-42da-a2cf-98f00449cad5-bound-sa-token\") pod \"image-registry-66df7c8f76-rxbxr\" (UID: \"fec0040a-2190-42da-a2cf-98f00449cad5\") " pod="openshift-image-registry/image-registry-66df7c8f76-rxbxr" Feb 17 16:08:24 crc kubenswrapper[4672]: I0217 16:08:24.155393 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmgtg\" (UniqueName: \"kubernetes.io/projected/fec0040a-2190-42da-a2cf-98f00449cad5-kube-api-access-wmgtg\") pod \"image-registry-66df7c8f76-rxbxr\" (UID: \"fec0040a-2190-42da-a2cf-98f00449cad5\") " pod="openshift-image-registry/image-registry-66df7c8f76-rxbxr" Feb 17 16:08:24 crc kubenswrapper[4672]: I0217 16:08:24.155417 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fec0040a-2190-42da-a2cf-98f00449cad5-ca-trust-extracted\") pod \"image-registry-66df7c8f76-rxbxr\" (UID: \"fec0040a-2190-42da-a2cf-98f00449cad5\") " pod="openshift-image-registry/image-registry-66df7c8f76-rxbxr" Feb 17 16:08:24 crc kubenswrapper[4672]: I0217 16:08:24.156902 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fec0040a-2190-42da-a2cf-98f00449cad5-ca-trust-extracted\") pod \"image-registry-66df7c8f76-rxbxr\" (UID: \"fec0040a-2190-42da-a2cf-98f00449cad5\") " pod="openshift-image-registry/image-registry-66df7c8f76-rxbxr" Feb 17 16:08:24 crc kubenswrapper[4672]: I0217 16:08:24.157753 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fec0040a-2190-42da-a2cf-98f00449cad5-trusted-ca\") pod \"image-registry-66df7c8f76-rxbxr\" (UID: \"fec0040a-2190-42da-a2cf-98f00449cad5\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-rxbxr" Feb 17 16:08:24 crc kubenswrapper[4672]: I0217 16:08:24.157924 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fec0040a-2190-42da-a2cf-98f00449cad5-registry-certificates\") pod \"image-registry-66df7c8f76-rxbxr\" (UID: \"fec0040a-2190-42da-a2cf-98f00449cad5\") " pod="openshift-image-registry/image-registry-66df7c8f76-rxbxr" Feb 17 16:08:24 crc kubenswrapper[4672]: I0217 16:08:24.171935 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fec0040a-2190-42da-a2cf-98f00449cad5-installation-pull-secrets\") pod \"image-registry-66df7c8f76-rxbxr\" (UID: \"fec0040a-2190-42da-a2cf-98f00449cad5\") " pod="openshift-image-registry/image-registry-66df7c8f76-rxbxr" Feb 17 16:08:24 crc kubenswrapper[4672]: I0217 16:08:24.172439 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fec0040a-2190-42da-a2cf-98f00449cad5-registry-tls\") pod \"image-registry-66df7c8f76-rxbxr\" (UID: \"fec0040a-2190-42da-a2cf-98f00449cad5\") " pod="openshift-image-registry/image-registry-66df7c8f76-rxbxr" Feb 17 16:08:24 crc kubenswrapper[4672]: I0217 16:08:24.173153 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fec0040a-2190-42da-a2cf-98f00449cad5-bound-sa-token\") pod \"image-registry-66df7c8f76-rxbxr\" (UID: \"fec0040a-2190-42da-a2cf-98f00449cad5\") " pod="openshift-image-registry/image-registry-66df7c8f76-rxbxr" Feb 17 16:08:24 crc kubenswrapper[4672]: I0217 16:08:24.182043 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmgtg\" (UniqueName: \"kubernetes.io/projected/fec0040a-2190-42da-a2cf-98f00449cad5-kube-api-access-wmgtg\") pod 
\"image-registry-66df7c8f76-rxbxr\" (UID: \"fec0040a-2190-42da-a2cf-98f00449cad5\") " pod="openshift-image-registry/image-registry-66df7c8f76-rxbxr" Feb 17 16:08:24 crc kubenswrapper[4672]: I0217 16:08:24.467171 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-rxbxr" Feb 17 16:08:24 crc kubenswrapper[4672]: I0217 16:08:24.887448 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-rxbxr"] Feb 17 16:08:25 crc kubenswrapper[4672]: I0217 16:08:25.206012 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-rxbxr" event={"ID":"fec0040a-2190-42da-a2cf-98f00449cad5","Type":"ContainerStarted","Data":"52db0d91eac3839935578337624ecb39335a6ab3e0320b5bbbf2042937c5bd87"} Feb 17 16:08:25 crc kubenswrapper[4672]: I0217 16:08:25.206721 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-rxbxr" event={"ID":"fec0040a-2190-42da-a2cf-98f00449cad5","Type":"ContainerStarted","Data":"f1ad166652e48e1355c993a12649e203dd9b9a9465aa3fbbce5194e972ae7add"} Feb 17 16:08:25 crc kubenswrapper[4672]: I0217 16:08:25.206817 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-rxbxr" Feb 17 16:08:25 crc kubenswrapper[4672]: I0217 16:08:25.228982 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-rxbxr" podStartSLOduration=2.228957149 podStartE2EDuration="2.228957149s" podCreationTimestamp="2026-02-17 16:08:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:08:25.226671578 +0000 UTC m=+313.980760330" watchObservedRunningTime="2026-02-17 16:08:25.228957149 +0000 UTC m=+313.983045911" Feb 17 16:08:38 crc 
kubenswrapper[4672]: I0217 16:08:38.597427 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wvksq"] Feb 17 16:08:38 crc kubenswrapper[4672]: I0217 16:08:38.598322 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wvksq" podUID="708084b0-bae5-4cfc-ab45-cc5ca619f849" containerName="registry-server" containerID="cri-o://2d406caf07c8a3823ad5a2c63720af4c6685e036cb907f90d73608c637fcc924" gracePeriod=30 Feb 17 16:08:38 crc kubenswrapper[4672]: I0217 16:08:38.606234 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vxnc7"] Feb 17 16:08:38 crc kubenswrapper[4672]: I0217 16:08:38.606483 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vxnc7" podUID="db7fc0eb-2899-4c37-bf2e-30d02cbffb2c" containerName="registry-server" containerID="cri-o://58f57cd507ac7f181e0427802d1a565f71f0d0631613d6d531b76ba76120e44a" gracePeriod=30 Feb 17 16:08:38 crc kubenswrapper[4672]: I0217 16:08:38.622472 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bk22j"] Feb 17 16:08:38 crc kubenswrapper[4672]: I0217 16:08:38.624803 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-bk22j" podUID="2ed3c87a-d599-4e91-92ce-377ddef564da" containerName="marketplace-operator" containerID="cri-o://5c517c6300f36edca4bf3993e5bffd5e0255035b4e89533ff2f910b3db662b36" gracePeriod=30 Feb 17 16:08:38 crc kubenswrapper[4672]: I0217 16:08:38.641966 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l98cc"] Feb 17 16:08:38 crc kubenswrapper[4672]: I0217 16:08:38.642627 4672 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-l98cc" podUID="505bfe60-cd7c-4bd6-981a-c14076ef5387" containerName="registry-server" containerID="cri-o://a9533c4cdad3c1526df3d843142d6e6c5c83b9d67292d22e2a1bab0da3bb87cd" gracePeriod=30 Feb 17 16:08:38 crc kubenswrapper[4672]: I0217 16:08:38.650483 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nd8vd"] Feb 17 16:08:38 crc kubenswrapper[4672]: I0217 16:08:38.650745 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nd8vd" podUID="028c8d9b-9bd5-4cf5-9628-849e8b5aacaf" containerName="registry-server" containerID="cri-o://6bc7ef43a67eab305d8e0ff78868a68557dbcc5610a7c4c68bed609473ee87a5" gracePeriod=30 Feb 17 16:08:38 crc kubenswrapper[4672]: I0217 16:08:38.663708 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ctrp5"] Feb 17 16:08:38 crc kubenswrapper[4672]: I0217 16:08:38.664875 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ctrp5" Feb 17 16:08:38 crc kubenswrapper[4672]: I0217 16:08:38.677252 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6d8n\" (UniqueName: \"kubernetes.io/projected/c41c2d4d-2194-4562-97e0-69f36cf4007f-kube-api-access-h6d8n\") pod \"marketplace-operator-79b997595-ctrp5\" (UID: \"c41c2d4d-2194-4562-97e0-69f36cf4007f\") " pod="openshift-marketplace/marketplace-operator-79b997595-ctrp5" Feb 17 16:08:38 crc kubenswrapper[4672]: I0217 16:08:38.677319 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c41c2d4d-2194-4562-97e0-69f36cf4007f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ctrp5\" (UID: \"c41c2d4d-2194-4562-97e0-69f36cf4007f\") " pod="openshift-marketplace/marketplace-operator-79b997595-ctrp5" Feb 17 16:08:38 crc kubenswrapper[4672]: I0217 16:08:38.677353 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c41c2d4d-2194-4562-97e0-69f36cf4007f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ctrp5\" (UID: \"c41c2d4d-2194-4562-97e0-69f36cf4007f\") " pod="openshift-marketplace/marketplace-operator-79b997595-ctrp5" Feb 17 16:08:38 crc kubenswrapper[4672]: I0217 16:08:38.713734 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ctrp5"] Feb 17 16:08:38 crc kubenswrapper[4672]: I0217 16:08:38.778158 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c41c2d4d-2194-4562-97e0-69f36cf4007f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ctrp5\" (UID: 
\"c41c2d4d-2194-4562-97e0-69f36cf4007f\") " pod="openshift-marketplace/marketplace-operator-79b997595-ctrp5" Feb 17 16:08:38 crc kubenswrapper[4672]: I0217 16:08:38.778219 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6d8n\" (UniqueName: \"kubernetes.io/projected/c41c2d4d-2194-4562-97e0-69f36cf4007f-kube-api-access-h6d8n\") pod \"marketplace-operator-79b997595-ctrp5\" (UID: \"c41c2d4d-2194-4562-97e0-69f36cf4007f\") " pod="openshift-marketplace/marketplace-operator-79b997595-ctrp5" Feb 17 16:08:38 crc kubenswrapper[4672]: I0217 16:08:38.778265 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c41c2d4d-2194-4562-97e0-69f36cf4007f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ctrp5\" (UID: \"c41c2d4d-2194-4562-97e0-69f36cf4007f\") " pod="openshift-marketplace/marketplace-operator-79b997595-ctrp5" Feb 17 16:08:38 crc kubenswrapper[4672]: I0217 16:08:38.779757 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c41c2d4d-2194-4562-97e0-69f36cf4007f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ctrp5\" (UID: \"c41c2d4d-2194-4562-97e0-69f36cf4007f\") " pod="openshift-marketplace/marketplace-operator-79b997595-ctrp5" Feb 17 16:08:38 crc kubenswrapper[4672]: I0217 16:08:38.785004 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c41c2d4d-2194-4562-97e0-69f36cf4007f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ctrp5\" (UID: \"c41c2d4d-2194-4562-97e0-69f36cf4007f\") " pod="openshift-marketplace/marketplace-operator-79b997595-ctrp5" Feb 17 16:08:38 crc kubenswrapper[4672]: I0217 16:08:38.797672 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-h6d8n\" (UniqueName: \"kubernetes.io/projected/c41c2d4d-2194-4562-97e0-69f36cf4007f-kube-api-access-h6d8n\") pod \"marketplace-operator-79b997595-ctrp5\" (UID: \"c41c2d4d-2194-4562-97e0-69f36cf4007f\") " pod="openshift-marketplace/marketplace-operator-79b997595-ctrp5" Feb 17 16:08:38 crc kubenswrapper[4672]: I0217 16:08:38.992118 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ctrp5" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.070468 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vxnc7" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.180973 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l98cc" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.181194 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db7fc0eb-2899-4c37-bf2e-30d02cbffb2c-utilities\") pod \"db7fc0eb-2899-4c37-bf2e-30d02cbffb2c\" (UID: \"db7fc0eb-2899-4c37-bf2e-30d02cbffb2c\") " Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.181341 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bjxk\" (UniqueName: \"kubernetes.io/projected/db7fc0eb-2899-4c37-bf2e-30d02cbffb2c-kube-api-access-5bjxk\") pod \"db7fc0eb-2899-4c37-bf2e-30d02cbffb2c\" (UID: \"db7fc0eb-2899-4c37-bf2e-30d02cbffb2c\") " Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.181462 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db7fc0eb-2899-4c37-bf2e-30d02cbffb2c-catalog-content\") pod \"db7fc0eb-2899-4c37-bf2e-30d02cbffb2c\" (UID: \"db7fc0eb-2899-4c37-bf2e-30d02cbffb2c\") " Feb 17 16:08:39 crc kubenswrapper[4672]: 
I0217 16:08:39.182091 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db7fc0eb-2899-4c37-bf2e-30d02cbffb2c-utilities" (OuterVolumeSpecName: "utilities") pod "db7fc0eb-2899-4c37-bf2e-30d02cbffb2c" (UID: "db7fc0eb-2899-4c37-bf2e-30d02cbffb2c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.189649 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db7fc0eb-2899-4c37-bf2e-30d02cbffb2c-kube-api-access-5bjxk" (OuterVolumeSpecName: "kube-api-access-5bjxk") pod "db7fc0eb-2899-4c37-bf2e-30d02cbffb2c" (UID: "db7fc0eb-2899-4c37-bf2e-30d02cbffb2c"). InnerVolumeSpecName "kube-api-access-5bjxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.202549 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bk22j" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.205763 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wvksq" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.240876 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db7fc0eb-2899-4c37-bf2e-30d02cbffb2c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db7fc0eb-2899-4c37-bf2e-30d02cbffb2c" (UID: "db7fc0eb-2899-4c37-bf2e-30d02cbffb2c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.245368 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nd8vd" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.281320 4672 generic.go:334] "Generic (PLEG): container finished" podID="2ed3c87a-d599-4e91-92ce-377ddef564da" containerID="5c517c6300f36edca4bf3993e5bffd5e0255035b4e89533ff2f910b3db662b36" exitCode=0 Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.281387 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bk22j" event={"ID":"2ed3c87a-d599-4e91-92ce-377ddef564da","Type":"ContainerDied","Data":"5c517c6300f36edca4bf3993e5bffd5e0255035b4e89533ff2f910b3db662b36"} Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.281419 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bk22j" event={"ID":"2ed3c87a-d599-4e91-92ce-377ddef564da","Type":"ContainerDied","Data":"3b6eee1fd7cc5c0e14628e9f22d295d86b1e2a141b8832c504bb8bd1fdafef4d"} Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.281438 4672 scope.go:117] "RemoveContainer" containerID="5c517c6300f36edca4bf3993e5bffd5e0255035b4e89533ff2f910b3db662b36" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.281570 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bk22j" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.281964 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/505bfe60-cd7c-4bd6-981a-c14076ef5387-utilities\") pod \"505bfe60-cd7c-4bd6-981a-c14076ef5387\" (UID: \"505bfe60-cd7c-4bd6-981a-c14076ef5387\") " Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.282060 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zc5l\" (UniqueName: \"kubernetes.io/projected/505bfe60-cd7c-4bd6-981a-c14076ef5387-kube-api-access-9zc5l\") pod \"505bfe60-cd7c-4bd6-981a-c14076ef5387\" (UID: \"505bfe60-cd7c-4bd6-981a-c14076ef5387\") " Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.282119 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/505bfe60-cd7c-4bd6-981a-c14076ef5387-catalog-content\") pod \"505bfe60-cd7c-4bd6-981a-c14076ef5387\" (UID: \"505bfe60-cd7c-4bd6-981a-c14076ef5387\") " Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.282390 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db7fc0eb-2899-4c37-bf2e-30d02cbffb2c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.282405 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db7fc0eb-2899-4c37-bf2e-30d02cbffb2c-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.282417 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bjxk\" (UniqueName: \"kubernetes.io/projected/db7fc0eb-2899-4c37-bf2e-30d02cbffb2c-kube-api-access-5bjxk\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:39 crc 
kubenswrapper[4672]: I0217 16:08:39.282639 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/505bfe60-cd7c-4bd6-981a-c14076ef5387-utilities" (OuterVolumeSpecName: "utilities") pod "505bfe60-cd7c-4bd6-981a-c14076ef5387" (UID: "505bfe60-cd7c-4bd6-981a-c14076ef5387"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.287229 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/505bfe60-cd7c-4bd6-981a-c14076ef5387-kube-api-access-9zc5l" (OuterVolumeSpecName: "kube-api-access-9zc5l") pod "505bfe60-cd7c-4bd6-981a-c14076ef5387" (UID: "505bfe60-cd7c-4bd6-981a-c14076ef5387"). InnerVolumeSpecName "kube-api-access-9zc5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.289413 4672 generic.go:334] "Generic (PLEG): container finished" podID="505bfe60-cd7c-4bd6-981a-c14076ef5387" containerID="a9533c4cdad3c1526df3d843142d6e6c5c83b9d67292d22e2a1bab0da3bb87cd" exitCode=0 Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.289461 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l98cc" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.289498 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l98cc" event={"ID":"505bfe60-cd7c-4bd6-981a-c14076ef5387","Type":"ContainerDied","Data":"a9533c4cdad3c1526df3d843142d6e6c5c83b9d67292d22e2a1bab0da3bb87cd"} Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.289566 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l98cc" event={"ID":"505bfe60-cd7c-4bd6-981a-c14076ef5387","Type":"ContainerDied","Data":"727715973d29ce2d5ce3c22ad26343a7fc530c19d18ad238c267ca8336d54458"} Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.295347 4672 generic.go:334] "Generic (PLEG): container finished" podID="028c8d9b-9bd5-4cf5-9628-849e8b5aacaf" containerID="6bc7ef43a67eab305d8e0ff78868a68557dbcc5610a7c4c68bed609473ee87a5" exitCode=0 Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.295413 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nd8vd" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.295421 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nd8vd" event={"ID":"028c8d9b-9bd5-4cf5-9628-849e8b5aacaf","Type":"ContainerDied","Data":"6bc7ef43a67eab305d8e0ff78868a68557dbcc5610a7c4c68bed609473ee87a5"} Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.295449 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nd8vd" event={"ID":"028c8d9b-9bd5-4cf5-9628-849e8b5aacaf","Type":"ContainerDied","Data":"30e115772276cf76f98239cbf3d5b791aecaa890069de1bf99a8704d568e0d46"} Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.297416 4672 scope.go:117] "RemoveContainer" containerID="994f5beba1593c7a76312740bff3f4e0fd815bb7e935f7bd0b28b9387dabdf02" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.299883 4672 generic.go:334] "Generic (PLEG): container finished" podID="db7fc0eb-2899-4c37-bf2e-30d02cbffb2c" containerID="58f57cd507ac7f181e0427802d1a565f71f0d0631613d6d531b76ba76120e44a" exitCode=0 Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.299941 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxnc7" event={"ID":"db7fc0eb-2899-4c37-bf2e-30d02cbffb2c","Type":"ContainerDied","Data":"58f57cd507ac7f181e0427802d1a565f71f0d0631613d6d531b76ba76120e44a"} Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.299958 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxnc7" event={"ID":"db7fc0eb-2899-4c37-bf2e-30d02cbffb2c","Type":"ContainerDied","Data":"222782152d1da94aaa5f20349999704de626197a46e88a664a414272eca82da0"} Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.299960 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vxnc7" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.309795 4672 generic.go:334] "Generic (PLEG): container finished" podID="708084b0-bae5-4cfc-ab45-cc5ca619f849" containerID="2d406caf07c8a3823ad5a2c63720af4c6685e036cb907f90d73608c637fcc924" exitCode=0 Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.309836 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wvksq" event={"ID":"708084b0-bae5-4cfc-ab45-cc5ca619f849","Type":"ContainerDied","Data":"2d406caf07c8a3823ad5a2c63720af4c6685e036cb907f90d73608c637fcc924"} Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.309864 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wvksq" event={"ID":"708084b0-bae5-4cfc-ab45-cc5ca619f849","Type":"ContainerDied","Data":"0a2922d54675109365da6a9bb1d000eb6f34624d4bcfe1a4a6b22a7ed5c5a72a"} Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.309936 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wvksq" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.316109 4672 scope.go:117] "RemoveContainer" containerID="5c517c6300f36edca4bf3993e5bffd5e0255035b4e89533ff2f910b3db662b36" Feb 17 16:08:39 crc kubenswrapper[4672]: E0217 16:08:39.316507 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c517c6300f36edca4bf3993e5bffd5e0255035b4e89533ff2f910b3db662b36\": container with ID starting with 5c517c6300f36edca4bf3993e5bffd5e0255035b4e89533ff2f910b3db662b36 not found: ID does not exist" containerID="5c517c6300f36edca4bf3993e5bffd5e0255035b4e89533ff2f910b3db662b36" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.316560 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c517c6300f36edca4bf3993e5bffd5e0255035b4e89533ff2f910b3db662b36"} err="failed to get container status \"5c517c6300f36edca4bf3993e5bffd5e0255035b4e89533ff2f910b3db662b36\": rpc error: code = NotFound desc = could not find container \"5c517c6300f36edca4bf3993e5bffd5e0255035b4e89533ff2f910b3db662b36\": container with ID starting with 5c517c6300f36edca4bf3993e5bffd5e0255035b4e89533ff2f910b3db662b36 not found: ID does not exist" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.316581 4672 scope.go:117] "RemoveContainer" containerID="994f5beba1593c7a76312740bff3f4e0fd815bb7e935f7bd0b28b9387dabdf02" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.316691 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/505bfe60-cd7c-4bd6-981a-c14076ef5387-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "505bfe60-cd7c-4bd6-981a-c14076ef5387" (UID: "505bfe60-cd7c-4bd6-981a-c14076ef5387"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:08:39 crc kubenswrapper[4672]: E0217 16:08:39.316785 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"994f5beba1593c7a76312740bff3f4e0fd815bb7e935f7bd0b28b9387dabdf02\": container with ID starting with 994f5beba1593c7a76312740bff3f4e0fd815bb7e935f7bd0b28b9387dabdf02 not found: ID does not exist" containerID="994f5beba1593c7a76312740bff3f4e0fd815bb7e935f7bd0b28b9387dabdf02" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.316802 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"994f5beba1593c7a76312740bff3f4e0fd815bb7e935f7bd0b28b9387dabdf02"} err="failed to get container status \"994f5beba1593c7a76312740bff3f4e0fd815bb7e935f7bd0b28b9387dabdf02\": rpc error: code = NotFound desc = could not find container \"994f5beba1593c7a76312740bff3f4e0fd815bb7e935f7bd0b28b9387dabdf02\": container with ID starting with 994f5beba1593c7a76312740bff3f4e0fd815bb7e935f7bd0b28b9387dabdf02 not found: ID does not exist" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.316814 4672 scope.go:117] "RemoveContainer" containerID="a9533c4cdad3c1526df3d843142d6e6c5c83b9d67292d22e2a1bab0da3bb87cd" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.341126 4672 scope.go:117] "RemoveContainer" containerID="1333dd37a3bda9e4e70598bbb65be2a6fe1acfb184436daf55c9bc0fbb437b96" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.342997 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vxnc7"] Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.347755 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vxnc7"] Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.355993 4672 scope.go:117] "RemoveContainer" containerID="5da309d255743a05c224bccd21910188e4ec29791e044f3d15ffb336e2c43d29" 
Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.376681 4672 scope.go:117] "RemoveContainer" containerID="a9533c4cdad3c1526df3d843142d6e6c5c83b9d67292d22e2a1bab0da3bb87cd" Feb 17 16:08:39 crc kubenswrapper[4672]: E0217 16:08:39.381396 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9533c4cdad3c1526df3d843142d6e6c5c83b9d67292d22e2a1bab0da3bb87cd\": container with ID starting with a9533c4cdad3c1526df3d843142d6e6c5c83b9d67292d22e2a1bab0da3bb87cd not found: ID does not exist" containerID="a9533c4cdad3c1526df3d843142d6e6c5c83b9d67292d22e2a1bab0da3bb87cd" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.381454 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9533c4cdad3c1526df3d843142d6e6c5c83b9d67292d22e2a1bab0da3bb87cd"} err="failed to get container status \"a9533c4cdad3c1526df3d843142d6e6c5c83b9d67292d22e2a1bab0da3bb87cd\": rpc error: code = NotFound desc = could not find container \"a9533c4cdad3c1526df3d843142d6e6c5c83b9d67292d22e2a1bab0da3bb87cd\": container with ID starting with a9533c4cdad3c1526df3d843142d6e6c5c83b9d67292d22e2a1bab0da3bb87cd not found: ID does not exist" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.381490 4672 scope.go:117] "RemoveContainer" containerID="1333dd37a3bda9e4e70598bbb65be2a6fe1acfb184436daf55c9bc0fbb437b96" Feb 17 16:08:39 crc kubenswrapper[4672]: E0217 16:08:39.381972 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1333dd37a3bda9e4e70598bbb65be2a6fe1acfb184436daf55c9bc0fbb437b96\": container with ID starting with 1333dd37a3bda9e4e70598bbb65be2a6fe1acfb184436daf55c9bc0fbb437b96 not found: ID does not exist" containerID="1333dd37a3bda9e4e70598bbb65be2a6fe1acfb184436daf55c9bc0fbb437b96" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.382001 4672 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"1333dd37a3bda9e4e70598bbb65be2a6fe1acfb184436daf55c9bc0fbb437b96"} err="failed to get container status \"1333dd37a3bda9e4e70598bbb65be2a6fe1acfb184436daf55c9bc0fbb437b96\": rpc error: code = NotFound desc = could not find container \"1333dd37a3bda9e4e70598bbb65be2a6fe1acfb184436daf55c9bc0fbb437b96\": container with ID starting with 1333dd37a3bda9e4e70598bbb65be2a6fe1acfb184436daf55c9bc0fbb437b96 not found: ID does not exist" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.382021 4672 scope.go:117] "RemoveContainer" containerID="5da309d255743a05c224bccd21910188e4ec29791e044f3d15ffb336e2c43d29" Feb 17 16:08:39 crc kubenswrapper[4672]: E0217 16:08:39.382309 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5da309d255743a05c224bccd21910188e4ec29791e044f3d15ffb336e2c43d29\": container with ID starting with 5da309d255743a05c224bccd21910188e4ec29791e044f3d15ffb336e2c43d29 not found: ID does not exist" containerID="5da309d255743a05c224bccd21910188e4ec29791e044f3d15ffb336e2c43d29" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.382371 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5da309d255743a05c224bccd21910188e4ec29791e044f3d15ffb336e2c43d29"} err="failed to get container status \"5da309d255743a05c224bccd21910188e4ec29791e044f3d15ffb336e2c43d29\": rpc error: code = NotFound desc = could not find container \"5da309d255743a05c224bccd21910188e4ec29791e044f3d15ffb336e2c43d29\": container with ID starting with 5da309d255743a05c224bccd21910188e4ec29791e044f3d15ffb336e2c43d29 not found: ID does not exist" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.382444 4672 scope.go:117] "RemoveContainer" containerID="6bc7ef43a67eab305d8e0ff78868a68557dbcc5610a7c4c68bed609473ee87a5" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.383022 4672 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/028c8d9b-9bd5-4cf5-9628-849e8b5aacaf-catalog-content\") pod \"028c8d9b-9bd5-4cf5-9628-849e8b5aacaf\" (UID: \"028c8d9b-9bd5-4cf5-9628-849e8b5aacaf\") " Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.383055 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5zw5\" (UniqueName: \"kubernetes.io/projected/708084b0-bae5-4cfc-ab45-cc5ca619f849-kube-api-access-r5zw5\") pod \"708084b0-bae5-4cfc-ab45-cc5ca619f849\" (UID: \"708084b0-bae5-4cfc-ab45-cc5ca619f849\") " Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.383128 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpbdr\" (UniqueName: \"kubernetes.io/projected/2ed3c87a-d599-4e91-92ce-377ddef564da-kube-api-access-qpbdr\") pod \"2ed3c87a-d599-4e91-92ce-377ddef564da\" (UID: \"2ed3c87a-d599-4e91-92ce-377ddef564da\") " Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.383147 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/708084b0-bae5-4cfc-ab45-cc5ca619f849-catalog-content\") pod \"708084b0-bae5-4cfc-ab45-cc5ca619f849\" (UID: \"708084b0-bae5-4cfc-ab45-cc5ca619f849\") " Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.383171 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2ed3c87a-d599-4e91-92ce-377ddef564da-marketplace-operator-metrics\") pod \"2ed3c87a-d599-4e91-92ce-377ddef564da\" (UID: \"2ed3c87a-d599-4e91-92ce-377ddef564da\") " Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.383198 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/2ed3c87a-d599-4e91-92ce-377ddef564da-marketplace-trusted-ca\") pod \"2ed3c87a-d599-4e91-92ce-377ddef564da\" (UID: \"2ed3c87a-d599-4e91-92ce-377ddef564da\") " Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.383213 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bz2v9\" (UniqueName: \"kubernetes.io/projected/028c8d9b-9bd5-4cf5-9628-849e8b5aacaf-kube-api-access-bz2v9\") pod \"028c8d9b-9bd5-4cf5-9628-849e8b5aacaf\" (UID: \"028c8d9b-9bd5-4cf5-9628-849e8b5aacaf\") " Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.383238 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/708084b0-bae5-4cfc-ab45-cc5ca619f849-utilities\") pod \"708084b0-bae5-4cfc-ab45-cc5ca619f849\" (UID: \"708084b0-bae5-4cfc-ab45-cc5ca619f849\") " Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.383268 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/028c8d9b-9bd5-4cf5-9628-849e8b5aacaf-utilities\") pod \"028c8d9b-9bd5-4cf5-9628-849e8b5aacaf\" (UID: \"028c8d9b-9bd5-4cf5-9628-849e8b5aacaf\") " Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.383470 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/505bfe60-cd7c-4bd6-981a-c14076ef5387-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.383481 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/505bfe60-cd7c-4bd6-981a-c14076ef5387-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.383491 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zc5l\" (UniqueName: 
\"kubernetes.io/projected/505bfe60-cd7c-4bd6-981a-c14076ef5387-kube-api-access-9zc5l\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.384331 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/028c8d9b-9bd5-4cf5-9628-849e8b5aacaf-utilities" (OuterVolumeSpecName: "utilities") pod "028c8d9b-9bd5-4cf5-9628-849e8b5aacaf" (UID: "028c8d9b-9bd5-4cf5-9628-849e8b5aacaf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.384660 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ed3c87a-d599-4e91-92ce-377ddef564da-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "2ed3c87a-d599-4e91-92ce-377ddef564da" (UID: "2ed3c87a-d599-4e91-92ce-377ddef564da"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.386030 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/028c8d9b-9bd5-4cf5-9628-849e8b5aacaf-kube-api-access-bz2v9" (OuterVolumeSpecName: "kube-api-access-bz2v9") pod "028c8d9b-9bd5-4cf5-9628-849e8b5aacaf" (UID: "028c8d9b-9bd5-4cf5-9628-849e8b5aacaf"). InnerVolumeSpecName "kube-api-access-bz2v9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.386378 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ed3c87a-d599-4e91-92ce-377ddef564da-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "2ed3c87a-d599-4e91-92ce-377ddef564da" (UID: "2ed3c87a-d599-4e91-92ce-377ddef564da"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.386962 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ed3c87a-d599-4e91-92ce-377ddef564da-kube-api-access-qpbdr" (OuterVolumeSpecName: "kube-api-access-qpbdr") pod "2ed3c87a-d599-4e91-92ce-377ddef564da" (UID: "2ed3c87a-d599-4e91-92ce-377ddef564da"). InnerVolumeSpecName "kube-api-access-qpbdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.389084 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/708084b0-bae5-4cfc-ab45-cc5ca619f849-kube-api-access-r5zw5" (OuterVolumeSpecName: "kube-api-access-r5zw5") pod "708084b0-bae5-4cfc-ab45-cc5ca619f849" (UID: "708084b0-bae5-4cfc-ab45-cc5ca619f849"). InnerVolumeSpecName "kube-api-access-r5zw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.394529 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/708084b0-bae5-4cfc-ab45-cc5ca619f849-utilities" (OuterVolumeSpecName: "utilities") pod "708084b0-bae5-4cfc-ab45-cc5ca619f849" (UID: "708084b0-bae5-4cfc-ab45-cc5ca619f849"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.398198 4672 scope.go:117] "RemoveContainer" containerID="abc34a9b0dabc007afa1aef5113eca567847d8b7316bcfb7cc5bd1134a2e0951" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.412784 4672 scope.go:117] "RemoveContainer" containerID="2d186d33b5c78309395ce14e7c14e371ed64262e50f5f2fd92d85dfc7c570340" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.430280 4672 scope.go:117] "RemoveContainer" containerID="6bc7ef43a67eab305d8e0ff78868a68557dbcc5610a7c4c68bed609473ee87a5" Feb 17 16:08:39 crc kubenswrapper[4672]: E0217 16:08:39.430677 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bc7ef43a67eab305d8e0ff78868a68557dbcc5610a7c4c68bed609473ee87a5\": container with ID starting with 6bc7ef43a67eab305d8e0ff78868a68557dbcc5610a7c4c68bed609473ee87a5 not found: ID does not exist" containerID="6bc7ef43a67eab305d8e0ff78868a68557dbcc5610a7c4c68bed609473ee87a5" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.430711 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bc7ef43a67eab305d8e0ff78868a68557dbcc5610a7c4c68bed609473ee87a5"} err="failed to get container status \"6bc7ef43a67eab305d8e0ff78868a68557dbcc5610a7c4c68bed609473ee87a5\": rpc error: code = NotFound desc = could not find container \"6bc7ef43a67eab305d8e0ff78868a68557dbcc5610a7c4c68bed609473ee87a5\": container with ID starting with 6bc7ef43a67eab305d8e0ff78868a68557dbcc5610a7c4c68bed609473ee87a5 not found: ID does not exist" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.430733 4672 scope.go:117] "RemoveContainer" containerID="abc34a9b0dabc007afa1aef5113eca567847d8b7316bcfb7cc5bd1134a2e0951" Feb 17 16:08:39 crc kubenswrapper[4672]: E0217 16:08:39.431161 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"abc34a9b0dabc007afa1aef5113eca567847d8b7316bcfb7cc5bd1134a2e0951\": container with ID starting with abc34a9b0dabc007afa1aef5113eca567847d8b7316bcfb7cc5bd1134a2e0951 not found: ID does not exist" containerID="abc34a9b0dabc007afa1aef5113eca567847d8b7316bcfb7cc5bd1134a2e0951" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.431209 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abc34a9b0dabc007afa1aef5113eca567847d8b7316bcfb7cc5bd1134a2e0951"} err="failed to get container status \"abc34a9b0dabc007afa1aef5113eca567847d8b7316bcfb7cc5bd1134a2e0951\": rpc error: code = NotFound desc = could not find container \"abc34a9b0dabc007afa1aef5113eca567847d8b7316bcfb7cc5bd1134a2e0951\": container with ID starting with abc34a9b0dabc007afa1aef5113eca567847d8b7316bcfb7cc5bd1134a2e0951 not found: ID does not exist" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.431246 4672 scope.go:117] "RemoveContainer" containerID="2d186d33b5c78309395ce14e7c14e371ed64262e50f5f2fd92d85dfc7c570340" Feb 17 16:08:39 crc kubenswrapper[4672]: E0217 16:08:39.431734 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d186d33b5c78309395ce14e7c14e371ed64262e50f5f2fd92d85dfc7c570340\": container with ID starting with 2d186d33b5c78309395ce14e7c14e371ed64262e50f5f2fd92d85dfc7c570340 not found: ID does not exist" containerID="2d186d33b5c78309395ce14e7c14e371ed64262e50f5f2fd92d85dfc7c570340" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.431754 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d186d33b5c78309395ce14e7c14e371ed64262e50f5f2fd92d85dfc7c570340"} err="failed to get container status \"2d186d33b5c78309395ce14e7c14e371ed64262e50f5f2fd92d85dfc7c570340\": rpc error: code = NotFound desc = could not find container \"2d186d33b5c78309395ce14e7c14e371ed64262e50f5f2fd92d85dfc7c570340\": 
container with ID starting with 2d186d33b5c78309395ce14e7c14e371ed64262e50f5f2fd92d85dfc7c570340 not found: ID does not exist" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.431768 4672 scope.go:117] "RemoveContainer" containerID="58f57cd507ac7f181e0427802d1a565f71f0d0631613d6d531b76ba76120e44a" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.443324 4672 scope.go:117] "RemoveContainer" containerID="f027eff199f65139fc942a8436409f8687775cc5c5e1fe14f9e3d4aac9ca733d" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.453359 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/708084b0-bae5-4cfc-ab45-cc5ca619f849-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "708084b0-bae5-4cfc-ab45-cc5ca619f849" (UID: "708084b0-bae5-4cfc-ab45-cc5ca619f849"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.459907 4672 scope.go:117] "RemoveContainer" containerID="f2396035ff02b1dd27bf937c4f39c840fc8601c3ee37d7b4752cd18626c58a62" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.474286 4672 scope.go:117] "RemoveContainer" containerID="58f57cd507ac7f181e0427802d1a565f71f0d0631613d6d531b76ba76120e44a" Feb 17 16:08:39 crc kubenswrapper[4672]: E0217 16:08:39.474662 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58f57cd507ac7f181e0427802d1a565f71f0d0631613d6d531b76ba76120e44a\": container with ID starting with 58f57cd507ac7f181e0427802d1a565f71f0d0631613d6d531b76ba76120e44a not found: ID does not exist" containerID="58f57cd507ac7f181e0427802d1a565f71f0d0631613d6d531b76ba76120e44a" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.474793 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58f57cd507ac7f181e0427802d1a565f71f0d0631613d6d531b76ba76120e44a"} err="failed to 
get container status \"58f57cd507ac7f181e0427802d1a565f71f0d0631613d6d531b76ba76120e44a\": rpc error: code = NotFound desc = could not find container \"58f57cd507ac7f181e0427802d1a565f71f0d0631613d6d531b76ba76120e44a\": container with ID starting with 58f57cd507ac7f181e0427802d1a565f71f0d0631613d6d531b76ba76120e44a not found: ID does not exist" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.474888 4672 scope.go:117] "RemoveContainer" containerID="f027eff199f65139fc942a8436409f8687775cc5c5e1fe14f9e3d4aac9ca733d" Feb 17 16:08:39 crc kubenswrapper[4672]: E0217 16:08:39.475175 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f027eff199f65139fc942a8436409f8687775cc5c5e1fe14f9e3d4aac9ca733d\": container with ID starting with f027eff199f65139fc942a8436409f8687775cc5c5e1fe14f9e3d4aac9ca733d not found: ID does not exist" containerID="f027eff199f65139fc942a8436409f8687775cc5c5e1fe14f9e3d4aac9ca733d" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.475264 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f027eff199f65139fc942a8436409f8687775cc5c5e1fe14f9e3d4aac9ca733d"} err="failed to get container status \"f027eff199f65139fc942a8436409f8687775cc5c5e1fe14f9e3d4aac9ca733d\": rpc error: code = NotFound desc = could not find container \"f027eff199f65139fc942a8436409f8687775cc5c5e1fe14f9e3d4aac9ca733d\": container with ID starting with f027eff199f65139fc942a8436409f8687775cc5c5e1fe14f9e3d4aac9ca733d not found: ID does not exist" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.475343 4672 scope.go:117] "RemoveContainer" containerID="f2396035ff02b1dd27bf937c4f39c840fc8601c3ee37d7b4752cd18626c58a62" Feb 17 16:08:39 crc kubenswrapper[4672]: E0217 16:08:39.475639 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f2396035ff02b1dd27bf937c4f39c840fc8601c3ee37d7b4752cd18626c58a62\": container with ID starting with f2396035ff02b1dd27bf937c4f39c840fc8601c3ee37d7b4752cd18626c58a62 not found: ID does not exist" containerID="f2396035ff02b1dd27bf937c4f39c840fc8601c3ee37d7b4752cd18626c58a62" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.475740 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2396035ff02b1dd27bf937c4f39c840fc8601c3ee37d7b4752cd18626c58a62"} err="failed to get container status \"f2396035ff02b1dd27bf937c4f39c840fc8601c3ee37d7b4752cd18626c58a62\": rpc error: code = NotFound desc = could not find container \"f2396035ff02b1dd27bf937c4f39c840fc8601c3ee37d7b4752cd18626c58a62\": container with ID starting with f2396035ff02b1dd27bf937c4f39c840fc8601c3ee37d7b4752cd18626c58a62 not found: ID does not exist" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.475817 4672 scope.go:117] "RemoveContainer" containerID="2d406caf07c8a3823ad5a2c63720af4c6685e036cb907f90d73608c637fcc924" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.484254 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpbdr\" (UniqueName: \"kubernetes.io/projected/2ed3c87a-d599-4e91-92ce-377ddef564da-kube-api-access-qpbdr\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.484362 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/708084b0-bae5-4cfc-ab45-cc5ca619f849-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.484423 4672 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2ed3c87a-d599-4e91-92ce-377ddef564da-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.484480 4672 reconciler_common.go:293] "Volume 
detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2ed3c87a-d599-4e91-92ce-377ddef564da-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.484556 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bz2v9\" (UniqueName: \"kubernetes.io/projected/028c8d9b-9bd5-4cf5-9628-849e8b5aacaf-kube-api-access-bz2v9\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.484643 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/708084b0-bae5-4cfc-ab45-cc5ca619f849-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.484798 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/028c8d9b-9bd5-4cf5-9628-849e8b5aacaf-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.484870 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5zw5\" (UniqueName: \"kubernetes.io/projected/708084b0-bae5-4cfc-ab45-cc5ca619f849-kube-api-access-r5zw5\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.488998 4672 scope.go:117] "RemoveContainer" containerID="657c071c45776eef125e1f6068661c8962dc83e28dc6d1b9c7258eeae8d864fc" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.501271 4672 scope.go:117] "RemoveContainer" containerID="8e44341433926e5449b1c01c36ccbef82e6d772b6a78c5bac34c1cf02db9143c" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.512418 4672 scope.go:117] "RemoveContainer" containerID="2d406caf07c8a3823ad5a2c63720af4c6685e036cb907f90d73608c637fcc924" Feb 17 16:08:39 crc kubenswrapper[4672]: E0217 16:08:39.513378 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2d406caf07c8a3823ad5a2c63720af4c6685e036cb907f90d73608c637fcc924\": container with ID starting with 2d406caf07c8a3823ad5a2c63720af4c6685e036cb907f90d73608c637fcc924 not found: ID does not exist" containerID="2d406caf07c8a3823ad5a2c63720af4c6685e036cb907f90d73608c637fcc924" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.513520 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d406caf07c8a3823ad5a2c63720af4c6685e036cb907f90d73608c637fcc924"} err="failed to get container status \"2d406caf07c8a3823ad5a2c63720af4c6685e036cb907f90d73608c637fcc924\": rpc error: code = NotFound desc = could not find container \"2d406caf07c8a3823ad5a2c63720af4c6685e036cb907f90d73608c637fcc924\": container with ID starting with 2d406caf07c8a3823ad5a2c63720af4c6685e036cb907f90d73608c637fcc924 not found: ID does not exist" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.513870 4672 scope.go:117] "RemoveContainer" containerID="657c071c45776eef125e1f6068661c8962dc83e28dc6d1b9c7258eeae8d864fc" Feb 17 16:08:39 crc kubenswrapper[4672]: E0217 16:08:39.514139 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"657c071c45776eef125e1f6068661c8962dc83e28dc6d1b9c7258eeae8d864fc\": container with ID starting with 657c071c45776eef125e1f6068661c8962dc83e28dc6d1b9c7258eeae8d864fc not found: ID does not exist" containerID="657c071c45776eef125e1f6068661c8962dc83e28dc6d1b9c7258eeae8d864fc" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.514223 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"657c071c45776eef125e1f6068661c8962dc83e28dc6d1b9c7258eeae8d864fc"} err="failed to get container status \"657c071c45776eef125e1f6068661c8962dc83e28dc6d1b9c7258eeae8d864fc\": rpc error: code = NotFound desc = could not find container \"657c071c45776eef125e1f6068661c8962dc83e28dc6d1b9c7258eeae8d864fc\": container with ID 
starting with 657c071c45776eef125e1f6068661c8962dc83e28dc6d1b9c7258eeae8d864fc not found: ID does not exist" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.514292 4672 scope.go:117] "RemoveContainer" containerID="8e44341433926e5449b1c01c36ccbef82e6d772b6a78c5bac34c1cf02db9143c" Feb 17 16:08:39 crc kubenswrapper[4672]: E0217 16:08:39.514669 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e44341433926e5449b1c01c36ccbef82e6d772b6a78c5bac34c1cf02db9143c\": container with ID starting with 8e44341433926e5449b1c01c36ccbef82e6d772b6a78c5bac34c1cf02db9143c not found: ID does not exist" containerID="8e44341433926e5449b1c01c36ccbef82e6d772b6a78c5bac34c1cf02db9143c" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.514768 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e44341433926e5449b1c01c36ccbef82e6d772b6a78c5bac34c1cf02db9143c"} err="failed to get container status \"8e44341433926e5449b1c01c36ccbef82e6d772b6a78c5bac34c1cf02db9143c\": rpc error: code = NotFound desc = could not find container \"8e44341433926e5449b1c01c36ccbef82e6d772b6a78c5bac34c1cf02db9143c\": container with ID starting with 8e44341433926e5449b1c01c36ccbef82e6d772b6a78c5bac34c1cf02db9143c not found: ID does not exist" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.527197 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/028c8d9b-9bd5-4cf5-9628-849e8b5aacaf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "028c8d9b-9bd5-4cf5-9628-849e8b5aacaf" (UID: "028c8d9b-9bd5-4cf5-9628-849e8b5aacaf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.543810 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ctrp5"] Feb 17 16:08:39 crc kubenswrapper[4672]: W0217 16:08:39.552163 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc41c2d4d_2194_4562_97e0_69f36cf4007f.slice/crio-01e8f7cc5297a99ee03540021e8508f7b5b3916a7a0cb2c7e046e9ad87b35d3e WatchSource:0}: Error finding container 01e8f7cc5297a99ee03540021e8508f7b5b3916a7a0cb2c7e046e9ad87b35d3e: Status 404 returned error can't find the container with id 01e8f7cc5297a99ee03540021e8508f7b5b3916a7a0cb2c7e046e9ad87b35d3e Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.587835 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/028c8d9b-9bd5-4cf5-9628-849e8b5aacaf-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.617854 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l98cc"] Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.621754 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l98cc"] Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.634670 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bk22j"] Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.638717 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bk22j"] Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.645488 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nd8vd"] Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.657243 4672 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nd8vd"] Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.662856 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wvksq"] Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.666656 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wvksq"] Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.960104 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="028c8d9b-9bd5-4cf5-9628-849e8b5aacaf" path="/var/lib/kubelet/pods/028c8d9b-9bd5-4cf5-9628-849e8b5aacaf/volumes" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.962725 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ed3c87a-d599-4e91-92ce-377ddef564da" path="/var/lib/kubelet/pods/2ed3c87a-d599-4e91-92ce-377ddef564da/volumes" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.963204 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="505bfe60-cd7c-4bd6-981a-c14076ef5387" path="/var/lib/kubelet/pods/505bfe60-cd7c-4bd6-981a-c14076ef5387/volumes" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.965694 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="708084b0-bae5-4cfc-ab45-cc5ca619f849" path="/var/lib/kubelet/pods/708084b0-bae5-4cfc-ab45-cc5ca619f849/volumes" Feb 17 16:08:39 crc kubenswrapper[4672]: I0217 16:08:39.966245 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db7fc0eb-2899-4c37-bf2e-30d02cbffb2c" path="/var/lib/kubelet/pods/db7fc0eb-2899-4c37-bf2e-30d02cbffb2c/volumes" Feb 17 16:08:40 crc kubenswrapper[4672]: I0217 16:08:40.320793 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ctrp5" 
event={"ID":"c41c2d4d-2194-4562-97e0-69f36cf4007f","Type":"ContainerStarted","Data":"0e38af3791769fc1bfe998285a83eeb0538c217b2f7267d7fb3d0f0233138591"} Feb 17 16:08:40 crc kubenswrapper[4672]: I0217 16:08:40.321435 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-ctrp5" Feb 17 16:08:40 crc kubenswrapper[4672]: I0217 16:08:40.321673 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ctrp5" event={"ID":"c41c2d4d-2194-4562-97e0-69f36cf4007f","Type":"ContainerStarted","Data":"01e8f7cc5297a99ee03540021e8508f7b5b3916a7a0cb2c7e046e9ad87b35d3e"} Feb 17 16:08:40 crc kubenswrapper[4672]: I0217 16:08:40.326309 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-ctrp5" Feb 17 16:08:40 crc kubenswrapper[4672]: I0217 16:08:40.345871 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-ctrp5" podStartSLOduration=2.345808421 podStartE2EDuration="2.345808421s" podCreationTimestamp="2026-02-17 16:08:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:08:40.337810118 +0000 UTC m=+329.091898840" watchObservedRunningTime="2026-02-17 16:08:40.345808421 +0000 UTC m=+329.099897153" Feb 17 16:08:40 crc kubenswrapper[4672]: I0217 16:08:40.811491 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hc52r"] Feb 17 16:08:40 crc kubenswrapper[4672]: E0217 16:08:40.811954 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="505bfe60-cd7c-4bd6-981a-c14076ef5387" containerName="extract-content" Feb 17 16:08:40 crc kubenswrapper[4672]: I0217 16:08:40.811967 4672 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="505bfe60-cd7c-4bd6-981a-c14076ef5387" containerName="extract-content" Feb 17 16:08:40 crc kubenswrapper[4672]: E0217 16:08:40.812010 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db7fc0eb-2899-4c37-bf2e-30d02cbffb2c" containerName="extract-utilities" Feb 17 16:08:40 crc kubenswrapper[4672]: I0217 16:08:40.812018 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="db7fc0eb-2899-4c37-bf2e-30d02cbffb2c" containerName="extract-utilities" Feb 17 16:08:40 crc kubenswrapper[4672]: E0217 16:08:40.812025 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="505bfe60-cd7c-4bd6-981a-c14076ef5387" containerName="registry-server" Feb 17 16:08:40 crc kubenswrapper[4672]: I0217 16:08:40.812031 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="505bfe60-cd7c-4bd6-981a-c14076ef5387" containerName="registry-server" Feb 17 16:08:40 crc kubenswrapper[4672]: E0217 16:08:40.812040 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="708084b0-bae5-4cfc-ab45-cc5ca619f849" containerName="extract-content" Feb 17 16:08:40 crc kubenswrapper[4672]: I0217 16:08:40.812046 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="708084b0-bae5-4cfc-ab45-cc5ca619f849" containerName="extract-content" Feb 17 16:08:40 crc kubenswrapper[4672]: E0217 16:08:40.812053 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="708084b0-bae5-4cfc-ab45-cc5ca619f849" containerName="registry-server" Feb 17 16:08:40 crc kubenswrapper[4672]: I0217 16:08:40.812058 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="708084b0-bae5-4cfc-ab45-cc5ca619f849" containerName="registry-server" Feb 17 16:08:40 crc kubenswrapper[4672]: E0217 16:08:40.812069 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="028c8d9b-9bd5-4cf5-9628-849e8b5aacaf" containerName="extract-utilities" Feb 17 16:08:40 crc kubenswrapper[4672]: I0217 16:08:40.812074 4672 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="028c8d9b-9bd5-4cf5-9628-849e8b5aacaf" containerName="extract-utilities" Feb 17 16:08:40 crc kubenswrapper[4672]: E0217 16:08:40.812081 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="028c8d9b-9bd5-4cf5-9628-849e8b5aacaf" containerName="extract-content" Feb 17 16:08:40 crc kubenswrapper[4672]: I0217 16:08:40.812086 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="028c8d9b-9bd5-4cf5-9628-849e8b5aacaf" containerName="extract-content" Feb 17 16:08:40 crc kubenswrapper[4672]: E0217 16:08:40.812094 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ed3c87a-d599-4e91-92ce-377ddef564da" containerName="marketplace-operator" Feb 17 16:08:40 crc kubenswrapper[4672]: I0217 16:08:40.812117 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ed3c87a-d599-4e91-92ce-377ddef564da" containerName="marketplace-operator" Feb 17 16:08:40 crc kubenswrapper[4672]: E0217 16:08:40.812125 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db7fc0eb-2899-4c37-bf2e-30d02cbffb2c" containerName="registry-server" Feb 17 16:08:40 crc kubenswrapper[4672]: I0217 16:08:40.812131 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="db7fc0eb-2899-4c37-bf2e-30d02cbffb2c" containerName="registry-server" Feb 17 16:08:40 crc kubenswrapper[4672]: E0217 16:08:40.812140 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="708084b0-bae5-4cfc-ab45-cc5ca619f849" containerName="extract-utilities" Feb 17 16:08:40 crc kubenswrapper[4672]: I0217 16:08:40.812145 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="708084b0-bae5-4cfc-ab45-cc5ca619f849" containerName="extract-utilities" Feb 17 16:08:40 crc kubenswrapper[4672]: E0217 16:08:40.812154 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ed3c87a-d599-4e91-92ce-377ddef564da" containerName="marketplace-operator" Feb 17 16:08:40 crc kubenswrapper[4672]: I0217 16:08:40.812159 4672 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="2ed3c87a-d599-4e91-92ce-377ddef564da" containerName="marketplace-operator" Feb 17 16:08:40 crc kubenswrapper[4672]: E0217 16:08:40.812169 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="028c8d9b-9bd5-4cf5-9628-849e8b5aacaf" containerName="registry-server" Feb 17 16:08:40 crc kubenswrapper[4672]: I0217 16:08:40.812174 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="028c8d9b-9bd5-4cf5-9628-849e8b5aacaf" containerName="registry-server" Feb 17 16:08:40 crc kubenswrapper[4672]: E0217 16:08:40.812185 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="505bfe60-cd7c-4bd6-981a-c14076ef5387" containerName="extract-utilities" Feb 17 16:08:40 crc kubenswrapper[4672]: I0217 16:08:40.812191 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="505bfe60-cd7c-4bd6-981a-c14076ef5387" containerName="extract-utilities" Feb 17 16:08:40 crc kubenswrapper[4672]: E0217 16:08:40.812200 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db7fc0eb-2899-4c37-bf2e-30d02cbffb2c" containerName="extract-content" Feb 17 16:08:40 crc kubenswrapper[4672]: I0217 16:08:40.812208 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="db7fc0eb-2899-4c37-bf2e-30d02cbffb2c" containerName="extract-content" Feb 17 16:08:40 crc kubenswrapper[4672]: I0217 16:08:40.812283 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="028c8d9b-9bd5-4cf5-9628-849e8b5aacaf" containerName="registry-server" Feb 17 16:08:40 crc kubenswrapper[4672]: I0217 16:08:40.812293 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="708084b0-bae5-4cfc-ab45-cc5ca619f849" containerName="registry-server" Feb 17 16:08:40 crc kubenswrapper[4672]: I0217 16:08:40.812299 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="505bfe60-cd7c-4bd6-981a-c14076ef5387" containerName="registry-server" Feb 17 16:08:40 crc kubenswrapper[4672]: I0217 16:08:40.812309 4672 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="db7fc0eb-2899-4c37-bf2e-30d02cbffb2c" containerName="registry-server" Feb 17 16:08:40 crc kubenswrapper[4672]: I0217 16:08:40.812319 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ed3c87a-d599-4e91-92ce-377ddef564da" containerName="marketplace-operator" Feb 17 16:08:40 crc kubenswrapper[4672]: I0217 16:08:40.812485 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ed3c87a-d599-4e91-92ce-377ddef564da" containerName="marketplace-operator" Feb 17 16:08:40 crc kubenswrapper[4672]: I0217 16:08:40.813012 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hc52r" Feb 17 16:08:40 crc kubenswrapper[4672]: I0217 16:08:40.815591 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 17 16:08:40 crc kubenswrapper[4672]: I0217 16:08:40.821124 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hc52r"] Feb 17 16:08:41 crc kubenswrapper[4672]: I0217 16:08:41.006141 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acbcb77a-c8a8-4ec1-80ab-727db7919906-catalog-content\") pod \"certified-operators-hc52r\" (UID: \"acbcb77a-c8a8-4ec1-80ab-727db7919906\") " pod="openshift-marketplace/certified-operators-hc52r" Feb 17 16:08:41 crc kubenswrapper[4672]: I0217 16:08:41.006239 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmqfm\" (UniqueName: \"kubernetes.io/projected/acbcb77a-c8a8-4ec1-80ab-727db7919906-kube-api-access-nmqfm\") pod \"certified-operators-hc52r\" (UID: \"acbcb77a-c8a8-4ec1-80ab-727db7919906\") " pod="openshift-marketplace/certified-operators-hc52r" Feb 17 16:08:41 crc kubenswrapper[4672]: I0217 16:08:41.006532 4672 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acbcb77a-c8a8-4ec1-80ab-727db7919906-utilities\") pod \"certified-operators-hc52r\" (UID: \"acbcb77a-c8a8-4ec1-80ab-727db7919906\") " pod="openshift-marketplace/certified-operators-hc52r" Feb 17 16:08:41 crc kubenswrapper[4672]: I0217 16:08:41.014432 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xgg8j"] Feb 17 16:08:41 crc kubenswrapper[4672]: I0217 16:08:41.015993 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xgg8j" Feb 17 16:08:41 crc kubenswrapper[4672]: I0217 16:08:41.017624 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 17 16:08:41 crc kubenswrapper[4672]: I0217 16:08:41.028434 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xgg8j"] Feb 17 16:08:41 crc kubenswrapper[4672]: I0217 16:08:41.107853 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmqfm\" (UniqueName: \"kubernetes.io/projected/acbcb77a-c8a8-4ec1-80ab-727db7919906-kube-api-access-nmqfm\") pod \"certified-operators-hc52r\" (UID: \"acbcb77a-c8a8-4ec1-80ab-727db7919906\") " pod="openshift-marketplace/certified-operators-hc52r" Feb 17 16:08:41 crc kubenswrapper[4672]: I0217 16:08:41.108273 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acbcb77a-c8a8-4ec1-80ab-727db7919906-utilities\") pod \"certified-operators-hc52r\" (UID: \"acbcb77a-c8a8-4ec1-80ab-727db7919906\") " pod="openshift-marketplace/certified-operators-hc52r" Feb 17 16:08:41 crc kubenswrapper[4672]: I0217 16:08:41.108427 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acbcb77a-c8a8-4ec1-80ab-727db7919906-catalog-content\") pod \"certified-operators-hc52r\" (UID: \"acbcb77a-c8a8-4ec1-80ab-727db7919906\") " pod="openshift-marketplace/certified-operators-hc52r" Feb 17 16:08:41 crc kubenswrapper[4672]: I0217 16:08:41.108826 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acbcb77a-c8a8-4ec1-80ab-727db7919906-utilities\") pod \"certified-operators-hc52r\" (UID: \"acbcb77a-c8a8-4ec1-80ab-727db7919906\") " pod="openshift-marketplace/certified-operators-hc52r" Feb 17 16:08:41 crc kubenswrapper[4672]: I0217 16:08:41.108847 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acbcb77a-c8a8-4ec1-80ab-727db7919906-catalog-content\") pod \"certified-operators-hc52r\" (UID: \"acbcb77a-c8a8-4ec1-80ab-727db7919906\") " pod="openshift-marketplace/certified-operators-hc52r" Feb 17 16:08:41 crc kubenswrapper[4672]: I0217 16:08:41.131426 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmqfm\" (UniqueName: \"kubernetes.io/projected/acbcb77a-c8a8-4ec1-80ab-727db7919906-kube-api-access-nmqfm\") pod \"certified-operators-hc52r\" (UID: \"acbcb77a-c8a8-4ec1-80ab-727db7919906\") " pod="openshift-marketplace/certified-operators-hc52r" Feb 17 16:08:41 crc kubenswrapper[4672]: I0217 16:08:41.139284 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hc52r" Feb 17 16:08:41 crc kubenswrapper[4672]: I0217 16:08:41.210308 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkgjx\" (UniqueName: \"kubernetes.io/projected/7a4859b6-916a-4d5b-beac-e8bb32161f6a-kube-api-access-mkgjx\") pod \"community-operators-xgg8j\" (UID: \"7a4859b6-916a-4d5b-beac-e8bb32161f6a\") " pod="openshift-marketplace/community-operators-xgg8j" Feb 17 16:08:41 crc kubenswrapper[4672]: I0217 16:08:41.210371 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a4859b6-916a-4d5b-beac-e8bb32161f6a-catalog-content\") pod \"community-operators-xgg8j\" (UID: \"7a4859b6-916a-4d5b-beac-e8bb32161f6a\") " pod="openshift-marketplace/community-operators-xgg8j" Feb 17 16:08:41 crc kubenswrapper[4672]: I0217 16:08:41.210404 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a4859b6-916a-4d5b-beac-e8bb32161f6a-utilities\") pod \"community-operators-xgg8j\" (UID: \"7a4859b6-916a-4d5b-beac-e8bb32161f6a\") " pod="openshift-marketplace/community-operators-xgg8j" Feb 17 16:08:41 crc kubenswrapper[4672]: I0217 16:08:41.312282 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkgjx\" (UniqueName: \"kubernetes.io/projected/7a4859b6-916a-4d5b-beac-e8bb32161f6a-kube-api-access-mkgjx\") pod \"community-operators-xgg8j\" (UID: \"7a4859b6-916a-4d5b-beac-e8bb32161f6a\") " pod="openshift-marketplace/community-operators-xgg8j" Feb 17 16:08:41 crc kubenswrapper[4672]: I0217 16:08:41.312353 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a4859b6-916a-4d5b-beac-e8bb32161f6a-catalog-content\") pod 
\"community-operators-xgg8j\" (UID: \"7a4859b6-916a-4d5b-beac-e8bb32161f6a\") " pod="openshift-marketplace/community-operators-xgg8j" Feb 17 16:08:41 crc kubenswrapper[4672]: I0217 16:08:41.312393 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a4859b6-916a-4d5b-beac-e8bb32161f6a-utilities\") pod \"community-operators-xgg8j\" (UID: \"7a4859b6-916a-4d5b-beac-e8bb32161f6a\") " pod="openshift-marketplace/community-operators-xgg8j" Feb 17 16:08:41 crc kubenswrapper[4672]: I0217 16:08:41.313057 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a4859b6-916a-4d5b-beac-e8bb32161f6a-utilities\") pod \"community-operators-xgg8j\" (UID: \"7a4859b6-916a-4d5b-beac-e8bb32161f6a\") " pod="openshift-marketplace/community-operators-xgg8j" Feb 17 16:08:41 crc kubenswrapper[4672]: I0217 16:08:41.313319 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a4859b6-916a-4d5b-beac-e8bb32161f6a-catalog-content\") pod \"community-operators-xgg8j\" (UID: \"7a4859b6-916a-4d5b-beac-e8bb32161f6a\") " pod="openshift-marketplace/community-operators-xgg8j" Feb 17 16:08:41 crc kubenswrapper[4672]: I0217 16:08:41.339607 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkgjx\" (UniqueName: \"kubernetes.io/projected/7a4859b6-916a-4d5b-beac-e8bb32161f6a-kube-api-access-mkgjx\") pod \"community-operators-xgg8j\" (UID: \"7a4859b6-916a-4d5b-beac-e8bb32161f6a\") " pod="openshift-marketplace/community-operators-xgg8j" Feb 17 16:08:41 crc kubenswrapper[4672]: I0217 16:08:41.542125 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hc52r"] Feb 17 16:08:41 crc kubenswrapper[4672]: W0217 16:08:41.555315 4672 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podacbcb77a_c8a8_4ec1_80ab_727db7919906.slice/crio-24b938829398ad3613fc187683fbe8e2d981b6188931f466d5a7d3cbad317041 WatchSource:0}: Error finding container 24b938829398ad3613fc187683fbe8e2d981b6188931f466d5a7d3cbad317041: Status 404 returned error can't find the container with id 24b938829398ad3613fc187683fbe8e2d981b6188931f466d5a7d3cbad317041 Feb 17 16:08:41 crc kubenswrapper[4672]: I0217 16:08:41.633387 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xgg8j" Feb 17 16:08:42 crc kubenswrapper[4672]: I0217 16:08:42.040080 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xgg8j"] Feb 17 16:08:42 crc kubenswrapper[4672]: I0217 16:08:42.345476 4672 generic.go:334] "Generic (PLEG): container finished" podID="7a4859b6-916a-4d5b-beac-e8bb32161f6a" containerID="b9dc876c952b80cef41ee1403907e0fdcd9527cb3daf8a21fde9b925cbd0b88c" exitCode=0 Feb 17 16:08:42 crc kubenswrapper[4672]: I0217 16:08:42.345554 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xgg8j" event={"ID":"7a4859b6-916a-4d5b-beac-e8bb32161f6a","Type":"ContainerDied","Data":"b9dc876c952b80cef41ee1403907e0fdcd9527cb3daf8a21fde9b925cbd0b88c"} Feb 17 16:08:42 crc kubenswrapper[4672]: I0217 16:08:42.345579 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xgg8j" event={"ID":"7a4859b6-916a-4d5b-beac-e8bb32161f6a","Type":"ContainerStarted","Data":"ad820a0e65adf1666de175a40487a577cb6667b561cb030be59aa51de2957dcd"} Feb 17 16:08:42 crc kubenswrapper[4672]: I0217 16:08:42.347288 4672 generic.go:334] "Generic (PLEG): container finished" podID="acbcb77a-c8a8-4ec1-80ab-727db7919906" containerID="a46d8780592ca8257b3e27b5ed33678711d4be7299d82638955d4f8db2c44d52" exitCode=0 Feb 17 16:08:42 crc kubenswrapper[4672]: I0217 16:08:42.347930 
4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hc52r" event={"ID":"acbcb77a-c8a8-4ec1-80ab-727db7919906","Type":"ContainerDied","Data":"a46d8780592ca8257b3e27b5ed33678711d4be7299d82638955d4f8db2c44d52"} Feb 17 16:08:42 crc kubenswrapper[4672]: I0217 16:08:42.348001 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hc52r" event={"ID":"acbcb77a-c8a8-4ec1-80ab-727db7919906","Type":"ContainerStarted","Data":"24b938829398ad3613fc187683fbe8e2d981b6188931f466d5a7d3cbad317041"} Feb 17 16:08:42 crc kubenswrapper[4672]: I0217 16:08:42.614404 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9vqqf"] Feb 17 16:08:42 crc kubenswrapper[4672]: I0217 16:08:42.615530 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9vqqf" Feb 17 16:08:42 crc kubenswrapper[4672]: I0217 16:08:42.620837 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9vqqf"] Feb 17 16:08:42 crc kubenswrapper[4672]: I0217 16:08:42.623333 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 17 16:08:42 crc kubenswrapper[4672]: I0217 16:08:42.632735 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvkpz\" (UniqueName: \"kubernetes.io/projected/206bc097-eb65-4755-89c7-4e230efa5224-kube-api-access-wvkpz\") pod \"redhat-marketplace-9vqqf\" (UID: \"206bc097-eb65-4755-89c7-4e230efa5224\") " pod="openshift-marketplace/redhat-marketplace-9vqqf" Feb 17 16:08:42 crc kubenswrapper[4672]: I0217 16:08:42.632787 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/206bc097-eb65-4755-89c7-4e230efa5224-catalog-content\") pod \"redhat-marketplace-9vqqf\" (UID: \"206bc097-eb65-4755-89c7-4e230efa5224\") " pod="openshift-marketplace/redhat-marketplace-9vqqf" Feb 17 16:08:42 crc kubenswrapper[4672]: I0217 16:08:42.632810 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/206bc097-eb65-4755-89c7-4e230efa5224-utilities\") pod \"redhat-marketplace-9vqqf\" (UID: \"206bc097-eb65-4755-89c7-4e230efa5224\") " pod="openshift-marketplace/redhat-marketplace-9vqqf" Feb 17 16:08:42 crc kubenswrapper[4672]: I0217 16:08:42.733833 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvkpz\" (UniqueName: \"kubernetes.io/projected/206bc097-eb65-4755-89c7-4e230efa5224-kube-api-access-wvkpz\") pod \"redhat-marketplace-9vqqf\" (UID: \"206bc097-eb65-4755-89c7-4e230efa5224\") " pod="openshift-marketplace/redhat-marketplace-9vqqf" Feb 17 16:08:42 crc kubenswrapper[4672]: I0217 16:08:42.733901 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/206bc097-eb65-4755-89c7-4e230efa5224-catalog-content\") pod \"redhat-marketplace-9vqqf\" (UID: \"206bc097-eb65-4755-89c7-4e230efa5224\") " pod="openshift-marketplace/redhat-marketplace-9vqqf" Feb 17 16:08:42 crc kubenswrapper[4672]: I0217 16:08:42.733924 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/206bc097-eb65-4755-89c7-4e230efa5224-utilities\") pod \"redhat-marketplace-9vqqf\" (UID: \"206bc097-eb65-4755-89c7-4e230efa5224\") " pod="openshift-marketplace/redhat-marketplace-9vqqf" Feb 17 16:08:42 crc kubenswrapper[4672]: I0217 16:08:42.734412 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/206bc097-eb65-4755-89c7-4e230efa5224-utilities\") pod \"redhat-marketplace-9vqqf\" (UID: \"206bc097-eb65-4755-89c7-4e230efa5224\") " pod="openshift-marketplace/redhat-marketplace-9vqqf" Feb 17 16:08:42 crc kubenswrapper[4672]: I0217 16:08:42.734746 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/206bc097-eb65-4755-89c7-4e230efa5224-catalog-content\") pod \"redhat-marketplace-9vqqf\" (UID: \"206bc097-eb65-4755-89c7-4e230efa5224\") " pod="openshift-marketplace/redhat-marketplace-9vqqf" Feb 17 16:08:42 crc kubenswrapper[4672]: I0217 16:08:42.755559 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvkpz\" (UniqueName: \"kubernetes.io/projected/206bc097-eb65-4755-89c7-4e230efa5224-kube-api-access-wvkpz\") pod \"redhat-marketplace-9vqqf\" (UID: \"206bc097-eb65-4755-89c7-4e230efa5224\") " pod="openshift-marketplace/redhat-marketplace-9vqqf" Feb 17 16:08:42 crc kubenswrapper[4672]: I0217 16:08:42.933185 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9vqqf" Feb 17 16:08:43 crc kubenswrapper[4672]: I0217 16:08:43.355272 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9vqqf"] Feb 17 16:08:43 crc kubenswrapper[4672]: I0217 16:08:43.358039 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xgg8j" event={"ID":"7a4859b6-916a-4d5b-beac-e8bb32161f6a","Type":"ContainerStarted","Data":"8e1f92ad9ea54acfe7e8e60856086e00780901c5ba2f83d1256d02b32d042dab"} Feb 17 16:08:43 crc kubenswrapper[4672]: I0217 16:08:43.816592 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bxph5"] Feb 17 16:08:43 crc kubenswrapper[4672]: I0217 16:08:43.818109 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bxph5" Feb 17 16:08:43 crc kubenswrapper[4672]: I0217 16:08:43.820013 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 17 16:08:43 crc kubenswrapper[4672]: I0217 16:08:43.834103 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bxph5"] Feb 17 16:08:43 crc kubenswrapper[4672]: I0217 16:08:43.854142 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/972d555a-9790-4a25-aa88-5ab896b52f5c-catalog-content\") pod \"redhat-operators-bxph5\" (UID: \"972d555a-9790-4a25-aa88-5ab896b52f5c\") " pod="openshift-marketplace/redhat-operators-bxph5" Feb 17 16:08:43 crc kubenswrapper[4672]: I0217 16:08:43.854222 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/972d555a-9790-4a25-aa88-5ab896b52f5c-utilities\") pod \"redhat-operators-bxph5\" (UID: \"972d555a-9790-4a25-aa88-5ab896b52f5c\") " pod="openshift-marketplace/redhat-operators-bxph5" Feb 17 16:08:43 crc kubenswrapper[4672]: I0217 16:08:43.854323 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwzl4\" (UniqueName: \"kubernetes.io/projected/972d555a-9790-4a25-aa88-5ab896b52f5c-kube-api-access-hwzl4\") pod \"redhat-operators-bxph5\" (UID: \"972d555a-9790-4a25-aa88-5ab896b52f5c\") " pod="openshift-marketplace/redhat-operators-bxph5" Feb 17 16:08:43 crc kubenswrapper[4672]: I0217 16:08:43.955237 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwzl4\" (UniqueName: \"kubernetes.io/projected/972d555a-9790-4a25-aa88-5ab896b52f5c-kube-api-access-hwzl4\") pod \"redhat-operators-bxph5\" (UID: 
\"972d555a-9790-4a25-aa88-5ab896b52f5c\") " pod="openshift-marketplace/redhat-operators-bxph5" Feb 17 16:08:43 crc kubenswrapper[4672]: I0217 16:08:43.955322 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/972d555a-9790-4a25-aa88-5ab896b52f5c-catalog-content\") pod \"redhat-operators-bxph5\" (UID: \"972d555a-9790-4a25-aa88-5ab896b52f5c\") " pod="openshift-marketplace/redhat-operators-bxph5" Feb 17 16:08:43 crc kubenswrapper[4672]: I0217 16:08:43.955349 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/972d555a-9790-4a25-aa88-5ab896b52f5c-utilities\") pod \"redhat-operators-bxph5\" (UID: \"972d555a-9790-4a25-aa88-5ab896b52f5c\") " pod="openshift-marketplace/redhat-operators-bxph5" Feb 17 16:08:43 crc kubenswrapper[4672]: I0217 16:08:43.956015 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/972d555a-9790-4a25-aa88-5ab896b52f5c-utilities\") pod \"redhat-operators-bxph5\" (UID: \"972d555a-9790-4a25-aa88-5ab896b52f5c\") " pod="openshift-marketplace/redhat-operators-bxph5" Feb 17 16:08:43 crc kubenswrapper[4672]: I0217 16:08:43.956637 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/972d555a-9790-4a25-aa88-5ab896b52f5c-catalog-content\") pod \"redhat-operators-bxph5\" (UID: \"972d555a-9790-4a25-aa88-5ab896b52f5c\") " pod="openshift-marketplace/redhat-operators-bxph5" Feb 17 16:08:43 crc kubenswrapper[4672]: I0217 16:08:43.978220 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwzl4\" (UniqueName: \"kubernetes.io/projected/972d555a-9790-4a25-aa88-5ab896b52f5c-kube-api-access-hwzl4\") pod \"redhat-operators-bxph5\" (UID: \"972d555a-9790-4a25-aa88-5ab896b52f5c\") " 
pod="openshift-marketplace/redhat-operators-bxph5" Feb 17 16:08:44 crc kubenswrapper[4672]: I0217 16:08:44.184213 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bxph5" Feb 17 16:08:44 crc kubenswrapper[4672]: I0217 16:08:44.370484 4672 generic.go:334] "Generic (PLEG): container finished" podID="7a4859b6-916a-4d5b-beac-e8bb32161f6a" containerID="8e1f92ad9ea54acfe7e8e60856086e00780901c5ba2f83d1256d02b32d042dab" exitCode=0 Feb 17 16:08:44 crc kubenswrapper[4672]: I0217 16:08:44.370552 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xgg8j" event={"ID":"7a4859b6-916a-4d5b-beac-e8bb32161f6a","Type":"ContainerDied","Data":"8e1f92ad9ea54acfe7e8e60856086e00780901c5ba2f83d1256d02b32d042dab"} Feb 17 16:08:44 crc kubenswrapper[4672]: I0217 16:08:44.376239 4672 generic.go:334] "Generic (PLEG): container finished" podID="206bc097-eb65-4755-89c7-4e230efa5224" containerID="35071d639b2409c8dfbf1341930d7b5e248e0f171ddf069b6f2e6ff892e024d7" exitCode=0 Feb 17 16:08:44 crc kubenswrapper[4672]: I0217 16:08:44.376277 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9vqqf" event={"ID":"206bc097-eb65-4755-89c7-4e230efa5224","Type":"ContainerDied","Data":"35071d639b2409c8dfbf1341930d7b5e248e0f171ddf069b6f2e6ff892e024d7"} Feb 17 16:08:44 crc kubenswrapper[4672]: I0217 16:08:44.376302 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9vqqf" event={"ID":"206bc097-eb65-4755-89c7-4e230efa5224","Type":"ContainerStarted","Data":"1a368251d4cfdca217370a9f34b4bd0544e8d345cf1865c8fdbcccc4a83069e2"} Feb 17 16:08:44 crc kubenswrapper[4672]: I0217 16:08:44.472237 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-rxbxr" Feb 17 16:08:44 crc kubenswrapper[4672]: I0217 16:08:44.524418 4672 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lnsj7"] Feb 17 16:08:44 crc kubenswrapper[4672]: I0217 16:08:44.602997 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bxph5"] Feb 17 16:08:44 crc kubenswrapper[4672]: W0217 16:08:44.611543 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod972d555a_9790_4a25_aa88_5ab896b52f5c.slice/crio-d2c9c176d91d84ecfbe2c349cd997da795faa2cb52de8c13c5cfc6d2581707f9 WatchSource:0}: Error finding container d2c9c176d91d84ecfbe2c349cd997da795faa2cb52de8c13c5cfc6d2581707f9: Status 404 returned error can't find the container with id d2c9c176d91d84ecfbe2c349cd997da795faa2cb52de8c13c5cfc6d2581707f9 Feb 17 16:08:45 crc kubenswrapper[4672]: I0217 16:08:45.396728 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xgg8j" event={"ID":"7a4859b6-916a-4d5b-beac-e8bb32161f6a","Type":"ContainerStarted","Data":"92a82175425f3e27c69e5e9424328ca04114ef2448ab03d41cfaa95320eb822e"} Feb 17 16:08:45 crc kubenswrapper[4672]: I0217 16:08:45.402196 4672 generic.go:334] "Generic (PLEG): container finished" podID="972d555a-9790-4a25-aa88-5ab896b52f5c" containerID="692eb013d269b67d9379c68ef63ff1bf1e4fe16ffff5eded120e9b19beea7bbc" exitCode=0 Feb 17 16:08:45 crc kubenswrapper[4672]: I0217 16:08:45.402241 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bxph5" event={"ID":"972d555a-9790-4a25-aa88-5ab896b52f5c","Type":"ContainerDied","Data":"692eb013d269b67d9379c68ef63ff1bf1e4fe16ffff5eded120e9b19beea7bbc"} Feb 17 16:08:45 crc kubenswrapper[4672]: I0217 16:08:45.402267 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bxph5" 
event={"ID":"972d555a-9790-4a25-aa88-5ab896b52f5c","Type":"ContainerStarted","Data":"d2c9c176d91d84ecfbe2c349cd997da795faa2cb52de8c13c5cfc6d2581707f9"} Feb 17 16:08:45 crc kubenswrapper[4672]: I0217 16:08:45.413816 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xgg8j" podStartSLOduration=2.956931837 podStartE2EDuration="5.413794826s" podCreationTimestamp="2026-02-17 16:08:40 +0000 UTC" firstStartedPulling="2026-02-17 16:08:42.347986818 +0000 UTC m=+331.102075550" lastFinishedPulling="2026-02-17 16:08:44.804849807 +0000 UTC m=+333.558938539" observedRunningTime="2026-02-17 16:08:45.4135617 +0000 UTC m=+334.167650442" watchObservedRunningTime="2026-02-17 16:08:45.413794826 +0000 UTC m=+334.167883558" Feb 17 16:08:45 crc kubenswrapper[4672]: E0217 16:08:45.664525 4672 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod206bc097_eb65_4755_89c7_4e230efa5224.slice/crio-conmon-e2df41b4bb42791ff0ba32f0265b9446ec98394281e92e3953c18186be95892f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod206bc097_eb65_4755_89c7_4e230efa5224.slice/crio-e2df41b4bb42791ff0ba32f0265b9446ec98394281e92e3953c18186be95892f.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:08:46 crc kubenswrapper[4672]: I0217 16:08:46.414504 4672 generic.go:334] "Generic (PLEG): container finished" podID="206bc097-eb65-4755-89c7-4e230efa5224" containerID="e2df41b4bb42791ff0ba32f0265b9446ec98394281e92e3953c18186be95892f" exitCode=0 Feb 17 16:08:46 crc kubenswrapper[4672]: I0217 16:08:46.414807 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9vqqf" 
event={"ID":"206bc097-eb65-4755-89c7-4e230efa5224","Type":"ContainerDied","Data":"e2df41b4bb42791ff0ba32f0265b9446ec98394281e92e3953c18186be95892f"} Feb 17 16:08:46 crc kubenswrapper[4672]: I0217 16:08:46.424205 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bxph5" event={"ID":"972d555a-9790-4a25-aa88-5ab896b52f5c","Type":"ContainerStarted","Data":"af6cf610d4161da22cdd1dfbb687ed3be5de4b5caaa1cdbe28f500ca3b8a2b3a"} Feb 17 16:08:47 crc kubenswrapper[4672]: I0217 16:08:47.431430 4672 generic.go:334] "Generic (PLEG): container finished" podID="972d555a-9790-4a25-aa88-5ab896b52f5c" containerID="af6cf610d4161da22cdd1dfbb687ed3be5de4b5caaa1cdbe28f500ca3b8a2b3a" exitCode=0 Feb 17 16:08:47 crc kubenswrapper[4672]: I0217 16:08:47.431470 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bxph5" event={"ID":"972d555a-9790-4a25-aa88-5ab896b52f5c","Type":"ContainerDied","Data":"af6cf610d4161da22cdd1dfbb687ed3be5de4b5caaa1cdbe28f500ca3b8a2b3a"} Feb 17 16:08:49 crc kubenswrapper[4672]: I0217 16:08:49.443850 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9vqqf" event={"ID":"206bc097-eb65-4755-89c7-4e230efa5224","Type":"ContainerStarted","Data":"a78c3d70c604424d27e5a711d4a7b65b0a0d2c306ba06ec7285834eadae3be5b"} Feb 17 16:08:49 crc kubenswrapper[4672]: I0217 16:08:49.445852 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hc52r" event={"ID":"acbcb77a-c8a8-4ec1-80ab-727db7919906","Type":"ContainerStarted","Data":"f76e7862fd31c9ed323dee72891317aa2899598c674421c44395215078c97cd8"} Feb 17 16:08:49 crc kubenswrapper[4672]: I0217 16:08:49.448696 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bxph5" 
event={"ID":"972d555a-9790-4a25-aa88-5ab896b52f5c","Type":"ContainerStarted","Data":"06768e1efee32883bf430a78d392180ff76832c10f5f864baa0c73fe912f8bef"} Feb 17 16:08:49 crc kubenswrapper[4672]: I0217 16:08:49.480886 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9vqqf" podStartSLOduration=3.287415273 podStartE2EDuration="7.480861928s" podCreationTimestamp="2026-02-17 16:08:42 +0000 UTC" firstStartedPulling="2026-02-17 16:08:44.389142731 +0000 UTC m=+333.143231463" lastFinishedPulling="2026-02-17 16:08:48.582589386 +0000 UTC m=+337.336678118" observedRunningTime="2026-02-17 16:08:49.463727062 +0000 UTC m=+338.217815794" watchObservedRunningTime="2026-02-17 16:08:49.480861928 +0000 UTC m=+338.234950660" Feb 17 16:08:49 crc kubenswrapper[4672]: I0217 16:08:49.482207 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bxph5" podStartSLOduration=2.935135253 podStartE2EDuration="6.482200163s" podCreationTimestamp="2026-02-17 16:08:43 +0000 UTC" firstStartedPulling="2026-02-17 16:08:45.404483228 +0000 UTC m=+334.158571960" lastFinishedPulling="2026-02-17 16:08:48.951548138 +0000 UTC m=+337.705636870" observedRunningTime="2026-02-17 16:08:49.479018859 +0000 UTC m=+338.233107591" watchObservedRunningTime="2026-02-17 16:08:49.482200163 +0000 UTC m=+338.236288895" Feb 17 16:08:50 crc kubenswrapper[4672]: I0217 16:08:50.456268 4672 generic.go:334] "Generic (PLEG): container finished" podID="acbcb77a-c8a8-4ec1-80ab-727db7919906" containerID="f76e7862fd31c9ed323dee72891317aa2899598c674421c44395215078c97cd8" exitCode=0 Feb 17 16:08:50 crc kubenswrapper[4672]: I0217 16:08:50.456361 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hc52r" event={"ID":"acbcb77a-c8a8-4ec1-80ab-727db7919906","Type":"ContainerDied","Data":"f76e7862fd31c9ed323dee72891317aa2899598c674421c44395215078c97cd8"} Feb 17 
16:08:51 crc kubenswrapper[4672]: I0217 16:08:51.478459 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hc52r" event={"ID":"acbcb77a-c8a8-4ec1-80ab-727db7919906","Type":"ContainerStarted","Data":"33109fecc432d66e06ba9781d2d4c72a9d6e92f5a7141f579da1f6462de2f290"} Feb 17 16:08:51 crc kubenswrapper[4672]: I0217 16:08:51.505841 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hc52r" podStartSLOduration=2.874454632 podStartE2EDuration="11.50582275s" podCreationTimestamp="2026-02-17 16:08:40 +0000 UTC" firstStartedPulling="2026-02-17 16:08:42.349092057 +0000 UTC m=+331.103180789" lastFinishedPulling="2026-02-17 16:08:50.980460175 +0000 UTC m=+339.734548907" observedRunningTime="2026-02-17 16:08:51.501162025 +0000 UTC m=+340.255250767" watchObservedRunningTime="2026-02-17 16:08:51.50582275 +0000 UTC m=+340.259911492" Feb 17 16:08:51 crc kubenswrapper[4672]: I0217 16:08:51.634131 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xgg8j" Feb 17 16:08:51 crc kubenswrapper[4672]: I0217 16:08:51.634466 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xgg8j" Feb 17 16:08:51 crc kubenswrapper[4672]: I0217 16:08:51.675919 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xgg8j" Feb 17 16:08:52 crc kubenswrapper[4672]: I0217 16:08:52.531797 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xgg8j" Feb 17 16:08:52 crc kubenswrapper[4672]: I0217 16:08:52.934005 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9vqqf" Feb 17 16:08:52 crc kubenswrapper[4672]: I0217 16:08:52.934048 4672 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9vqqf" Feb 17 16:08:52 crc kubenswrapper[4672]: I0217 16:08:52.977948 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9vqqf" Feb 17 16:08:54 crc kubenswrapper[4672]: I0217 16:08:54.184549 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bxph5" Feb 17 16:08:54 crc kubenswrapper[4672]: I0217 16:08:54.184953 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bxph5" Feb 17 16:08:55 crc kubenswrapper[4672]: I0217 16:08:55.223261 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bxph5" podUID="972d555a-9790-4a25-aa88-5ab896b52f5c" containerName="registry-server" probeResult="failure" output=< Feb 17 16:08:55 crc kubenswrapper[4672]: timeout: failed to connect service ":50051" within 1s Feb 17 16:08:55 crc kubenswrapper[4672]: > Feb 17 16:08:57 crc kubenswrapper[4672]: I0217 16:08:57.566123 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:08:57 crc kubenswrapper[4672]: I0217 16:08:57.566561 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:09:01 crc kubenswrapper[4672]: I0217 16:09:01.139816 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-hc52r" Feb 17 16:09:01 crc kubenswrapper[4672]: I0217 16:09:01.140770 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hc52r" Feb 17 16:09:01 crc kubenswrapper[4672]: I0217 16:09:01.181287 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hc52r" Feb 17 16:09:01 crc kubenswrapper[4672]: I0217 16:09:01.573964 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hc52r" Feb 17 16:09:02 crc kubenswrapper[4672]: I0217 16:09:02.995919 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9vqqf" Feb 17 16:09:04 crc kubenswrapper[4672]: I0217 16:09:04.238738 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bxph5" Feb 17 16:09:04 crc kubenswrapper[4672]: I0217 16:09:04.278431 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bxph5" Feb 17 16:09:09 crc kubenswrapper[4672]: I0217 16:09:09.560372 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" podUID="5f278b6e-e162-448c-b57c-b3e66a6b0e5e" containerName="registry" containerID="cri-o://c9a1429a2be36ea34dff64d7a9469eec05267cb713aa011ccaf924d7be19e709" gracePeriod=30 Feb 17 16:09:10 crc kubenswrapper[4672]: I0217 16:09:10.004885 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" Feb 17 16:09:10 crc kubenswrapper[4672]: I0217 16:09:10.058311 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-registry-tls\") pod \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " Feb 17 16:09:10 crc kubenswrapper[4672]: I0217 16:09:10.058384 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-bound-sa-token\") pod \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " Feb 17 16:09:10 crc kubenswrapper[4672]: I0217 16:09:10.058470 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-registry-certificates\") pod \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " Feb 17 16:09:10 crc kubenswrapper[4672]: I0217 16:09:10.058562 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-installation-pull-secrets\") pod \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " Feb 17 16:09:10 crc kubenswrapper[4672]: I0217 16:09:10.058733 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " Feb 17 16:09:10 crc kubenswrapper[4672]: I0217 16:09:10.058770 4672 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-gsv79\" (UniqueName: \"kubernetes.io/projected/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-kube-api-access-gsv79\") pod \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " Feb 17 16:09:10 crc kubenswrapper[4672]: I0217 16:09:10.058796 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-trusted-ca\") pod \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " Feb 17 16:09:10 crc kubenswrapper[4672]: I0217 16:09:10.058829 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-ca-trust-extracted\") pod \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\" (UID: \"5f278b6e-e162-448c-b57c-b3e66a6b0e5e\") " Feb 17 16:09:10 crc kubenswrapper[4672]: I0217 16:09:10.059772 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "5f278b6e-e162-448c-b57c-b3e66a6b0e5e" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:09:10 crc kubenswrapper[4672]: I0217 16:09:10.060375 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "5f278b6e-e162-448c-b57c-b3e66a6b0e5e" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:09:10 crc kubenswrapper[4672]: I0217 16:09:10.066335 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "5f278b6e-e162-448c-b57c-b3e66a6b0e5e" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:09:10 crc kubenswrapper[4672]: I0217 16:09:10.066802 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "5f278b6e-e162-448c-b57c-b3e66a6b0e5e" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:09:10 crc kubenswrapper[4672]: I0217 16:09:10.067688 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "5f278b6e-e162-448c-b57c-b3e66a6b0e5e" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:09:10 crc kubenswrapper[4672]: I0217 16:09:10.068377 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-kube-api-access-gsv79" (OuterVolumeSpecName: "kube-api-access-gsv79") pod "5f278b6e-e162-448c-b57c-b3e66a6b0e5e" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e"). InnerVolumeSpecName "kube-api-access-gsv79". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:09:10 crc kubenswrapper[4672]: I0217 16:09:10.076074 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "5f278b6e-e162-448c-b57c-b3e66a6b0e5e" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:09:10 crc kubenswrapper[4672]: I0217 16:09:10.088099 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "5f278b6e-e162-448c-b57c-b3e66a6b0e5e" (UID: "5f278b6e-e162-448c-b57c-b3e66a6b0e5e"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:09:10 crc kubenswrapper[4672]: I0217 16:09:10.160336 4672 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 17 16:09:10 crc kubenswrapper[4672]: I0217 16:09:10.160392 4672 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 17 16:09:10 crc kubenswrapper[4672]: I0217 16:09:10.160405 4672 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 16:09:10 crc kubenswrapper[4672]: I0217 16:09:10.160418 4672 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 17 16:09:10 crc kubenswrapper[4672]: I0217 16:09:10.160434 4672 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 17 16:09:10 crc kubenswrapper[4672]: I0217 16:09:10.160446 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsv79\" (UniqueName: \"kubernetes.io/projected/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-kube-api-access-gsv79\") on node \"crc\" DevicePath \"\"" Feb 17 16:09:10 crc kubenswrapper[4672]: I0217 16:09:10.160457 4672 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5f278b6e-e162-448c-b57c-b3e66a6b0e5e-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:09:10 crc kubenswrapper[4672]: I0217 16:09:10.591660 4672 generic.go:334] "Generic (PLEG): container finished" podID="5f278b6e-e162-448c-b57c-b3e66a6b0e5e" containerID="c9a1429a2be36ea34dff64d7a9469eec05267cb713aa011ccaf924d7be19e709" exitCode=0 Feb 17 16:09:10 crc kubenswrapper[4672]: I0217 16:09:10.591770 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" Feb 17 16:09:10 crc kubenswrapper[4672]: I0217 16:09:10.592099 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" event={"ID":"5f278b6e-e162-448c-b57c-b3e66a6b0e5e","Type":"ContainerDied","Data":"c9a1429a2be36ea34dff64d7a9469eec05267cb713aa011ccaf924d7be19e709"} Feb 17 16:09:10 crc kubenswrapper[4672]: I0217 16:09:10.593133 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lnsj7" event={"ID":"5f278b6e-e162-448c-b57c-b3e66a6b0e5e","Type":"ContainerDied","Data":"552340a77d779e2fa3f0de31a19bbb26e8a3aac539c01f6da71fdf2824815137"} Feb 17 16:09:10 crc kubenswrapper[4672]: I0217 16:09:10.593240 4672 scope.go:117] "RemoveContainer" containerID="c9a1429a2be36ea34dff64d7a9469eec05267cb713aa011ccaf924d7be19e709" Feb 17 16:09:10 crc kubenswrapper[4672]: I0217 16:09:10.626406 4672 scope.go:117] "RemoveContainer" containerID="c9a1429a2be36ea34dff64d7a9469eec05267cb713aa011ccaf924d7be19e709" Feb 17 16:09:10 crc kubenswrapper[4672]: E0217 16:09:10.626956 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9a1429a2be36ea34dff64d7a9469eec05267cb713aa011ccaf924d7be19e709\": container with ID starting with c9a1429a2be36ea34dff64d7a9469eec05267cb713aa011ccaf924d7be19e709 not found: ID does not exist" containerID="c9a1429a2be36ea34dff64d7a9469eec05267cb713aa011ccaf924d7be19e709" Feb 17 16:09:10 crc kubenswrapper[4672]: I0217 16:09:10.626993 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9a1429a2be36ea34dff64d7a9469eec05267cb713aa011ccaf924d7be19e709"} err="failed to get container status \"c9a1429a2be36ea34dff64d7a9469eec05267cb713aa011ccaf924d7be19e709\": rpc error: code = NotFound desc = could not find container 
\"c9a1429a2be36ea34dff64d7a9469eec05267cb713aa011ccaf924d7be19e709\": container with ID starting with c9a1429a2be36ea34dff64d7a9469eec05267cb713aa011ccaf924d7be19e709 not found: ID does not exist" Feb 17 16:09:10 crc kubenswrapper[4672]: I0217 16:09:10.657874 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lnsj7"] Feb 17 16:09:10 crc kubenswrapper[4672]: I0217 16:09:10.664349 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lnsj7"] Feb 17 16:09:11 crc kubenswrapper[4672]: I0217 16:09:11.952882 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f278b6e-e162-448c-b57c-b3e66a6b0e5e" path="/var/lib/kubelet/pods/5f278b6e-e162-448c-b57c-b3e66a6b0e5e/volumes" Feb 17 16:09:27 crc kubenswrapper[4672]: I0217 16:09:27.566367 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:09:27 crc kubenswrapper[4672]: I0217 16:09:27.567140 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:09:57 crc kubenswrapper[4672]: I0217 16:09:57.566158 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:09:57 crc kubenswrapper[4672]: I0217 16:09:57.567005 4672 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:09:57 crc kubenswrapper[4672]: I0217 16:09:57.567074 4672 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" Feb 17 16:09:57 crc kubenswrapper[4672]: I0217 16:09:57.567902 4672 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0f3ebbc55d351841753f9bbb525ff0055c2fbedda4c7326b4b7118110b3bdaef"} pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:09:57 crc kubenswrapper[4672]: I0217 16:09:57.567995 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" containerID="cri-o://0f3ebbc55d351841753f9bbb525ff0055c2fbedda4c7326b4b7118110b3bdaef" gracePeriod=600 Feb 17 16:09:57 crc kubenswrapper[4672]: I0217 16:09:57.893680 4672 generic.go:334] "Generic (PLEG): container finished" podID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerID="0f3ebbc55d351841753f9bbb525ff0055c2fbedda4c7326b4b7118110b3bdaef" exitCode=0 Feb 17 16:09:57 crc kubenswrapper[4672]: I0217 16:09:57.893811 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerDied","Data":"0f3ebbc55d351841753f9bbb525ff0055c2fbedda4c7326b4b7118110b3bdaef"} Feb 17 16:09:57 crc kubenswrapper[4672]: I0217 16:09:57.895054 
4672 scope.go:117] "RemoveContainer" containerID="796310e24dd456ebe7e3886fd47d09ecf942ee5939fc71da9839c3d89b4a45e1" Feb 17 16:09:59 crc kubenswrapper[4672]: I0217 16:09:59.908073 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerStarted","Data":"bab58c994d52018fa7903af25af1b3a89988c7cbe182c6c29193a105200dcb08"} Feb 17 16:12:12 crc kubenswrapper[4672]: I0217 16:12:12.248963 4672 scope.go:117] "RemoveContainer" containerID="8469d573b653a8806c853a1173e0645c56ba099dcf83caa84671c857c933e1b9" Feb 17 16:12:27 crc kubenswrapper[4672]: I0217 16:12:27.565853 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:12:27 crc kubenswrapper[4672]: I0217 16:12:27.567635 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:12:57 crc kubenswrapper[4672]: I0217 16:12:57.565946 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:12:57 crc kubenswrapper[4672]: I0217 16:12:57.566723 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:13:27 crc kubenswrapper[4672]: I0217 16:13:27.566329 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:13:27 crc kubenswrapper[4672]: I0217 16:13:27.567012 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:13:27 crc kubenswrapper[4672]: I0217 16:13:27.567191 4672 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" Feb 17 16:13:27 crc kubenswrapper[4672]: I0217 16:13:27.568698 4672 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bab58c994d52018fa7903af25af1b3a89988c7cbe182c6c29193a105200dcb08"} pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:13:27 crc kubenswrapper[4672]: I0217 16:13:27.568803 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" containerID="cri-o://bab58c994d52018fa7903af25af1b3a89988c7cbe182c6c29193a105200dcb08" gracePeriod=600 Feb 17 16:13:28 crc kubenswrapper[4672]: I0217 
16:13:28.276746 4672 generic.go:334] "Generic (PLEG): container finished" podID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerID="bab58c994d52018fa7903af25af1b3a89988c7cbe182c6c29193a105200dcb08" exitCode=0 Feb 17 16:13:28 crc kubenswrapper[4672]: I0217 16:13:28.277088 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerDied","Data":"bab58c994d52018fa7903af25af1b3a89988c7cbe182c6c29193a105200dcb08"} Feb 17 16:13:28 crc kubenswrapper[4672]: I0217 16:13:28.277123 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerStarted","Data":"a296cbbb1d99319f19a06f749b112d1a27b0616f6d5daa613b86b37f30657f19"} Feb 17 16:13:28 crc kubenswrapper[4672]: I0217 16:13:28.277142 4672 scope.go:117] "RemoveContainer" containerID="0f3ebbc55d351841753f9bbb525ff0055c2fbedda4c7326b4b7118110b3bdaef" Feb 17 16:13:55 crc kubenswrapper[4672]: I0217 16:13:55.848817 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086wb2z"] Feb 17 16:13:55 crc kubenswrapper[4672]: E0217 16:13:55.849870 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f278b6e-e162-448c-b57c-b3e66a6b0e5e" containerName="registry" Feb 17 16:13:55 crc kubenswrapper[4672]: I0217 16:13:55.849899 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f278b6e-e162-448c-b57c-b3e66a6b0e5e" containerName="registry" Feb 17 16:13:55 crc kubenswrapper[4672]: I0217 16:13:55.850111 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f278b6e-e162-448c-b57c-b3e66a6b0e5e" containerName="registry" Feb 17 16:13:55 crc kubenswrapper[4672]: I0217 16:13:55.851587 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086wb2z" Feb 17 16:13:55 crc kubenswrapper[4672]: I0217 16:13:55.855337 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 17 16:13:55 crc kubenswrapper[4672]: I0217 16:13:55.867078 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086wb2z"] Feb 17 16:13:56 crc kubenswrapper[4672]: I0217 16:13:56.024973 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6e724011-a0fa-4eb7-a10b-8199435d4478-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086wb2z\" (UID: \"6e724011-a0fa-4eb7-a10b-8199435d4478\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086wb2z" Feb 17 16:13:56 crc kubenswrapper[4672]: I0217 16:13:56.025296 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h89g5\" (UniqueName: \"kubernetes.io/projected/6e724011-a0fa-4eb7-a10b-8199435d4478-kube-api-access-h89g5\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086wb2z\" (UID: \"6e724011-a0fa-4eb7-a10b-8199435d4478\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086wb2z" Feb 17 16:13:56 crc kubenswrapper[4672]: I0217 16:13:56.025577 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6e724011-a0fa-4eb7-a10b-8199435d4478-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086wb2z\" (UID: \"6e724011-a0fa-4eb7-a10b-8199435d4478\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086wb2z" Feb 17 16:13:56 crc kubenswrapper[4672]: 
I0217 16:13:56.126418 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6e724011-a0fa-4eb7-a10b-8199435d4478-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086wb2z\" (UID: \"6e724011-a0fa-4eb7-a10b-8199435d4478\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086wb2z" Feb 17 16:13:56 crc kubenswrapper[4672]: I0217 16:13:56.126499 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6e724011-a0fa-4eb7-a10b-8199435d4478-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086wb2z\" (UID: \"6e724011-a0fa-4eb7-a10b-8199435d4478\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086wb2z" Feb 17 16:13:56 crc kubenswrapper[4672]: I0217 16:13:56.126551 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h89g5\" (UniqueName: \"kubernetes.io/projected/6e724011-a0fa-4eb7-a10b-8199435d4478-kube-api-access-h89g5\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086wb2z\" (UID: \"6e724011-a0fa-4eb7-a10b-8199435d4478\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086wb2z" Feb 17 16:13:56 crc kubenswrapper[4672]: I0217 16:13:56.127408 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6e724011-a0fa-4eb7-a10b-8199435d4478-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086wb2z\" (UID: \"6e724011-a0fa-4eb7-a10b-8199435d4478\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086wb2z" Feb 17 16:13:56 crc kubenswrapper[4672]: I0217 16:13:56.127491 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/6e724011-a0fa-4eb7-a10b-8199435d4478-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086wb2z\" (UID: \"6e724011-a0fa-4eb7-a10b-8199435d4478\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086wb2z" Feb 17 16:13:56 crc kubenswrapper[4672]: I0217 16:13:56.148228 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h89g5\" (UniqueName: \"kubernetes.io/projected/6e724011-a0fa-4eb7-a10b-8199435d4478-kube-api-access-h89g5\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086wb2z\" (UID: \"6e724011-a0fa-4eb7-a10b-8199435d4478\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086wb2z" Feb 17 16:13:56 crc kubenswrapper[4672]: I0217 16:13:56.186550 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086wb2z" Feb 17 16:13:56 crc kubenswrapper[4672]: I0217 16:13:56.497346 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086wb2z"] Feb 17 16:13:57 crc kubenswrapper[4672]: I0217 16:13:57.466987 4672 generic.go:334] "Generic (PLEG): container finished" podID="6e724011-a0fa-4eb7-a10b-8199435d4478" containerID="516fb9a56659b0c03eb31fb759fde7baa62250d69d23e546a54a176b91ac642a" exitCode=0 Feb 17 16:13:57 crc kubenswrapper[4672]: I0217 16:13:57.467060 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086wb2z" event={"ID":"6e724011-a0fa-4eb7-a10b-8199435d4478","Type":"ContainerDied","Data":"516fb9a56659b0c03eb31fb759fde7baa62250d69d23e546a54a176b91ac642a"} Feb 17 16:13:57 crc kubenswrapper[4672]: I0217 16:13:57.467796 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086wb2z" event={"ID":"6e724011-a0fa-4eb7-a10b-8199435d4478","Type":"ContainerStarted","Data":"b052a7cf2ff8982560fb8e8d2bf175a23b52de5b151e9981be500d0b64608f20"} Feb 17 16:13:57 crc kubenswrapper[4672]: I0217 16:13:57.470259 4672 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:13:59 crc kubenswrapper[4672]: I0217 16:13:59.485854 4672 generic.go:334] "Generic (PLEG): container finished" podID="6e724011-a0fa-4eb7-a10b-8199435d4478" containerID="93a5e09d2d1d10eb07afdb0d97df58ecadd7d314a8fc88ebdaf821fe48e2d218" exitCode=0 Feb 17 16:13:59 crc kubenswrapper[4672]: I0217 16:13:59.485963 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086wb2z" event={"ID":"6e724011-a0fa-4eb7-a10b-8199435d4478","Type":"ContainerDied","Data":"93a5e09d2d1d10eb07afdb0d97df58ecadd7d314a8fc88ebdaf821fe48e2d218"} Feb 17 16:14:00 crc kubenswrapper[4672]: I0217 16:14:00.494983 4672 generic.go:334] "Generic (PLEG): container finished" podID="6e724011-a0fa-4eb7-a10b-8199435d4478" containerID="0524320c1722658c72f3d5aa354e0a4c2685c41a5f5c1d828c2a945c26e3f681" exitCode=0 Feb 17 16:14:00 crc kubenswrapper[4672]: I0217 16:14:00.495111 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086wb2z" event={"ID":"6e724011-a0fa-4eb7-a10b-8199435d4478","Type":"ContainerDied","Data":"0524320c1722658c72f3d5aa354e0a4c2685c41a5f5c1d828c2a945c26e3f681"} Feb 17 16:14:01 crc kubenswrapper[4672]: I0217 16:14:01.758022 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086wb2z" Feb 17 16:14:01 crc kubenswrapper[4672]: I0217 16:14:01.903899 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6e724011-a0fa-4eb7-a10b-8199435d4478-bundle\") pod \"6e724011-a0fa-4eb7-a10b-8199435d4478\" (UID: \"6e724011-a0fa-4eb7-a10b-8199435d4478\") " Feb 17 16:14:01 crc kubenswrapper[4672]: I0217 16:14:01.903975 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6e724011-a0fa-4eb7-a10b-8199435d4478-util\") pod \"6e724011-a0fa-4eb7-a10b-8199435d4478\" (UID: \"6e724011-a0fa-4eb7-a10b-8199435d4478\") " Feb 17 16:14:01 crc kubenswrapper[4672]: I0217 16:14:01.904056 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h89g5\" (UniqueName: \"kubernetes.io/projected/6e724011-a0fa-4eb7-a10b-8199435d4478-kube-api-access-h89g5\") pod \"6e724011-a0fa-4eb7-a10b-8199435d4478\" (UID: \"6e724011-a0fa-4eb7-a10b-8199435d4478\") " Feb 17 16:14:01 crc kubenswrapper[4672]: I0217 16:14:01.907023 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e724011-a0fa-4eb7-a10b-8199435d4478-bundle" (OuterVolumeSpecName: "bundle") pod "6e724011-a0fa-4eb7-a10b-8199435d4478" (UID: "6e724011-a0fa-4eb7-a10b-8199435d4478"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:14:01 crc kubenswrapper[4672]: I0217 16:14:01.912022 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e724011-a0fa-4eb7-a10b-8199435d4478-kube-api-access-h89g5" (OuterVolumeSpecName: "kube-api-access-h89g5") pod "6e724011-a0fa-4eb7-a10b-8199435d4478" (UID: "6e724011-a0fa-4eb7-a10b-8199435d4478"). InnerVolumeSpecName "kube-api-access-h89g5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:14:01 crc kubenswrapper[4672]: I0217 16:14:01.919074 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e724011-a0fa-4eb7-a10b-8199435d4478-util" (OuterVolumeSpecName: "util") pod "6e724011-a0fa-4eb7-a10b-8199435d4478" (UID: "6e724011-a0fa-4eb7-a10b-8199435d4478"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:14:02 crc kubenswrapper[4672]: I0217 16:14:02.005784 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h89g5\" (UniqueName: \"kubernetes.io/projected/6e724011-a0fa-4eb7-a10b-8199435d4478-kube-api-access-h89g5\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:02 crc kubenswrapper[4672]: I0217 16:14:02.005867 4672 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6e724011-a0fa-4eb7-a10b-8199435d4478-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:02 crc kubenswrapper[4672]: I0217 16:14:02.005884 4672 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6e724011-a0fa-4eb7-a10b-8199435d4478-util\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:02 crc kubenswrapper[4672]: I0217 16:14:02.511972 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086wb2z" event={"ID":"6e724011-a0fa-4eb7-a10b-8199435d4478","Type":"ContainerDied","Data":"b052a7cf2ff8982560fb8e8d2bf175a23b52de5b151e9981be500d0b64608f20"} Feb 17 16:14:02 crc kubenswrapper[4672]: I0217 16:14:02.512028 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b052a7cf2ff8982560fb8e8d2bf175a23b52de5b151e9981be500d0b64608f20" Feb 17 16:14:02 crc kubenswrapper[4672]: I0217 16:14:02.512084 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086wb2z" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.228880 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4f9wc"] Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.229822 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="ovn-controller" containerID="cri-o://42df411df161c300edce4e00a51babea135433c68a188f56d438df2665f7a6b7" gracePeriod=30 Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.229962 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://e0495a1c586c33fb22e3cff8faaf427f9183f30459e1c4e23d840487fa21c7db" gracePeriod=30 Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.230041 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="sbdb" containerID="cri-o://24931b90f0faa42a5320df38225b1fc1c4ba21ddb6b43c1ab84047c9178dfea4" gracePeriod=30 Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.229990 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="kube-rbac-proxy-node" containerID="cri-o://eb856f7806f65441a26295986d6ee3b1dee692087510547ea5680d7600a5981a" gracePeriod=30 Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.230014 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" 
containerName="ovn-acl-logging" containerID="cri-o://2a42ffc66b52e8db408035eb1e3fd03670217a0a1cabe42a972d0dfeb2308997" gracePeriod=30 Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.230318 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="nbdb" containerID="cri-o://0fbde5168a81766f8e318ce4ebfc055bce7e199abc47db20e3b1767e3fb49c16" gracePeriod=30 Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.230996 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="northd" containerID="cri-o://d969b7db6e8da6d14b08bf6e462b846aeaa463703d040d8dee87e847f4fca314" gracePeriod=30 Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.274838 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="ovnkube-controller" containerID="cri-o://01204ff3ef7dac68664104a29bdd8064f75c4fe495d66b88961534a94f68e9ae" gracePeriod=30 Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.563932 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5jjr2_edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe/kube-multus/2.log" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.565243 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5jjr2_edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe/kube-multus/1.log" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.565337 4672 generic.go:334] "Generic (PLEG): container finished" podID="edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe" containerID="397bf27fea3d27b5db56ccb8cc9ebd9e8401dd883e3c22d9d2e8f76a4f63c577" exitCode=2 Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.565440 4672 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-multus/multus-5jjr2" event={"ID":"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe","Type":"ContainerDied","Data":"397bf27fea3d27b5db56ccb8cc9ebd9e8401dd883e3c22d9d2e8f76a4f63c577"} Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.565526 4672 scope.go:117] "RemoveContainer" containerID="f7f95d42a206c5e9b8e4b546034635db87f5912e543fea24cccde60817511eaa" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.566058 4672 scope.go:117] "RemoveContainer" containerID="397bf27fea3d27b5db56ccb8cc9ebd9e8401dd883e3c22d9d2e8f76a4f63c577" Feb 17 16:14:07 crc kubenswrapper[4672]: E0217 16:14:07.566337 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-5jjr2_openshift-multus(edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe)\"" pod="openshift-multus/multus-5jjr2" podUID="edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.572078 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4f9wc_98a910a1-b5f0-4f34-9d76-6474c753e8e7/ovnkube-controller/3.log" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.587425 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4f9wc_98a910a1-b5f0-4f34-9d76-6474c753e8e7/ovn-acl-logging/0.log" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.588316 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4f9wc_98a910a1-b5f0-4f34-9d76-6474c753e8e7/ovn-controller/0.log" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.588849 4672 generic.go:334] "Generic (PLEG): container finished" podID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerID="01204ff3ef7dac68664104a29bdd8064f75c4fe495d66b88961534a94f68e9ae" exitCode=0 Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.588900 4672 
generic.go:334] "Generic (PLEG): container finished" podID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerID="24931b90f0faa42a5320df38225b1fc1c4ba21ddb6b43c1ab84047c9178dfea4" exitCode=0 Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.588908 4672 generic.go:334] "Generic (PLEG): container finished" podID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerID="0fbde5168a81766f8e318ce4ebfc055bce7e199abc47db20e3b1767e3fb49c16" exitCode=0 Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.588917 4672 generic.go:334] "Generic (PLEG): container finished" podID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerID="e0495a1c586c33fb22e3cff8faaf427f9183f30459e1c4e23d840487fa21c7db" exitCode=0 Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.588923 4672 generic.go:334] "Generic (PLEG): container finished" podID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerID="eb856f7806f65441a26295986d6ee3b1dee692087510547ea5680d7600a5981a" exitCode=0 Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.588931 4672 generic.go:334] "Generic (PLEG): container finished" podID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerID="2a42ffc66b52e8db408035eb1e3fd03670217a0a1cabe42a972d0dfeb2308997" exitCode=143 Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.588941 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" event={"ID":"98a910a1-b5f0-4f34-9d76-6474c753e8e7","Type":"ContainerDied","Data":"01204ff3ef7dac68664104a29bdd8064f75c4fe495d66b88961534a94f68e9ae"} Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.589018 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" event={"ID":"98a910a1-b5f0-4f34-9d76-6474c753e8e7","Type":"ContainerDied","Data":"24931b90f0faa42a5320df38225b1fc1c4ba21ddb6b43c1ab84047c9178dfea4"} Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.589054 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" event={"ID":"98a910a1-b5f0-4f34-9d76-6474c753e8e7","Type":"ContainerDied","Data":"0fbde5168a81766f8e318ce4ebfc055bce7e199abc47db20e3b1767e3fb49c16"} Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.588963 4672 generic.go:334] "Generic (PLEG): container finished" podID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerID="42df411df161c300edce4e00a51babea135433c68a188f56d438df2665f7a6b7" exitCode=143 Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.589082 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" event={"ID":"98a910a1-b5f0-4f34-9d76-6474c753e8e7","Type":"ContainerDied","Data":"e0495a1c586c33fb22e3cff8faaf427f9183f30459e1c4e23d840487fa21c7db"} Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.589106 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" event={"ID":"98a910a1-b5f0-4f34-9d76-6474c753e8e7","Type":"ContainerDied","Data":"eb856f7806f65441a26295986d6ee3b1dee692087510547ea5680d7600a5981a"} Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.589134 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" event={"ID":"98a910a1-b5f0-4f34-9d76-6474c753e8e7","Type":"ContainerDied","Data":"2a42ffc66b52e8db408035eb1e3fd03670217a0a1cabe42a972d0dfeb2308997"} Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.589156 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" event={"ID":"98a910a1-b5f0-4f34-9d76-6474c753e8e7","Type":"ContainerDied","Data":"42df411df161c300edce4e00a51babea135433c68a188f56d438df2665f7a6b7"} Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.598228 4672 scope.go:117] "RemoveContainer" containerID="432ab3a5ae33d1f4de114a70bbc405e9c0346cbd9c935aeac9e44d0586f569d1" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.603687 4672 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4f9wc_98a910a1-b5f0-4f34-9d76-6474c753e8e7/ovn-acl-logging/0.log" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.607601 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4f9wc_98a910a1-b5f0-4f34-9d76-6474c753e8e7/ovn-controller/0.log" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.608385 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.687593 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-h5dlz"] Feb 17 16:14:07 crc kubenswrapper[4672]: E0217 16:14:07.687801 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="ovn-acl-logging" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.687812 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="ovn-acl-logging" Feb 17 16:14:07 crc kubenswrapper[4672]: E0217 16:14:07.687825 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e724011-a0fa-4eb7-a10b-8199435d4478" containerName="util" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.687831 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e724011-a0fa-4eb7-a10b-8199435d4478" containerName="util" Feb 17 16:14:07 crc kubenswrapper[4672]: E0217 16:14:07.687838 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="nbdb" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.687845 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="nbdb" Feb 17 16:14:07 crc kubenswrapper[4672]: E0217 16:14:07.687854 4672 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="sbdb" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.687859 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="sbdb" Feb 17 16:14:07 crc kubenswrapper[4672]: E0217 16:14:07.687869 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="kubecfg-setup" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.687875 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="kubecfg-setup" Feb 17 16:14:07 crc kubenswrapper[4672]: E0217 16:14:07.687881 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e724011-a0fa-4eb7-a10b-8199435d4478" containerName="pull" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.687886 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e724011-a0fa-4eb7-a10b-8199435d4478" containerName="pull" Feb 17 16:14:07 crc kubenswrapper[4672]: E0217 16:14:07.687893 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="ovnkube-controller" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.687900 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="ovnkube-controller" Feb 17 16:14:07 crc kubenswrapper[4672]: E0217 16:14:07.687906 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="ovnkube-controller" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.687912 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="ovnkube-controller" Feb 17 16:14:07 crc kubenswrapper[4672]: E0217 16:14:07.687920 4672 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6e724011-a0fa-4eb7-a10b-8199435d4478" containerName="extract" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.687927 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e724011-a0fa-4eb7-a10b-8199435d4478" containerName="extract" Feb 17 16:14:07 crc kubenswrapper[4672]: E0217 16:14:07.687935 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="ovnkube-controller" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.687941 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="ovnkube-controller" Feb 17 16:14:07 crc kubenswrapper[4672]: E0217 16:14:07.687949 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="ovnkube-controller" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.687955 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="ovnkube-controller" Feb 17 16:14:07 crc kubenswrapper[4672]: E0217 16:14:07.687964 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="kube-rbac-proxy-node" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.687969 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="kube-rbac-proxy-node" Feb 17 16:14:07 crc kubenswrapper[4672]: E0217 16:14:07.687976 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="northd" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.687981 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="northd" Feb 17 16:14:07 crc kubenswrapper[4672]: E0217 16:14:07.687989 4672 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="ovn-controller" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.687995 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="ovn-controller" Feb 17 16:14:07 crc kubenswrapper[4672]: E0217 16:14:07.688004 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="kube-rbac-proxy-ovn-metrics" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.688010 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="kube-rbac-proxy-ovn-metrics" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.688089 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="ovn-controller" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.688098 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="ovnkube-controller" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.688105 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="kube-rbac-proxy-ovn-metrics" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.688110 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="ovnkube-controller" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.688118 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="ovnkube-controller" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.688124 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e724011-a0fa-4eb7-a10b-8199435d4478" containerName="extract" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.688130 4672 
memory_manager.go:354] "RemoveStaleState removing state" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="northd" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.688137 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="sbdb" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.688145 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="nbdb" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.688153 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="ovn-acl-logging" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.688161 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="kube-rbac-proxy-node" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.688169 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="ovnkube-controller" Feb 17 16:14:07 crc kubenswrapper[4672]: E0217 16:14:07.688259 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="ovnkube-controller" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.688266 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="ovnkube-controller" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.688377 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerName="ovnkube-controller" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.690146 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.789151 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-cni-bin\") pod \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.789203 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/98a910a1-b5f0-4f34-9d76-6474c753e8e7-env-overrides\") pod \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.789238 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-var-lib-openvswitch\") pod \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.789257 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-kubelet\") pod \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.789281 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.789300 4672 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-run-systemd\") pod \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.789318 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-run-netns\") pod \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.789307 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "98a910a1-b5f0-4f34-9d76-6474c753e8e7" (UID: "98a910a1-b5f0-4f34-9d76-6474c753e8e7"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.789337 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/98a910a1-b5f0-4f34-9d76-6474c753e8e7-ovnkube-script-lib\") pod \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.789354 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-systemd-units\") pod \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.789372 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/98a910a1-b5f0-4f34-9d76-6474c753e8e7-ovnkube-config\") pod \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.789369 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "98a910a1-b5f0-4f34-9d76-6474c753e8e7" (UID: "98a910a1-b5f0-4f34-9d76-6474c753e8e7"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.789396 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-run-openvswitch\") pod \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.789422 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t59bf\" (UniqueName: \"kubernetes.io/projected/98a910a1-b5f0-4f34-9d76-6474c753e8e7-kube-api-access-t59bf\") pod \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.789447 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-cni-netd\") pod \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.789471 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-node-log\") pod \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.789496 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-log-socket\") pod \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.789526 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-run-ovn\") pod \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.789549 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-etc-openvswitch\") pod \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.789563 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-slash\") pod \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.789602 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/98a910a1-b5f0-4f34-9d76-6474c753e8e7-ovn-node-metrics-cert\") pod \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.789626 4672 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-run-ovn-kubernetes\") pod \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\" (UID: \"98a910a1-b5f0-4f34-9d76-6474c753e8e7\") " Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.789747 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-run-openvswitch\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.789770 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-host-run-ovn-kubernetes\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.789791 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-host-kubelet\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.789814 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbx4t\" (UniqueName: \"kubernetes.io/projected/be5d000b-3c7e-4f76-b6da-448f7985f0cc-kube-api-access-bbx4t\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 
16:14:07.789837 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-run-systemd\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.789855 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-host-cni-bin\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.789871 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-host-slash\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.789888 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/be5d000b-3c7e-4f76-b6da-448f7985f0cc-ovnkube-config\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.789903 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-etc-openvswitch\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 
16:14:07.789917 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-host-run-netns\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.789936 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-var-lib-openvswitch\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.789965 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-run-ovn\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.789986 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/be5d000b-3c7e-4f76-b6da-448f7985f0cc-env-overrides\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.790014 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.790037 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/be5d000b-3c7e-4f76-b6da-448f7985f0cc-ovn-node-metrics-cert\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.790059 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-systemd-units\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.790077 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-node-log\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.790091 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-log-socket\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.790110 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-host-cni-netd\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.790131 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/be5d000b-3c7e-4f76-b6da-448f7985f0cc-ovnkube-script-lib\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.790190 4672 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.790205 4672 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.789745 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98a910a1-b5f0-4f34-9d76-6474c753e8e7-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "98a910a1-b5f0-4f34-9d76-6474c753e8e7" (UID: "98a910a1-b5f0-4f34-9d76-6474c753e8e7"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.789767 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "98a910a1-b5f0-4f34-9d76-6474c753e8e7" (UID: "98a910a1-b5f0-4f34-9d76-6474c753e8e7"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.789787 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "98a910a1-b5f0-4f34-9d76-6474c753e8e7" (UID: "98a910a1-b5f0-4f34-9d76-6474c753e8e7"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.789811 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-node-log" (OuterVolumeSpecName: "node-log") pod "98a910a1-b5f0-4f34-9d76-6474c753e8e7" (UID: "98a910a1-b5f0-4f34-9d76-6474c753e8e7"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.790285 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-log-socket" (OuterVolumeSpecName: "log-socket") pod "98a910a1-b5f0-4f34-9d76-6474c753e8e7" (UID: "98a910a1-b5f0-4f34-9d76-6474c753e8e7"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.790309 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "98a910a1-b5f0-4f34-9d76-6474c753e8e7" (UID: "98a910a1-b5f0-4f34-9d76-6474c753e8e7"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.790325 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "98a910a1-b5f0-4f34-9d76-6474c753e8e7" (UID: "98a910a1-b5f0-4f34-9d76-6474c753e8e7"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.790337 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-slash" (OuterVolumeSpecName: "host-slash") pod "98a910a1-b5f0-4f34-9d76-6474c753e8e7" (UID: "98a910a1-b5f0-4f34-9d76-6474c753e8e7"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.790501 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "98a910a1-b5f0-4f34-9d76-6474c753e8e7" (UID: "98a910a1-b5f0-4f34-9d76-6474c753e8e7"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.790735 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "98a910a1-b5f0-4f34-9d76-6474c753e8e7" (UID: "98a910a1-b5f0-4f34-9d76-6474c753e8e7"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.790776 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "98a910a1-b5f0-4f34-9d76-6474c753e8e7" (UID: "98a910a1-b5f0-4f34-9d76-6474c753e8e7"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.790794 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98a910a1-b5f0-4f34-9d76-6474c753e8e7-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "98a910a1-b5f0-4f34-9d76-6474c753e8e7" (UID: "98a910a1-b5f0-4f34-9d76-6474c753e8e7"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.790846 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98a910a1-b5f0-4f34-9d76-6474c753e8e7-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "98a910a1-b5f0-4f34-9d76-6474c753e8e7" (UID: "98a910a1-b5f0-4f34-9d76-6474c753e8e7"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.790858 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "98a910a1-b5f0-4f34-9d76-6474c753e8e7" (UID: "98a910a1-b5f0-4f34-9d76-6474c753e8e7"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.790881 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "98a910a1-b5f0-4f34-9d76-6474c753e8e7" (UID: "98a910a1-b5f0-4f34-9d76-6474c753e8e7"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.795030 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98a910a1-b5f0-4f34-9d76-6474c753e8e7-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "98a910a1-b5f0-4f34-9d76-6474c753e8e7" (UID: "98a910a1-b5f0-4f34-9d76-6474c753e8e7"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.798823 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98a910a1-b5f0-4f34-9d76-6474c753e8e7-kube-api-access-t59bf" (OuterVolumeSpecName: "kube-api-access-t59bf") pod "98a910a1-b5f0-4f34-9d76-6474c753e8e7" (UID: "98a910a1-b5f0-4f34-9d76-6474c753e8e7"). InnerVolumeSpecName "kube-api-access-t59bf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.830561 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "98a910a1-b5f0-4f34-9d76-6474c753e8e7" (UID: "98a910a1-b5f0-4f34-9d76-6474c753e8e7"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.891032 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-run-systemd\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.891076 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-host-cni-bin\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.891092 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-host-slash\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.891112 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/be5d000b-3c7e-4f76-b6da-448f7985f0cc-ovnkube-config\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.891132 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-host-run-netns\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: 
I0217 16:14:07.891147 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-etc-openvswitch\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.891165 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-var-lib-openvswitch\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.891189 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-run-ovn\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.891192 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-run-systemd\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.891213 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-host-slash\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.891246 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-host-run-netns\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.891205 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/be5d000b-3c7e-4f76-b6da-448f7985f0cc-env-overrides\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.891393 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-host-cni-bin\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.891412 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-var-lib-openvswitch\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.891428 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-etc-openvswitch\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.891443 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-run-ovn\") pod 
\"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.891529 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.891582 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/be5d000b-3c7e-4f76-b6da-448f7985f0cc-ovn-node-metrics-cert\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.891615 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-systemd-units\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.891633 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-log-socket\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.891656 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-node-log\") pod \"ovnkube-node-h5dlz\" (UID: 
\"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.891683 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-host-cni-netd\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.891729 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/be5d000b-3c7e-4f76-b6da-448f7985f0cc-ovnkube-script-lib\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.891777 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/be5d000b-3c7e-4f76-b6da-448f7985f0cc-env-overrides\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.891797 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-run-openvswitch\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.891779 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-run-openvswitch\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.891822 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.891831 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-host-run-ovn-kubernetes\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.891852 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-host-kubelet\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.891873 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbx4t\" (UniqueName: \"kubernetes.io/projected/be5d000b-3c7e-4f76-b6da-448f7985f0cc-kube-api-access-bbx4t\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.891916 4672 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 
16:14:07.891926 4672 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/98a910a1-b5f0-4f34-9d76-6474c753e8e7-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.891935 4672 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.891944 4672 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.891953 4672 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.891961 4672 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.891963 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/be5d000b-3c7e-4f76-b6da-448f7985f0cc-ovnkube-config\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.891970 4672 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/98a910a1-b5f0-4f34-9d76-6474c753e8e7-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 
16:14:07.892014 4672 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.892029 4672 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/98a910a1-b5f0-4f34-9d76-6474c753e8e7-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.892042 4672 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.892056 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t59bf\" (UniqueName: \"kubernetes.io/projected/98a910a1-b5f0-4f34-9d76-6474c753e8e7-kube-api-access-t59bf\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.892069 4672 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.892081 4672 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-node-log\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.892092 4672 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-log-socket\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.892105 4672 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.892117 4672 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.892129 4672 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/98a910a1-b5f0-4f34-9d76-6474c753e8e7-host-slash\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.892144 4672 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/98a910a1-b5f0-4f34-9d76-6474c753e8e7-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.892180 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-node-log\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.892211 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-systemd-units\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.892238 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-host-run-ovn-kubernetes\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.892244 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-log-socket\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.892243 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-host-cni-netd\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.892271 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/be5d000b-3c7e-4f76-b6da-448f7985f0cc-host-kubelet\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.892780 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/be5d000b-3c7e-4f76-b6da-448f7985f0cc-ovnkube-script-lib\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.896036 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/be5d000b-3c7e-4f76-b6da-448f7985f0cc-ovn-node-metrics-cert\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:07 crc kubenswrapper[4672]: I0217 16:14:07.916747 
4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbx4t\" (UniqueName: \"kubernetes.io/projected/be5d000b-3c7e-4f76-b6da-448f7985f0cc-kube-api-access-bbx4t\") pod \"ovnkube-node-h5dlz\" (UID: \"be5d000b-3c7e-4f76-b6da-448f7985f0cc\") " pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:08 crc kubenswrapper[4672]: I0217 16:14:08.002632 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:08 crc kubenswrapper[4672]: W0217 16:14:08.022120 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe5d000b_3c7e_4f76_b6da_448f7985f0cc.slice/crio-62f390c87796c977216fc97531303191a6453bf6a8f44cfbd1faabb8d9dc9a28 WatchSource:0}: Error finding container 62f390c87796c977216fc97531303191a6453bf6a8f44cfbd1faabb8d9dc9a28: Status 404 returned error can't find the container with id 62f390c87796c977216fc97531303191a6453bf6a8f44cfbd1faabb8d9dc9a28 Feb 17 16:14:08 crc kubenswrapper[4672]: I0217 16:14:08.596578 4672 generic.go:334] "Generic (PLEG): container finished" podID="be5d000b-3c7e-4f76-b6da-448f7985f0cc" containerID="db0e50786a0f584cb6da8c1cf6bdc36e3f4b6a56010f71c477761a745dda2990" exitCode=0 Feb 17 16:14:08 crc kubenswrapper[4672]: I0217 16:14:08.597788 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" event={"ID":"be5d000b-3c7e-4f76-b6da-448f7985f0cc","Type":"ContainerDied","Data":"db0e50786a0f584cb6da8c1cf6bdc36e3f4b6a56010f71c477761a745dda2990"} Feb 17 16:14:08 crc kubenswrapper[4672]: I0217 16:14:08.597951 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" event={"ID":"be5d000b-3c7e-4f76-b6da-448f7985f0cc","Type":"ContainerStarted","Data":"62f390c87796c977216fc97531303191a6453bf6a8f44cfbd1faabb8d9dc9a28"} Feb 17 16:14:08 crc kubenswrapper[4672]: I0217 
16:14:08.605313 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5jjr2_edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe/kube-multus/2.log" Feb 17 16:14:08 crc kubenswrapper[4672]: I0217 16:14:08.610286 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4f9wc_98a910a1-b5f0-4f34-9d76-6474c753e8e7/ovn-acl-logging/0.log" Feb 17 16:14:08 crc kubenswrapper[4672]: I0217 16:14:08.610978 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4f9wc_98a910a1-b5f0-4f34-9d76-6474c753e8e7/ovn-controller/0.log" Feb 17 16:14:08 crc kubenswrapper[4672]: I0217 16:14:08.611649 4672 generic.go:334] "Generic (PLEG): container finished" podID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" containerID="d969b7db6e8da6d14b08bf6e462b846aeaa463703d040d8dee87e847f4fca314" exitCode=0 Feb 17 16:14:08 crc kubenswrapper[4672]: I0217 16:14:08.611714 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" event={"ID":"98a910a1-b5f0-4f34-9d76-6474c753e8e7","Type":"ContainerDied","Data":"d969b7db6e8da6d14b08bf6e462b846aeaa463703d040d8dee87e847f4fca314"} Feb 17 16:14:08 crc kubenswrapper[4672]: I0217 16:14:08.611760 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" event={"ID":"98a910a1-b5f0-4f34-9d76-6474c753e8e7","Type":"ContainerDied","Data":"41b2fda982128d8c218ff73b6e891ee27d3fd8ccd248cbe0532cdc1e1b626af4"} Feb 17 16:14:08 crc kubenswrapper[4672]: I0217 16:14:08.611772 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4f9wc" Feb 17 16:14:08 crc kubenswrapper[4672]: I0217 16:14:08.611792 4672 scope.go:117] "RemoveContainer" containerID="01204ff3ef7dac68664104a29bdd8064f75c4fe495d66b88961534a94f68e9ae" Feb 17 16:14:08 crc kubenswrapper[4672]: I0217 16:14:08.629923 4672 scope.go:117] "RemoveContainer" containerID="24931b90f0faa42a5320df38225b1fc1c4ba21ddb6b43c1ab84047c9178dfea4" Feb 17 16:14:08 crc kubenswrapper[4672]: I0217 16:14:08.643098 4672 scope.go:117] "RemoveContainer" containerID="0fbde5168a81766f8e318ce4ebfc055bce7e199abc47db20e3b1767e3fb49c16" Feb 17 16:14:08 crc kubenswrapper[4672]: I0217 16:14:08.673613 4672 scope.go:117] "RemoveContainer" containerID="d969b7db6e8da6d14b08bf6e462b846aeaa463703d040d8dee87e847f4fca314" Feb 17 16:14:08 crc kubenswrapper[4672]: I0217 16:14:08.704688 4672 scope.go:117] "RemoveContainer" containerID="e0495a1c586c33fb22e3cff8faaf427f9183f30459e1c4e23d840487fa21c7db" Feb 17 16:14:08 crc kubenswrapper[4672]: I0217 16:14:08.725730 4672 scope.go:117] "RemoveContainer" containerID="eb856f7806f65441a26295986d6ee3b1dee692087510547ea5680d7600a5981a" Feb 17 16:14:08 crc kubenswrapper[4672]: I0217 16:14:08.734478 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4f9wc"] Feb 17 16:14:08 crc kubenswrapper[4672]: I0217 16:14:08.737551 4672 scope.go:117] "RemoveContainer" containerID="2a42ffc66b52e8db408035eb1e3fd03670217a0a1cabe42a972d0dfeb2308997" Feb 17 16:14:08 crc kubenswrapper[4672]: I0217 16:14:08.739495 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4f9wc"] Feb 17 16:14:08 crc kubenswrapper[4672]: I0217 16:14:08.765569 4672 scope.go:117] "RemoveContainer" containerID="42df411df161c300edce4e00a51babea135433c68a188f56d438df2665f7a6b7" Feb 17 16:14:08 crc kubenswrapper[4672]: I0217 16:14:08.786766 4672 scope.go:117] "RemoveContainer" 
containerID="3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d" Feb 17 16:14:09 crc kubenswrapper[4672]: I0217 16:14:09.672116 4672 scope.go:117] "RemoveContainer" containerID="01204ff3ef7dac68664104a29bdd8064f75c4fe495d66b88961534a94f68e9ae" Feb 17 16:14:09 crc kubenswrapper[4672]: E0217 16:14:09.672493 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01204ff3ef7dac68664104a29bdd8064f75c4fe495d66b88961534a94f68e9ae\": container with ID starting with 01204ff3ef7dac68664104a29bdd8064f75c4fe495d66b88961534a94f68e9ae not found: ID does not exist" containerID="01204ff3ef7dac68664104a29bdd8064f75c4fe495d66b88961534a94f68e9ae" Feb 17 16:14:09 crc kubenswrapper[4672]: I0217 16:14:09.672544 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01204ff3ef7dac68664104a29bdd8064f75c4fe495d66b88961534a94f68e9ae"} err="failed to get container status \"01204ff3ef7dac68664104a29bdd8064f75c4fe495d66b88961534a94f68e9ae\": rpc error: code = NotFound desc = could not find container \"01204ff3ef7dac68664104a29bdd8064f75c4fe495d66b88961534a94f68e9ae\": container with ID starting with 01204ff3ef7dac68664104a29bdd8064f75c4fe495d66b88961534a94f68e9ae not found: ID does not exist" Feb 17 16:14:09 crc kubenswrapper[4672]: I0217 16:14:09.672570 4672 scope.go:117] "RemoveContainer" containerID="24931b90f0faa42a5320df38225b1fc1c4ba21ddb6b43c1ab84047c9178dfea4" Feb 17 16:14:09 crc kubenswrapper[4672]: E0217 16:14:09.672805 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24931b90f0faa42a5320df38225b1fc1c4ba21ddb6b43c1ab84047c9178dfea4\": container with ID starting with 24931b90f0faa42a5320df38225b1fc1c4ba21ddb6b43c1ab84047c9178dfea4 not found: ID does not exist" containerID="24931b90f0faa42a5320df38225b1fc1c4ba21ddb6b43c1ab84047c9178dfea4" Feb 17 16:14:09 crc 
kubenswrapper[4672]: I0217 16:14:09.672826 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24931b90f0faa42a5320df38225b1fc1c4ba21ddb6b43c1ab84047c9178dfea4"} err="failed to get container status \"24931b90f0faa42a5320df38225b1fc1c4ba21ddb6b43c1ab84047c9178dfea4\": rpc error: code = NotFound desc = could not find container \"24931b90f0faa42a5320df38225b1fc1c4ba21ddb6b43c1ab84047c9178dfea4\": container with ID starting with 24931b90f0faa42a5320df38225b1fc1c4ba21ddb6b43c1ab84047c9178dfea4 not found: ID does not exist" Feb 17 16:14:09 crc kubenswrapper[4672]: I0217 16:14:09.672839 4672 scope.go:117] "RemoveContainer" containerID="0fbde5168a81766f8e318ce4ebfc055bce7e199abc47db20e3b1767e3fb49c16" Feb 17 16:14:09 crc kubenswrapper[4672]: E0217 16:14:09.673185 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fbde5168a81766f8e318ce4ebfc055bce7e199abc47db20e3b1767e3fb49c16\": container with ID starting with 0fbde5168a81766f8e318ce4ebfc055bce7e199abc47db20e3b1767e3fb49c16 not found: ID does not exist" containerID="0fbde5168a81766f8e318ce4ebfc055bce7e199abc47db20e3b1767e3fb49c16" Feb 17 16:14:09 crc kubenswrapper[4672]: I0217 16:14:09.673210 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fbde5168a81766f8e318ce4ebfc055bce7e199abc47db20e3b1767e3fb49c16"} err="failed to get container status \"0fbde5168a81766f8e318ce4ebfc055bce7e199abc47db20e3b1767e3fb49c16\": rpc error: code = NotFound desc = could not find container \"0fbde5168a81766f8e318ce4ebfc055bce7e199abc47db20e3b1767e3fb49c16\": container with ID starting with 0fbde5168a81766f8e318ce4ebfc055bce7e199abc47db20e3b1767e3fb49c16 not found: ID does not exist" Feb 17 16:14:09 crc kubenswrapper[4672]: I0217 16:14:09.673226 4672 scope.go:117] "RemoveContainer" containerID="d969b7db6e8da6d14b08bf6e462b846aeaa463703d040d8dee87e847f4fca314" Feb 17 
16:14:09 crc kubenswrapper[4672]: E0217 16:14:09.673469 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d969b7db6e8da6d14b08bf6e462b846aeaa463703d040d8dee87e847f4fca314\": container with ID starting with d969b7db6e8da6d14b08bf6e462b846aeaa463703d040d8dee87e847f4fca314 not found: ID does not exist" containerID="d969b7db6e8da6d14b08bf6e462b846aeaa463703d040d8dee87e847f4fca314" Feb 17 16:14:09 crc kubenswrapper[4672]: I0217 16:14:09.673493 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d969b7db6e8da6d14b08bf6e462b846aeaa463703d040d8dee87e847f4fca314"} err="failed to get container status \"d969b7db6e8da6d14b08bf6e462b846aeaa463703d040d8dee87e847f4fca314\": rpc error: code = NotFound desc = could not find container \"d969b7db6e8da6d14b08bf6e462b846aeaa463703d040d8dee87e847f4fca314\": container with ID starting with d969b7db6e8da6d14b08bf6e462b846aeaa463703d040d8dee87e847f4fca314 not found: ID does not exist" Feb 17 16:14:09 crc kubenswrapper[4672]: I0217 16:14:09.673520 4672 scope.go:117] "RemoveContainer" containerID="e0495a1c586c33fb22e3cff8faaf427f9183f30459e1c4e23d840487fa21c7db" Feb 17 16:14:09 crc kubenswrapper[4672]: E0217 16:14:09.673711 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0495a1c586c33fb22e3cff8faaf427f9183f30459e1c4e23d840487fa21c7db\": container with ID starting with e0495a1c586c33fb22e3cff8faaf427f9183f30459e1c4e23d840487fa21c7db not found: ID does not exist" containerID="e0495a1c586c33fb22e3cff8faaf427f9183f30459e1c4e23d840487fa21c7db" Feb 17 16:14:09 crc kubenswrapper[4672]: I0217 16:14:09.673734 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0495a1c586c33fb22e3cff8faaf427f9183f30459e1c4e23d840487fa21c7db"} err="failed to get container status 
\"e0495a1c586c33fb22e3cff8faaf427f9183f30459e1c4e23d840487fa21c7db\": rpc error: code = NotFound desc = could not find container \"e0495a1c586c33fb22e3cff8faaf427f9183f30459e1c4e23d840487fa21c7db\": container with ID starting with e0495a1c586c33fb22e3cff8faaf427f9183f30459e1c4e23d840487fa21c7db not found: ID does not exist" Feb 17 16:14:09 crc kubenswrapper[4672]: I0217 16:14:09.673750 4672 scope.go:117] "RemoveContainer" containerID="eb856f7806f65441a26295986d6ee3b1dee692087510547ea5680d7600a5981a" Feb 17 16:14:09 crc kubenswrapper[4672]: E0217 16:14:09.673931 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb856f7806f65441a26295986d6ee3b1dee692087510547ea5680d7600a5981a\": container with ID starting with eb856f7806f65441a26295986d6ee3b1dee692087510547ea5680d7600a5981a not found: ID does not exist" containerID="eb856f7806f65441a26295986d6ee3b1dee692087510547ea5680d7600a5981a" Feb 17 16:14:09 crc kubenswrapper[4672]: I0217 16:14:09.673961 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb856f7806f65441a26295986d6ee3b1dee692087510547ea5680d7600a5981a"} err="failed to get container status \"eb856f7806f65441a26295986d6ee3b1dee692087510547ea5680d7600a5981a\": rpc error: code = NotFound desc = could not find container \"eb856f7806f65441a26295986d6ee3b1dee692087510547ea5680d7600a5981a\": container with ID starting with eb856f7806f65441a26295986d6ee3b1dee692087510547ea5680d7600a5981a not found: ID does not exist" Feb 17 16:14:09 crc kubenswrapper[4672]: I0217 16:14:09.673973 4672 scope.go:117] "RemoveContainer" containerID="2a42ffc66b52e8db408035eb1e3fd03670217a0a1cabe42a972d0dfeb2308997" Feb 17 16:14:09 crc kubenswrapper[4672]: E0217 16:14:09.674155 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2a42ffc66b52e8db408035eb1e3fd03670217a0a1cabe42a972d0dfeb2308997\": container with ID starting with 2a42ffc66b52e8db408035eb1e3fd03670217a0a1cabe42a972d0dfeb2308997 not found: ID does not exist" containerID="2a42ffc66b52e8db408035eb1e3fd03670217a0a1cabe42a972d0dfeb2308997" Feb 17 16:14:09 crc kubenswrapper[4672]: I0217 16:14:09.674173 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a42ffc66b52e8db408035eb1e3fd03670217a0a1cabe42a972d0dfeb2308997"} err="failed to get container status \"2a42ffc66b52e8db408035eb1e3fd03670217a0a1cabe42a972d0dfeb2308997\": rpc error: code = NotFound desc = could not find container \"2a42ffc66b52e8db408035eb1e3fd03670217a0a1cabe42a972d0dfeb2308997\": container with ID starting with 2a42ffc66b52e8db408035eb1e3fd03670217a0a1cabe42a972d0dfeb2308997 not found: ID does not exist" Feb 17 16:14:09 crc kubenswrapper[4672]: I0217 16:14:09.674192 4672 scope.go:117] "RemoveContainer" containerID="42df411df161c300edce4e00a51babea135433c68a188f56d438df2665f7a6b7" Feb 17 16:14:09 crc kubenswrapper[4672]: E0217 16:14:09.676646 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42df411df161c300edce4e00a51babea135433c68a188f56d438df2665f7a6b7\": container with ID starting with 42df411df161c300edce4e00a51babea135433c68a188f56d438df2665f7a6b7 not found: ID does not exist" containerID="42df411df161c300edce4e00a51babea135433c68a188f56d438df2665f7a6b7" Feb 17 16:14:09 crc kubenswrapper[4672]: I0217 16:14:09.676669 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42df411df161c300edce4e00a51babea135433c68a188f56d438df2665f7a6b7"} err="failed to get container status \"42df411df161c300edce4e00a51babea135433c68a188f56d438df2665f7a6b7\": rpc error: code = NotFound desc = could not find container \"42df411df161c300edce4e00a51babea135433c68a188f56d438df2665f7a6b7\": container with ID 
starting with 42df411df161c300edce4e00a51babea135433c68a188f56d438df2665f7a6b7 not found: ID does not exist" Feb 17 16:14:09 crc kubenswrapper[4672]: I0217 16:14:09.676683 4672 scope.go:117] "RemoveContainer" containerID="3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d" Feb 17 16:14:09 crc kubenswrapper[4672]: E0217 16:14:09.676884 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\": container with ID starting with 3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d not found: ID does not exist" containerID="3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d" Feb 17 16:14:09 crc kubenswrapper[4672]: I0217 16:14:09.676904 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d"} err="failed to get container status \"3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\": rpc error: code = NotFound desc = could not find container \"3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d\": container with ID starting with 3984b1057bc27fc0d60e7537687acbe854b09e1fc5b2cdfb66c4b45e0ac21e6d not found: ID does not exist" Feb 17 16:14:09 crc kubenswrapper[4672]: I0217 16:14:09.951329 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98a910a1-b5f0-4f34-9d76-6474c753e8e7" path="/var/lib/kubelet/pods/98a910a1-b5f0-4f34-9d76-6474c753e8e7/volumes" Feb 17 16:14:10 crc kubenswrapper[4672]: I0217 16:14:10.635986 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" event={"ID":"be5d000b-3c7e-4f76-b6da-448f7985f0cc","Type":"ContainerStarted","Data":"5e73d8b7392a9e54dabe16f3322afda52e279ae4814c8f5c910650e5a2915c49"} Feb 17 16:14:10 crc kubenswrapper[4672]: I0217 16:14:10.636291 4672 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" event={"ID":"be5d000b-3c7e-4f76-b6da-448f7985f0cc","Type":"ContainerStarted","Data":"1b2a091b567bfaf84e73986d5b4ef544d4c955a7d8918eed5ef8e6c667bdf3f8"} Feb 17 16:14:10 crc kubenswrapper[4672]: I0217 16:14:10.636317 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" event={"ID":"be5d000b-3c7e-4f76-b6da-448f7985f0cc","Type":"ContainerStarted","Data":"7ccc636dd5b6a3359d72a591d2a0de391d5293b504aad7db1009442c8219a914"} Feb 17 16:14:10 crc kubenswrapper[4672]: I0217 16:14:10.636327 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" event={"ID":"be5d000b-3c7e-4f76-b6da-448f7985f0cc","Type":"ContainerStarted","Data":"b525a7c9e19287ceeb28a04e1f282b8e1a08a774cc592b575f9ad7231207b33a"} Feb 17 16:14:10 crc kubenswrapper[4672]: I0217 16:14:10.636336 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" event={"ID":"be5d000b-3c7e-4f76-b6da-448f7985f0cc","Type":"ContainerStarted","Data":"755ae43ab8bd133bcea8f7ce10964c9239ae5e68ab32f82633116e12e9ea0265"} Feb 17 16:14:10 crc kubenswrapper[4672]: I0217 16:14:10.636348 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" event={"ID":"be5d000b-3c7e-4f76-b6da-448f7985f0cc","Type":"ContainerStarted","Data":"bbdcac8eced99799ac096f2b5db783883cfdcf6b3d3f3258d61941b1479f12bb"} Feb 17 16:14:13 crc kubenswrapper[4672]: I0217 16:14:13.658489 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" event={"ID":"be5d000b-3c7e-4f76-b6da-448f7985f0cc","Type":"ContainerStarted","Data":"d3ed707cc669099c668ded0c9af9d67bb43abd40f0abdce52b707beaba8865a2"} Feb 17 16:14:13 crc kubenswrapper[4672]: I0217 16:14:13.698132 4672 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-fkmc6"] Feb 17 16:14:13 crc kubenswrapper[4672]: I0217 16:14:13.698759 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fkmc6" Feb 17 16:14:13 crc kubenswrapper[4672]: I0217 16:14:13.700755 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 17 16:14:13 crc kubenswrapper[4672]: I0217 16:14:13.700822 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 17 16:14:13 crc kubenswrapper[4672]: I0217 16:14:13.700840 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-tp79h" Feb 17 16:14:13 crc kubenswrapper[4672]: I0217 16:14:13.808939 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp"] Feb 17 16:14:13 crc kubenswrapper[4672]: I0217 16:14:13.809820 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp" Feb 17 16:14:13 crc kubenswrapper[4672]: I0217 16:14:13.811951 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 17 16:14:13 crc kubenswrapper[4672]: I0217 16:14:13.812197 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-5bqpq" Feb 17 16:14:13 crc kubenswrapper[4672]: I0217 16:14:13.820261 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56"] Feb 17 16:14:13 crc kubenswrapper[4672]: I0217 16:14:13.820941 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56"
Feb 17 16:14:13 crc kubenswrapper[4672]: I0217 16:14:13.873529 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzvbb\" (UniqueName: \"kubernetes.io/projected/78d09a7b-94c0-4d04-a640-a67a065a6aff-kube-api-access-rzvbb\") pod \"obo-prometheus-operator-68bc856cb9-fkmc6\" (UID: \"78d09a7b-94c0-4d04-a640-a67a065a6aff\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fkmc6"
Feb 17 16:14:13 crc kubenswrapper[4672]: I0217 16:14:13.907808 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-gkrbk"]
Feb 17 16:14:13 crc kubenswrapper[4672]: I0217 16:14:13.908504 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-gkrbk"
Feb 17 16:14:13 crc kubenswrapper[4672]: I0217 16:14:13.911493 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-22xrq"
Feb 17 16:14:13 crc kubenswrapper[4672]: I0217 16:14:13.911571 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls"
Feb 17 16:14:13 crc kubenswrapper[4672]: I0217 16:14:13.974579 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/043e8cc1-abfc-4d57-89b8-4d26da7b8a83-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp\" (UID: \"043e8cc1-abfc-4d57-89b8-4d26da7b8a83\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp"
Feb 17 16:14:13 crc kubenswrapper[4672]: I0217 16:14:13.974657 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/de79ed15-243f-4c2a-a09f-b94c69734b33-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56\" (UID: \"de79ed15-243f-4c2a-a09f-b94c69734b33\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56"
Feb 17 16:14:13 crc kubenswrapper[4672]: I0217 16:14:13.974692 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/043e8cc1-abfc-4d57-89b8-4d26da7b8a83-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp\" (UID: \"043e8cc1-abfc-4d57-89b8-4d26da7b8a83\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp"
Feb 17 16:14:13 crc kubenswrapper[4672]: I0217 16:14:13.974723 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/de79ed15-243f-4c2a-a09f-b94c69734b33-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56\" (UID: \"de79ed15-243f-4c2a-a09f-b94c69734b33\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56"
Feb 17 16:14:13 crc kubenswrapper[4672]: I0217 16:14:13.974747 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzvbb\" (UniqueName: \"kubernetes.io/projected/78d09a7b-94c0-4d04-a640-a67a065a6aff-kube-api-access-rzvbb\") pod \"obo-prometheus-operator-68bc856cb9-fkmc6\" (UID: \"78d09a7b-94c0-4d04-a640-a67a065a6aff\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fkmc6"
Feb 17 16:14:14 crc kubenswrapper[4672]: I0217 16:14:14.001028 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzvbb\" (UniqueName: \"kubernetes.io/projected/78d09a7b-94c0-4d04-a640-a67a065a6aff-kube-api-access-rzvbb\") pod \"obo-prometheus-operator-68bc856cb9-fkmc6\" (UID: \"78d09a7b-94c0-4d04-a640-a67a065a6aff\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fkmc6"
Feb 17 16:14:14 crc kubenswrapper[4672]: I0217 16:14:14.013969 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fkmc6"
Feb 17 16:14:14 crc kubenswrapper[4672]: I0217 16:14:14.036217 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-f86bw"]
Feb 17 16:14:14 crc kubenswrapper[4672]: I0217 16:14:14.037134 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-f86bw"
Feb 17 16:14:14 crc kubenswrapper[4672]: I0217 16:14:14.038988 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-mc7lz"
Feb 17 16:14:14 crc kubenswrapper[4672]: E0217 16:14:14.044443 4672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fkmc6_openshift-operators_78d09a7b-94c0-4d04-a640-a67a065a6aff_0(45963655bfa0aeb5beaf538af72405dc7d0dc2f6645c691f8c5a6cf08b5f4ead): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 17 16:14:14 crc kubenswrapper[4672]: E0217 16:14:14.044503 4672 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fkmc6_openshift-operators_78d09a7b-94c0-4d04-a640-a67a065a6aff_0(45963655bfa0aeb5beaf538af72405dc7d0dc2f6645c691f8c5a6cf08b5f4ead): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fkmc6"
Feb 17 16:14:14 crc kubenswrapper[4672]: E0217 16:14:14.044539 4672 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fkmc6_openshift-operators_78d09a7b-94c0-4d04-a640-a67a065a6aff_0(45963655bfa0aeb5beaf538af72405dc7d0dc2f6645c691f8c5a6cf08b5f4ead): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fkmc6"
Feb 17 16:14:14 crc kubenswrapper[4672]: E0217 16:14:14.044579 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-fkmc6_openshift-operators(78d09a7b-94c0-4d04-a640-a67a065a6aff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-fkmc6_openshift-operators(78d09a7b-94c0-4d04-a640-a67a065a6aff)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fkmc6_openshift-operators_78d09a7b-94c0-4d04-a640-a67a065a6aff_0(45963655bfa0aeb5beaf538af72405dc7d0dc2f6645c691f8c5a6cf08b5f4ead): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fkmc6" podUID="78d09a7b-94c0-4d04-a640-a67a065a6aff"
Feb 17 16:14:14 crc kubenswrapper[4672]: I0217 16:14:14.076234 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj8cr\" (UniqueName: \"kubernetes.io/projected/527318fe-5c99-481d-910e-0e0973f7748b-kube-api-access-fj8cr\") pod \"observability-operator-59bdc8b94-gkrbk\" (UID: \"527318fe-5c99-481d-910e-0e0973f7748b\") " pod="openshift-operators/observability-operator-59bdc8b94-gkrbk"
Feb 17 16:14:14 crc kubenswrapper[4672]: I0217 16:14:14.076289 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/043e8cc1-abfc-4d57-89b8-4d26da7b8a83-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp\" (UID: \"043e8cc1-abfc-4d57-89b8-4d26da7b8a83\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp"
Feb 17 16:14:14 crc kubenswrapper[4672]: I0217 16:14:14.076364 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/de79ed15-243f-4c2a-a09f-b94c69734b33-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56\" (UID: \"de79ed15-243f-4c2a-a09f-b94c69734b33\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56"
Feb 17 16:14:14 crc kubenswrapper[4672]: I0217 16:14:14.076401 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/043e8cc1-abfc-4d57-89b8-4d26da7b8a83-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp\" (UID: \"043e8cc1-abfc-4d57-89b8-4d26da7b8a83\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp"
Feb 17 16:14:14 crc kubenswrapper[4672]: I0217 16:14:14.076442 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/de79ed15-243f-4c2a-a09f-b94c69734b33-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56\" (UID: \"de79ed15-243f-4c2a-a09f-b94c69734b33\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56"
Feb 17 16:14:14 crc kubenswrapper[4672]: I0217 16:14:14.076522 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/527318fe-5c99-481d-910e-0e0973f7748b-observability-operator-tls\") pod \"observability-operator-59bdc8b94-gkrbk\" (UID: \"527318fe-5c99-481d-910e-0e0973f7748b\") " pod="openshift-operators/observability-operator-59bdc8b94-gkrbk"
Feb 17 16:14:14 crc kubenswrapper[4672]: I0217 16:14:14.080565 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/043e8cc1-abfc-4d57-89b8-4d26da7b8a83-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp\" (UID: \"043e8cc1-abfc-4d57-89b8-4d26da7b8a83\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp"
Feb 17 16:14:14 crc kubenswrapper[4672]: I0217 16:14:14.081193 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/de79ed15-243f-4c2a-a09f-b94c69734b33-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56\" (UID: \"de79ed15-243f-4c2a-a09f-b94c69734b33\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56"
Feb 17 16:14:14 crc kubenswrapper[4672]: I0217 16:14:14.082459 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/de79ed15-243f-4c2a-a09f-b94c69734b33-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56\" (UID: \"de79ed15-243f-4c2a-a09f-b94c69734b33\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56"
Feb 17 16:14:14 crc kubenswrapper[4672]: I0217 16:14:14.087257 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/043e8cc1-abfc-4d57-89b8-4d26da7b8a83-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp\" (UID: \"043e8cc1-abfc-4d57-89b8-4d26da7b8a83\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp"
Feb 17 16:14:14 crc kubenswrapper[4672]: I0217 16:14:14.125789 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp"
Feb 17 16:14:14 crc kubenswrapper[4672]: I0217 16:14:14.138711 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56"
Feb 17 16:14:14 crc kubenswrapper[4672]: E0217 16:14:14.145663 4672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp_openshift-operators_043e8cc1-abfc-4d57-89b8-4d26da7b8a83_0(d8f8987172b5a7b8a278a5d825f994f287acc7145c88351b45709d38f9ebd814): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 17 16:14:14 crc kubenswrapper[4672]: E0217 16:14:14.145744 4672 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp_openshift-operators_043e8cc1-abfc-4d57-89b8-4d26da7b8a83_0(d8f8987172b5a7b8a278a5d825f994f287acc7145c88351b45709d38f9ebd814): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp"
Feb 17 16:14:14 crc kubenswrapper[4672]: E0217 16:14:14.145780 4672 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp_openshift-operators_043e8cc1-abfc-4d57-89b8-4d26da7b8a83_0(d8f8987172b5a7b8a278a5d825f994f287acc7145c88351b45709d38f9ebd814): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp"
Feb 17 16:14:14 crc kubenswrapper[4672]: E0217 16:14:14.145858 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp_openshift-operators(043e8cc1-abfc-4d57-89b8-4d26da7b8a83)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp_openshift-operators(043e8cc1-abfc-4d57-89b8-4d26da7b8a83)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp_openshift-operators_043e8cc1-abfc-4d57-89b8-4d26da7b8a83_0(d8f8987172b5a7b8a278a5d825f994f287acc7145c88351b45709d38f9ebd814): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp" podUID="043e8cc1-abfc-4d57-89b8-4d26da7b8a83"
Feb 17 16:14:14 crc kubenswrapper[4672]: E0217 16:14:14.166823 4672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56_openshift-operators_de79ed15-243f-4c2a-a09f-b94c69734b33_0(13d867ee1366fecb9cf15317310984520e4d05bdf8212da0e4f0c6f8a4a69afe): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 17 16:14:14 crc kubenswrapper[4672]: E0217 16:14:14.166902 4672 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56_openshift-operators_de79ed15-243f-4c2a-a09f-b94c69734b33_0(13d867ee1366fecb9cf15317310984520e4d05bdf8212da0e4f0c6f8a4a69afe): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56"
Feb 17 16:14:14 crc kubenswrapper[4672]: E0217 16:14:14.166934 4672 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56_openshift-operators_de79ed15-243f-4c2a-a09f-b94c69734b33_0(13d867ee1366fecb9cf15317310984520e4d05bdf8212da0e4f0c6f8a4a69afe): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56"
Feb 17 16:14:14 crc kubenswrapper[4672]: E0217 16:14:14.166999 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56_openshift-operators(de79ed15-243f-4c2a-a09f-b94c69734b33)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56_openshift-operators(de79ed15-243f-4c2a-a09f-b94c69734b33)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56_openshift-operators_de79ed15-243f-4c2a-a09f-b94c69734b33_0(13d867ee1366fecb9cf15317310984520e4d05bdf8212da0e4f0c6f8a4a69afe): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56" podUID="de79ed15-243f-4c2a-a09f-b94c69734b33"
Feb 17 16:14:14 crc kubenswrapper[4672]: I0217 16:14:14.177840 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/bf60e2ef-36ff-47b1-94c3-58db8c9a4e40-openshift-service-ca\") pod \"perses-operator-5bf474d74f-f86bw\" (UID: \"bf60e2ef-36ff-47b1-94c3-58db8c9a4e40\") " pod="openshift-operators/perses-operator-5bf474d74f-f86bw"
Feb 17 16:14:14 crc kubenswrapper[4672]: I0217 16:14:14.177935 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/527318fe-5c99-481d-910e-0e0973f7748b-observability-operator-tls\") pod \"observability-operator-59bdc8b94-gkrbk\" (UID: \"527318fe-5c99-481d-910e-0e0973f7748b\") " pod="openshift-operators/observability-operator-59bdc8b94-gkrbk"
Feb 17 16:14:14 crc kubenswrapper[4672]: I0217 16:14:14.177973 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp8v7\" (UniqueName: \"kubernetes.io/projected/bf60e2ef-36ff-47b1-94c3-58db8c9a4e40-kube-api-access-fp8v7\") pod \"perses-operator-5bf474d74f-f86bw\" (UID: \"bf60e2ef-36ff-47b1-94c3-58db8c9a4e40\") " pod="openshift-operators/perses-operator-5bf474d74f-f86bw"
Feb 17 16:14:14 crc kubenswrapper[4672]: I0217 16:14:14.178003 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fj8cr\" (UniqueName: \"kubernetes.io/projected/527318fe-5c99-481d-910e-0e0973f7748b-kube-api-access-fj8cr\") pod \"observability-operator-59bdc8b94-gkrbk\" (UID: \"527318fe-5c99-481d-910e-0e0973f7748b\") " pod="openshift-operators/observability-operator-59bdc8b94-gkrbk"
Feb 17 16:14:14 crc kubenswrapper[4672]: I0217 16:14:14.181652 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/527318fe-5c99-481d-910e-0e0973f7748b-observability-operator-tls\") pod \"observability-operator-59bdc8b94-gkrbk\" (UID: \"527318fe-5c99-481d-910e-0e0973f7748b\") " pod="openshift-operators/observability-operator-59bdc8b94-gkrbk"
Feb 17 16:14:14 crc kubenswrapper[4672]: I0217 16:14:14.209380 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fj8cr\" (UniqueName: \"kubernetes.io/projected/527318fe-5c99-481d-910e-0e0973f7748b-kube-api-access-fj8cr\") pod \"observability-operator-59bdc8b94-gkrbk\" (UID: \"527318fe-5c99-481d-910e-0e0973f7748b\") " pod="openshift-operators/observability-operator-59bdc8b94-gkrbk"
Feb 17 16:14:14 crc kubenswrapper[4672]: I0217 16:14:14.223339 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-gkrbk"
Feb 17 16:14:14 crc kubenswrapper[4672]: E0217 16:14:14.257950 4672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-gkrbk_openshift-operators_527318fe-5c99-481d-910e-0e0973f7748b_0(a06aec34ac494e08512ff582328f88a0b626c58634f4214587929e6baa36fd43): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 17 16:14:14 crc kubenswrapper[4672]: E0217 16:14:14.258007 4672 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-gkrbk_openshift-operators_527318fe-5c99-481d-910e-0e0973f7748b_0(a06aec34ac494e08512ff582328f88a0b626c58634f4214587929e6baa36fd43): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-gkrbk"
Feb 17 16:14:14 crc kubenswrapper[4672]: E0217 16:14:14.258030 4672 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-gkrbk_openshift-operators_527318fe-5c99-481d-910e-0e0973f7748b_0(a06aec34ac494e08512ff582328f88a0b626c58634f4214587929e6baa36fd43): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-gkrbk"
Feb 17 16:14:14 crc kubenswrapper[4672]: E0217 16:14:14.258077 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-gkrbk_openshift-operators(527318fe-5c99-481d-910e-0e0973f7748b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-gkrbk_openshift-operators(527318fe-5c99-481d-910e-0e0973f7748b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-gkrbk_openshift-operators_527318fe-5c99-481d-910e-0e0973f7748b_0(a06aec34ac494e08512ff582328f88a0b626c58634f4214587929e6baa36fd43): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-gkrbk" podUID="527318fe-5c99-481d-910e-0e0973f7748b"
Feb 17 16:14:14 crc kubenswrapper[4672]: I0217 16:14:14.279017 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp8v7\" (UniqueName: \"kubernetes.io/projected/bf60e2ef-36ff-47b1-94c3-58db8c9a4e40-kube-api-access-fp8v7\") pod \"perses-operator-5bf474d74f-f86bw\" (UID: \"bf60e2ef-36ff-47b1-94c3-58db8c9a4e40\") " pod="openshift-operators/perses-operator-5bf474d74f-f86bw"
Feb 17 16:14:14 crc kubenswrapper[4672]: I0217 16:14:14.279119 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/bf60e2ef-36ff-47b1-94c3-58db8c9a4e40-openshift-service-ca\") pod \"perses-operator-5bf474d74f-f86bw\" (UID: \"bf60e2ef-36ff-47b1-94c3-58db8c9a4e40\") " pod="openshift-operators/perses-operator-5bf474d74f-f86bw"
Feb 17 16:14:14 crc kubenswrapper[4672]: I0217 16:14:14.279964 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/bf60e2ef-36ff-47b1-94c3-58db8c9a4e40-openshift-service-ca\") pod \"perses-operator-5bf474d74f-f86bw\" (UID: \"bf60e2ef-36ff-47b1-94c3-58db8c9a4e40\") " pod="openshift-operators/perses-operator-5bf474d74f-f86bw"
Feb 17 16:14:14 crc kubenswrapper[4672]: I0217 16:14:14.293623 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp8v7\" (UniqueName: \"kubernetes.io/projected/bf60e2ef-36ff-47b1-94c3-58db8c9a4e40-kube-api-access-fp8v7\") pod \"perses-operator-5bf474d74f-f86bw\" (UID: \"bf60e2ef-36ff-47b1-94c3-58db8c9a4e40\") " pod="openshift-operators/perses-operator-5bf474d74f-f86bw"
Feb 17 16:14:14 crc kubenswrapper[4672]: I0217 16:14:14.418946 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-f86bw"
Feb 17 16:14:14 crc kubenswrapper[4672]: E0217 16:14:14.438907 4672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-f86bw_openshift-operators_bf60e2ef-36ff-47b1-94c3-58db8c9a4e40_0(6f7071537ac301426b4b3bb35a7f44bd97aebebefb9b61e0931e4819733cccb1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 17 16:14:14 crc kubenswrapper[4672]: E0217 16:14:14.438977 4672 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-f86bw_openshift-operators_bf60e2ef-36ff-47b1-94c3-58db8c9a4e40_0(6f7071537ac301426b4b3bb35a7f44bd97aebebefb9b61e0931e4819733cccb1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-f86bw"
Feb 17 16:14:14 crc kubenswrapper[4672]: E0217 16:14:14.439001 4672 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-f86bw_openshift-operators_bf60e2ef-36ff-47b1-94c3-58db8c9a4e40_0(6f7071537ac301426b4b3bb35a7f44bd97aebebefb9b61e0931e4819733cccb1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-f86bw"
Feb 17 16:14:14 crc kubenswrapper[4672]: E0217 16:14:14.439051 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-f86bw_openshift-operators(bf60e2ef-36ff-47b1-94c3-58db8c9a4e40)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-f86bw_openshift-operators(bf60e2ef-36ff-47b1-94c3-58db8c9a4e40)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-f86bw_openshift-operators_bf60e2ef-36ff-47b1-94c3-58db8c9a4e40_0(6f7071537ac301426b4b3bb35a7f44bd97aebebefb9b61e0931e4819733cccb1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-f86bw" podUID="bf60e2ef-36ff-47b1-94c3-58db8c9a4e40"
Feb 17 16:14:15 crc kubenswrapper[4672]: I0217 16:14:15.672685 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" event={"ID":"be5d000b-3c7e-4f76-b6da-448f7985f0cc","Type":"ContainerStarted","Data":"bd3a5f1e3b8c03dbbb89d8e980991d58352a8e4fe8dae2f722edbe55780d9c19"}
Feb 17 16:14:15 crc kubenswrapper[4672]: I0217 16:14:15.673076 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz"
Feb 17 16:14:15 crc kubenswrapper[4672]: I0217 16:14:15.673093 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz"
Feb 17 16:14:15 crc kubenswrapper[4672]: I0217 16:14:15.673102 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz"
Feb 17 16:14:15 crc kubenswrapper[4672]: I0217 16:14:15.697304 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz"
Feb 17 16:14:15 crc kubenswrapper[4672]: I0217 16:14:15.697729 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz"
Feb 17 16:14:15 crc kubenswrapper[4672]: I0217 16:14:15.705287 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" podStartSLOduration=8.705270291 podStartE2EDuration="8.705270291s" podCreationTimestamp="2026-02-17 16:14:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:14:15.702023775 +0000 UTC m=+664.456112527" watchObservedRunningTime="2026-02-17 16:14:15.705270291 +0000 UTC m=+664.459359023"
Feb 17 16:14:15 crc kubenswrapper[4672]: I0217 16:14:15.951223 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-fkmc6"]
Feb 17 16:14:15 crc kubenswrapper[4672]: I0217 16:14:15.951332 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fkmc6"
Feb 17 16:14:15 crc kubenswrapper[4672]: I0217 16:14:15.951746 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fkmc6"
Feb 17 16:14:15 crc kubenswrapper[4672]: I0217 16:14:15.964853 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56"]
Feb 17 16:14:15 crc kubenswrapper[4672]: I0217 16:14:15.964968 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56"
Feb 17 16:14:15 crc kubenswrapper[4672]: I0217 16:14:15.965404 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56"
Feb 17 16:14:15 crc kubenswrapper[4672]: E0217 16:14:15.971281 4672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fkmc6_openshift-operators_78d09a7b-94c0-4d04-a640-a67a065a6aff_0(3a1f1084f6209608f40a0e3c2b663255c499f84264f7f7e08abb7784a2ce34ac): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 17 16:14:15 crc kubenswrapper[4672]: E0217 16:14:15.971363 4672 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fkmc6_openshift-operators_78d09a7b-94c0-4d04-a640-a67a065a6aff_0(3a1f1084f6209608f40a0e3c2b663255c499f84264f7f7e08abb7784a2ce34ac): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fkmc6"
Feb 17 16:14:15 crc kubenswrapper[4672]: E0217 16:14:15.971413 4672 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fkmc6_openshift-operators_78d09a7b-94c0-4d04-a640-a67a065a6aff_0(3a1f1084f6209608f40a0e3c2b663255c499f84264f7f7e08abb7784a2ce34ac): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fkmc6"
Feb 17 16:14:15 crc kubenswrapper[4672]: E0217 16:14:15.971459 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-fkmc6_openshift-operators(78d09a7b-94c0-4d04-a640-a67a065a6aff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-fkmc6_openshift-operators(78d09a7b-94c0-4d04-a640-a67a065a6aff)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fkmc6_openshift-operators_78d09a7b-94c0-4d04-a640-a67a065a6aff_0(3a1f1084f6209608f40a0e3c2b663255c499f84264f7f7e08abb7784a2ce34ac): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fkmc6" podUID="78d09a7b-94c0-4d04-a640-a67a065a6aff"
Feb 17 16:14:15 crc kubenswrapper[4672]: I0217 16:14:15.991306 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-gkrbk"]
Feb 17 16:14:15 crc kubenswrapper[4672]: I0217 16:14:15.991471 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-gkrbk"
Feb 17 16:14:15 crc kubenswrapper[4672]: I0217 16:14:15.991884 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-gkrbk"
Feb 17 16:14:15 crc kubenswrapper[4672]: E0217 16:14:15.993750 4672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56_openshift-operators_de79ed15-243f-4c2a-a09f-b94c69734b33_0(cc34d561cdc0c65b8f5b111a9d2a639c814aefd13b9ac6824b160783b1f9450f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 17 16:14:15 crc kubenswrapper[4672]: E0217 16:14:15.993809 4672 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56_openshift-operators_de79ed15-243f-4c2a-a09f-b94c69734b33_0(cc34d561cdc0c65b8f5b111a9d2a639c814aefd13b9ac6824b160783b1f9450f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56"
Feb 17 16:14:15 crc kubenswrapper[4672]: E0217 16:14:15.993828 4672 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56_openshift-operators_de79ed15-243f-4c2a-a09f-b94c69734b33_0(cc34d561cdc0c65b8f5b111a9d2a639c814aefd13b9ac6824b160783b1f9450f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56"
Feb 17 16:14:15 crc kubenswrapper[4672]: E0217 16:14:15.993863 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56_openshift-operators(de79ed15-243f-4c2a-a09f-b94c69734b33)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56_openshift-operators(de79ed15-243f-4c2a-a09f-b94c69734b33)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56_openshift-operators_de79ed15-243f-4c2a-a09f-b94c69734b33_0(cc34d561cdc0c65b8f5b111a9d2a639c814aefd13b9ac6824b160783b1f9450f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56" podUID="de79ed15-243f-4c2a-a09f-b94c69734b33"
Feb 17 16:14:15 crc kubenswrapper[4672]: I0217 16:14:15.995106 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-f86bw"]
Feb 17 16:14:15 crc kubenswrapper[4672]: I0217 16:14:15.995189 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-f86bw"
Feb 17 16:14:15 crc kubenswrapper[4672]: I0217 16:14:15.995456 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-f86bw"
Feb 17 16:14:16 crc kubenswrapper[4672]: I0217 16:14:16.005467 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp"]
Feb 17 16:14:16 crc kubenswrapper[4672]: I0217 16:14:16.005629 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp"
Feb 17 16:14:16 crc kubenswrapper[4672]: I0217 16:14:16.006083 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp"
Feb 17 16:14:16 crc kubenswrapper[4672]: E0217 16:14:16.029718 4672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-gkrbk_openshift-operators_527318fe-5c99-481d-910e-0e0973f7748b_0(a943ebbb2d5838d85b1fa298ec4d6e048d26cd3e085db31352a97b6d2624a428): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 17 16:14:16 crc kubenswrapper[4672]: E0217 16:14:16.029789 4672 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-gkrbk_openshift-operators_527318fe-5c99-481d-910e-0e0973f7748b_0(a943ebbb2d5838d85b1fa298ec4d6e048d26cd3e085db31352a97b6d2624a428): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-gkrbk"
Feb 17 16:14:16 crc kubenswrapper[4672]: E0217 16:14:16.029812 4672 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-gkrbk_openshift-operators_527318fe-5c99-481d-910e-0e0973f7748b_0(a943ebbb2d5838d85b1fa298ec4d6e048d26cd3e085db31352a97b6d2624a428): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-gkrbk"
Feb 17 16:14:16 crc kubenswrapper[4672]: E0217 16:14:16.029858 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-gkrbk_openshift-operators(527318fe-5c99-481d-910e-0e0973f7748b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-gkrbk_openshift-operators(527318fe-5c99-481d-910e-0e0973f7748b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-gkrbk_openshift-operators_527318fe-5c99-481d-910e-0e0973f7748b_0(a943ebbb2d5838d85b1fa298ec4d6e048d26cd3e085db31352a97b6d2624a428): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-gkrbk" podUID="527318fe-5c99-481d-910e-0e0973f7748b"
Feb 17 16:14:16 crc kubenswrapper[4672]: E0217 16:14:16.063739 4672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp_openshift-operators_043e8cc1-abfc-4d57-89b8-4d26da7b8a83_0(25545f5419546713f6dbfe8af97a26fee12ef486d82aa890d3b36bfa77298eb8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 17 16:14:16 crc kubenswrapper[4672]: E0217 16:14:16.063818 4672 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp_openshift-operators_043e8cc1-abfc-4d57-89b8-4d26da7b8a83_0(25545f5419546713f6dbfe8af97a26fee12ef486d82aa890d3b36bfa77298eb8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp"
Feb 17 16:14:16 crc kubenswrapper[4672]: E0217 16:14:16.063854 4672 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp_openshift-operators_043e8cc1-abfc-4d57-89b8-4d26da7b8a83_0(25545f5419546713f6dbfe8af97a26fee12ef486d82aa890d3b36bfa77298eb8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp"
Feb 17 16:14:16 crc kubenswrapper[4672]: E0217 16:14:16.063903 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp_openshift-operators(043e8cc1-abfc-4d57-89b8-4d26da7b8a83)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp_openshift-operators(043e8cc1-abfc-4d57-89b8-4d26da7b8a83)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp_openshift-operators_043e8cc1-abfc-4d57-89b8-4d26da7b8a83_0(25545f5419546713f6dbfe8af97a26fee12ef486d82aa890d3b36bfa77298eb8): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp" podUID="043e8cc1-abfc-4d57-89b8-4d26da7b8a83" Feb 17 16:14:16 crc kubenswrapper[4672]: E0217 16:14:16.076565 4672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-f86bw_openshift-operators_bf60e2ef-36ff-47b1-94c3-58db8c9a4e40_0(d8d0d95727c8aab6146d613e61f6295a02d24a2797047e617525fb16af612be7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:14:16 crc kubenswrapper[4672]: E0217 16:14:16.076628 4672 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-f86bw_openshift-operators_bf60e2ef-36ff-47b1-94c3-58db8c9a4e40_0(d8d0d95727c8aab6146d613e61f6295a02d24a2797047e617525fb16af612be7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-f86bw" Feb 17 16:14:16 crc kubenswrapper[4672]: E0217 16:14:16.076653 4672 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-f86bw_openshift-operators_bf60e2ef-36ff-47b1-94c3-58db8c9a4e40_0(d8d0d95727c8aab6146d613e61f6295a02d24a2797047e617525fb16af612be7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-f86bw" Feb 17 16:14:16 crc kubenswrapper[4672]: E0217 16:14:16.076695 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-f86bw_openshift-operators(bf60e2ef-36ff-47b1-94c3-58db8c9a4e40)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-f86bw_openshift-operators(bf60e2ef-36ff-47b1-94c3-58db8c9a4e40)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-f86bw_openshift-operators_bf60e2ef-36ff-47b1-94c3-58db8c9a4e40_0(d8d0d95727c8aab6146d613e61f6295a02d24a2797047e617525fb16af612be7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-f86bw" podUID="bf60e2ef-36ff-47b1-94c3-58db8c9a4e40" Feb 17 16:14:19 crc kubenswrapper[4672]: I0217 16:14:19.944304 4672 scope.go:117] "RemoveContainer" containerID="397bf27fea3d27b5db56ccb8cc9ebd9e8401dd883e3c22d9d2e8f76a4f63c577" Feb 17 16:14:19 crc kubenswrapper[4672]: E0217 16:14:19.944883 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-5jjr2_openshift-multus(edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe)\"" pod="openshift-multus/multus-5jjr2" podUID="edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe" Feb 17 16:14:26 crc kubenswrapper[4672]: I0217 16:14:26.944835 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56" Feb 17 16:14:26 crc kubenswrapper[4672]: I0217 16:14:26.945453 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56" Feb 17 16:14:26 crc kubenswrapper[4672]: E0217 16:14:26.982099 4672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56_openshift-operators_de79ed15-243f-4c2a-a09f-b94c69734b33_0(67f04ae2ccaad3c097e51f071cac008e0331f2c102a0d0cada522f48baa12de8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:14:26 crc kubenswrapper[4672]: E0217 16:14:26.982472 4672 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56_openshift-operators_de79ed15-243f-4c2a-a09f-b94c69734b33_0(67f04ae2ccaad3c097e51f071cac008e0331f2c102a0d0cada522f48baa12de8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56" Feb 17 16:14:26 crc kubenswrapper[4672]: E0217 16:14:26.982534 4672 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56_openshift-operators_de79ed15-243f-4c2a-a09f-b94c69734b33_0(67f04ae2ccaad3c097e51f071cac008e0331f2c102a0d0cada522f48baa12de8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56" Feb 17 16:14:26 crc kubenswrapper[4672]: E0217 16:14:26.982606 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56_openshift-operators(de79ed15-243f-4c2a-a09f-b94c69734b33)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56_openshift-operators(de79ed15-243f-4c2a-a09f-b94c69734b33)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56_openshift-operators_de79ed15-243f-4c2a-a09f-b94c69734b33_0(67f04ae2ccaad3c097e51f071cac008e0331f2c102a0d0cada522f48baa12de8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56" podUID="de79ed15-243f-4c2a-a09f-b94c69734b33" Feb 17 16:14:27 crc kubenswrapper[4672]: I0217 16:14:27.944945 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp" Feb 17 16:14:27 crc kubenswrapper[4672]: I0217 16:14:27.945411 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp" Feb 17 16:14:27 crc kubenswrapper[4672]: E0217 16:14:27.984610 4672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp_openshift-operators_043e8cc1-abfc-4d57-89b8-4d26da7b8a83_0(fbb53d80d05e6b12dea2d11f80cbd7e26c9e258066822306cd5bd5d27f4ef2eb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 17 16:14:27 crc kubenswrapper[4672]: E0217 16:14:27.984709 4672 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp_openshift-operators_043e8cc1-abfc-4d57-89b8-4d26da7b8a83_0(fbb53d80d05e6b12dea2d11f80cbd7e26c9e258066822306cd5bd5d27f4ef2eb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp" Feb 17 16:14:27 crc kubenswrapper[4672]: E0217 16:14:27.984744 4672 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp_openshift-operators_043e8cc1-abfc-4d57-89b8-4d26da7b8a83_0(fbb53d80d05e6b12dea2d11f80cbd7e26c9e258066822306cd5bd5d27f4ef2eb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp" Feb 17 16:14:27 crc kubenswrapper[4672]: E0217 16:14:27.984817 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp_openshift-operators(043e8cc1-abfc-4d57-89b8-4d26da7b8a83)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp_openshift-operators(043e8cc1-abfc-4d57-89b8-4d26da7b8a83)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp_openshift-operators_043e8cc1-abfc-4d57-89b8-4d26da7b8a83_0(fbb53d80d05e6b12dea2d11f80cbd7e26c9e258066822306cd5bd5d27f4ef2eb): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp" podUID="043e8cc1-abfc-4d57-89b8-4d26da7b8a83" Feb 17 16:14:28 crc kubenswrapper[4672]: I0217 16:14:28.944974 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fkmc6" Feb 17 16:14:28 crc kubenswrapper[4672]: I0217 16:14:28.945440 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fkmc6" Feb 17 16:14:28 crc kubenswrapper[4672]: E0217 16:14:28.964768 4672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fkmc6_openshift-operators_78d09a7b-94c0-4d04-a640-a67a065a6aff_0(12042b8d77f8f612c1a4f5525f3da1319dc6a0ad2281a41c9577a12dc3a310ad): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:14:28 crc kubenswrapper[4672]: E0217 16:14:28.964840 4672 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fkmc6_openshift-operators_78d09a7b-94c0-4d04-a640-a67a065a6aff_0(12042b8d77f8f612c1a4f5525f3da1319dc6a0ad2281a41c9577a12dc3a310ad): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fkmc6" Feb 17 16:14:28 crc kubenswrapper[4672]: E0217 16:14:28.964861 4672 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fkmc6_openshift-operators_78d09a7b-94c0-4d04-a640-a67a065a6aff_0(12042b8d77f8f612c1a4f5525f3da1319dc6a0ad2281a41c9577a12dc3a310ad): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fkmc6" Feb 17 16:14:28 crc kubenswrapper[4672]: E0217 16:14:28.964908 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-fkmc6_openshift-operators(78d09a7b-94c0-4d04-a640-a67a065a6aff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-fkmc6_openshift-operators(78d09a7b-94c0-4d04-a640-a67a065a6aff)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fkmc6_openshift-operators_78d09a7b-94c0-4d04-a640-a67a065a6aff_0(12042b8d77f8f612c1a4f5525f3da1319dc6a0ad2281a41c9577a12dc3a310ad): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fkmc6" podUID="78d09a7b-94c0-4d04-a640-a67a065a6aff" Feb 17 16:14:30 crc kubenswrapper[4672]: I0217 16:14:30.944174 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-gkrbk" Feb 17 16:14:30 crc kubenswrapper[4672]: I0217 16:14:30.944391 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-f86bw" Feb 17 16:14:30 crc kubenswrapper[4672]: I0217 16:14:30.944873 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-gkrbk" Feb 17 16:14:30 crc kubenswrapper[4672]: I0217 16:14:30.944918 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-f86bw" Feb 17 16:14:30 crc kubenswrapper[4672]: E0217 16:14:30.978929 4672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-f86bw_openshift-operators_bf60e2ef-36ff-47b1-94c3-58db8c9a4e40_0(047698b46d293ffe6bf7d6a76bdc73cf38cf5800f2382af2732cf067045adb73): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:14:30 crc kubenswrapper[4672]: E0217 16:14:30.979001 4672 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-f86bw_openshift-operators_bf60e2ef-36ff-47b1-94c3-58db8c9a4e40_0(047698b46d293ffe6bf7d6a76bdc73cf38cf5800f2382af2732cf067045adb73): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-f86bw" Feb 17 16:14:30 crc kubenswrapper[4672]: E0217 16:14:30.979030 4672 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-f86bw_openshift-operators_bf60e2ef-36ff-47b1-94c3-58db8c9a4e40_0(047698b46d293ffe6bf7d6a76bdc73cf38cf5800f2382af2732cf067045adb73): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-f86bw" Feb 17 16:14:30 crc kubenswrapper[4672]: E0217 16:14:30.979084 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-f86bw_openshift-operators(bf60e2ef-36ff-47b1-94c3-58db8c9a4e40)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-f86bw_openshift-operators(bf60e2ef-36ff-47b1-94c3-58db8c9a4e40)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-f86bw_openshift-operators_bf60e2ef-36ff-47b1-94c3-58db8c9a4e40_0(047698b46d293ffe6bf7d6a76bdc73cf38cf5800f2382af2732cf067045adb73): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-f86bw" podUID="bf60e2ef-36ff-47b1-94c3-58db8c9a4e40" Feb 17 16:14:30 crc kubenswrapper[4672]: E0217 16:14:30.988401 4672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-gkrbk_openshift-operators_527318fe-5c99-481d-910e-0e0973f7748b_0(01ea982d84751e3eb35b47ea75f28678dbe66756efa67b9cde2334297e243261): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:14:30 crc kubenswrapper[4672]: E0217 16:14:30.988549 4672 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-gkrbk_openshift-operators_527318fe-5c99-481d-910e-0e0973f7748b_0(01ea982d84751e3eb35b47ea75f28678dbe66756efa67b9cde2334297e243261): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-gkrbk" Feb 17 16:14:30 crc kubenswrapper[4672]: E0217 16:14:30.988602 4672 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-gkrbk_openshift-operators_527318fe-5c99-481d-910e-0e0973f7748b_0(01ea982d84751e3eb35b47ea75f28678dbe66756efa67b9cde2334297e243261): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-gkrbk" Feb 17 16:14:30 crc kubenswrapper[4672]: E0217 16:14:30.988682 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-gkrbk_openshift-operators(527318fe-5c99-481d-910e-0e0973f7748b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-gkrbk_openshift-operators(527318fe-5c99-481d-910e-0e0973f7748b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-gkrbk_openshift-operators_527318fe-5c99-481d-910e-0e0973f7748b_0(01ea982d84751e3eb35b47ea75f28678dbe66756efa67b9cde2334297e243261): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-gkrbk" podUID="527318fe-5c99-481d-910e-0e0973f7748b" Feb 17 16:14:32 crc kubenswrapper[4672]: I0217 16:14:32.945669 4672 scope.go:117] "RemoveContainer" containerID="397bf27fea3d27b5db56ccb8cc9ebd9e8401dd883e3c22d9d2e8f76a4f63c577" Feb 17 16:14:33 crc kubenswrapper[4672]: I0217 16:14:33.788288 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5jjr2_edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe/kube-multus/2.log" Feb 17 16:14:33 crc kubenswrapper[4672]: I0217 16:14:33.788598 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5jjr2" event={"ID":"edaf690d-34d9-4b32-8a3e-8f5cd3df2bfe","Type":"ContainerStarted","Data":"66826c07f38d28f931374d3a1cfc117740d6bcef62474e4d8182dfa2404e3f4e"} Feb 17 16:14:38 crc kubenswrapper[4672]: I0217 16:14:38.045264 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h5dlz" Feb 17 16:14:38 crc kubenswrapper[4672]: I0217 16:14:38.944185 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56" Feb 17 16:14:38 crc kubenswrapper[4672]: I0217 16:14:38.944828 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56" Feb 17 16:14:39 crc kubenswrapper[4672]: I0217 16:14:39.379255 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56"] Feb 17 16:14:39 crc kubenswrapper[4672]: I0217 16:14:39.825853 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56" event={"ID":"de79ed15-243f-4c2a-a09f-b94c69734b33","Type":"ContainerStarted","Data":"af269fb26de3c4401771f3a0c1946141e7361e21cb0db036c66d3a24b99197bc"} Feb 17 16:14:40 crc kubenswrapper[4672]: I0217 16:14:40.944767 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fkmc6" Feb 17 16:14:40 crc kubenswrapper[4672]: I0217 16:14:40.945727 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fkmc6" Feb 17 16:14:41 crc kubenswrapper[4672]: I0217 16:14:41.249246 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-fkmc6"] Feb 17 16:14:41 crc kubenswrapper[4672]: W0217 16:14:41.274255 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78d09a7b_94c0_4d04_a640_a67a065a6aff.slice/crio-d260b466b7193645338da202c6593d7b43e872711cb381dd6b2892b4d918ec0a WatchSource:0}: Error finding container d260b466b7193645338da202c6593d7b43e872711cb381dd6b2892b4d918ec0a: Status 404 returned error can't find the container with id d260b466b7193645338da202c6593d7b43e872711cb381dd6b2892b4d918ec0a Feb 17 16:14:41 crc kubenswrapper[4672]: I0217 16:14:41.846224 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fkmc6" 
event={"ID":"78d09a7b-94c0-4d04-a640-a67a065a6aff","Type":"ContainerStarted","Data":"d260b466b7193645338da202c6593d7b43e872711cb381dd6b2892b4d918ec0a"} Feb 17 16:14:42 crc kubenswrapper[4672]: I0217 16:14:42.944504 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp" Feb 17 16:14:42 crc kubenswrapper[4672]: I0217 16:14:42.945194 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp" Feb 17 16:14:43 crc kubenswrapper[4672]: I0217 16:14:43.734094 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp"] Feb 17 16:14:43 crc kubenswrapper[4672]: W0217 16:14:43.744836 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod043e8cc1_abfc_4d57_89b8_4d26da7b8a83.slice/crio-e774d19e0fee848f6d084ed4b9682a18b057f90da9a935d00f02f1c9c6e9e17f WatchSource:0}: Error finding container e774d19e0fee848f6d084ed4b9682a18b057f90da9a935d00f02f1c9c6e9e17f: Status 404 returned error can't find the container with id e774d19e0fee848f6d084ed4b9682a18b057f90da9a935d00f02f1c9c6e9e17f Feb 17 16:14:43 crc kubenswrapper[4672]: I0217 16:14:43.865153 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56" event={"ID":"de79ed15-243f-4c2a-a09f-b94c69734b33","Type":"ContainerStarted","Data":"b9421f1700405ea4eb9c5d1751ce65186b81dfc9b18f77a7ed5e2cf3c001f7d9"} Feb 17 16:14:43 crc kubenswrapper[4672]: I0217 16:14:43.866776 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp" 
event={"ID":"043e8cc1-abfc-4d57-89b8-4d26da7b8a83","Type":"ContainerStarted","Data":"e774d19e0fee848f6d084ed4b9682a18b057f90da9a935d00f02f1c9c6e9e17f"} Feb 17 16:14:43 crc kubenswrapper[4672]: I0217 16:14:43.900184 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56" podStartSLOduration=26.962432551 podStartE2EDuration="30.900161541s" podCreationTimestamp="2026-02-17 16:14:13 +0000 UTC" firstStartedPulling="2026-02-17 16:14:39.39428046 +0000 UTC m=+688.148369192" lastFinishedPulling="2026-02-17 16:14:43.33200945 +0000 UTC m=+692.086098182" observedRunningTime="2026-02-17 16:14:43.894001617 +0000 UTC m=+692.648090359" watchObservedRunningTime="2026-02-17 16:14:43.900161541 +0000 UTC m=+692.654250293" Feb 17 16:14:43 crc kubenswrapper[4672]: I0217 16:14:43.944840 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-gkrbk" Feb 17 16:14:43 crc kubenswrapper[4672]: I0217 16:14:43.945389 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-gkrbk" Feb 17 16:14:44 crc kubenswrapper[4672]: I0217 16:14:44.157319 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-gkrbk"] Feb 17 16:14:44 crc kubenswrapper[4672]: I0217 16:14:44.874403 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp" event={"ID":"043e8cc1-abfc-4d57-89b8-4d26da7b8a83","Type":"ContainerStarted","Data":"0957b7f1ccf3cd396654deb42964606130db01440c9387afee937ceae19b716c"} Feb 17 16:14:44 crc kubenswrapper[4672]: I0217 16:14:44.876112 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-gkrbk" event={"ID":"527318fe-5c99-481d-910e-0e0973f7748b","Type":"ContainerStarted","Data":"1106049dfa1438e0d9511ed2eb20e5c119521f5085e2e0175e38713a30035781"} Feb 17 16:14:44 crc kubenswrapper[4672]: I0217 16:14:44.897469 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp" podStartSLOduration=31.89744361 podStartE2EDuration="31.89744361s" podCreationTimestamp="2026-02-17 16:14:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:14:44.89484077 +0000 UTC m=+693.648929502" watchObservedRunningTime="2026-02-17 16:14:44.89744361 +0000 UTC m=+693.651532342" Feb 17 16:14:44 crc kubenswrapper[4672]: I0217 16:14:44.944728 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-f86bw" Feb 17 16:14:44 crc kubenswrapper[4672]: I0217 16:14:44.945329 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-f86bw" Feb 17 16:14:45 crc kubenswrapper[4672]: I0217 16:14:45.146495 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-f86bw"] Feb 17 16:14:45 crc kubenswrapper[4672]: I0217 16:14:45.883721 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-f86bw" event={"ID":"bf60e2ef-36ff-47b1-94c3-58db8c9a4e40","Type":"ContainerStarted","Data":"d130ebf11dd6524f26d890a49bd6553c1c8c6f045b8272186f314564bb1d271a"} Feb 17 16:14:46 crc kubenswrapper[4672]: I0217 16:14:46.897884 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fkmc6" event={"ID":"78d09a7b-94c0-4d04-a640-a67a065a6aff","Type":"ContainerStarted","Data":"1b29ac9f04bd32da5db7c25cf086460013ab438650aae7226e54c9db4060d025"} Feb 17 16:14:46 crc kubenswrapper[4672]: I0217 16:14:46.926742 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fkmc6" podStartSLOduration=29.613286849 podStartE2EDuration="33.926719584s" podCreationTimestamp="2026-02-17 16:14:13 +0000 UTC" firstStartedPulling="2026-02-17 16:14:41.277139764 +0000 UTC m=+690.031228496" lastFinishedPulling="2026-02-17 16:14:45.590572499 +0000 UTC m=+694.344661231" observedRunningTime="2026-02-17 16:14:46.921597288 +0000 UTC m=+695.675686020" watchObservedRunningTime="2026-02-17 16:14:46.926719584 +0000 UTC m=+695.680808316" Feb 17 16:14:50 crc kubenswrapper[4672]: I0217 16:14:50.926015 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-f86bw" event={"ID":"bf60e2ef-36ff-47b1-94c3-58db8c9a4e40","Type":"ContainerStarted","Data":"a0ed3b6484de0f115e1649f1b3919e8da1e5c564a77e941bc25d078298e7e0c4"} Feb 17 16:14:50 crc kubenswrapper[4672]: I0217 16:14:50.926799 4672 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-f86bw" Feb 17 16:14:50 crc kubenswrapper[4672]: I0217 16:14:50.928653 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-gkrbk" event={"ID":"527318fe-5c99-481d-910e-0e0973f7748b","Type":"ContainerStarted","Data":"10dbd03c335cf5e1610a55dcba5bfa177dd93502b798831a94246d569d563eea"} Feb 17 16:14:50 crc kubenswrapper[4672]: I0217 16:14:50.929646 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-gkrbk" Feb 17 16:14:50 crc kubenswrapper[4672]: I0217 16:14:50.931110 4672 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-gkrbk container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.19:8081/healthz\": dial tcp 10.217.0.19:8081: connect: connection refused" start-of-body= Feb 17 16:14:50 crc kubenswrapper[4672]: I0217 16:14:50.931161 4672 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-gkrbk" podUID="527318fe-5c99-481d-910e-0e0973f7748b" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.19:8081/healthz\": dial tcp 10.217.0.19:8081: connect: connection refused" Feb 17 16:14:50 crc kubenswrapper[4672]: I0217 16:14:50.945092 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-f86bw" podStartSLOduration=31.45092296 podStartE2EDuration="36.945069041s" podCreationTimestamp="2026-02-17 16:14:14 +0000 UTC" firstStartedPulling="2026-02-17 16:14:45.168781746 +0000 UTC m=+693.922870478" lastFinishedPulling="2026-02-17 16:14:50.662927807 +0000 UTC m=+699.417016559" observedRunningTime="2026-02-17 16:14:50.942686088 +0000 UTC m=+699.696774850" watchObservedRunningTime="2026-02-17 16:14:50.945069041 +0000 UTC m=+699.699157803" Feb 17 
16:14:50 crc kubenswrapper[4672]: I0217 16:14:50.967153 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-gkrbk" podStartSLOduration=31.433709411 podStartE2EDuration="37.967130889s" podCreationTimestamp="2026-02-17 16:14:13 +0000 UTC" firstStartedPulling="2026-02-17 16:14:44.163894795 +0000 UTC m=+692.917983527" lastFinishedPulling="2026-02-17 16:14:50.697316273 +0000 UTC m=+699.451405005" observedRunningTime="2026-02-17 16:14:50.962764643 +0000 UTC m=+699.716853375" watchObservedRunningTime="2026-02-17 16:14:50.967130889 +0000 UTC m=+699.721219661" Feb 17 16:14:51 crc kubenswrapper[4672]: I0217 16:14:51.960030 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-gkrbk" Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.116679 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-db29w"] Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.120418 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-db29w" Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.125244 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.125320 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.125367 4672 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-4kcvj" Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.133418 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-r7xjf"] Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.134492 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-r7xjf" Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.137006 4672 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-hmjhg" Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.152260 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-db29w"] Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.159836 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-r7xjf"] Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.177593 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-8grm4"] Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.178584 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-8grm4" Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.180702 4672 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-fk24f" Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.182355 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2qp8\" (UniqueName: \"kubernetes.io/projected/99c98563-db8b-4849-b06e-6d7bf6a08b69-kube-api-access-d2qp8\") pod \"cert-manager-858654f9db-r7xjf\" (UID: \"99c98563-db8b-4849-b06e-6d7bf6a08b69\") " pod="cert-manager/cert-manager-858654f9db-r7xjf" Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.182431 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hjnl\" (UniqueName: \"kubernetes.io/projected/75f66eec-6844-429c-8168-33db45850fd9-kube-api-access-4hjnl\") pod \"cert-manager-cainjector-cf98fcc89-db29w\" (UID: \"75f66eec-6844-429c-8168-33db45850fd9\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-db29w" Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.183710 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-8grm4"] Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.210982 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522415-k4s4t"] Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.212005 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-k4s4t" Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.213788 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.215699 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522415-k4s4t"] Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.225061 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.283160 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tznq\" (UniqueName: \"kubernetes.io/projected/e34e9fd6-0f58-4f41-a4ae-39f88ff43fac-kube-api-access-7tznq\") pod \"cert-manager-webhook-687f57d79b-8grm4\" (UID: \"e34e9fd6-0f58-4f41-a4ae-39f88ff43fac\") " pod="cert-manager/cert-manager-webhook-687f57d79b-8grm4" Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.283218 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx8kx\" (UniqueName: \"kubernetes.io/projected/64bc792d-4f6e-45f7-948d-5b879a249534-kube-api-access-gx8kx\") pod \"collect-profiles-29522415-k4s4t\" (UID: \"64bc792d-4f6e-45f7-948d-5b879a249534\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-k4s4t" Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.283249 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2qp8\" (UniqueName: \"kubernetes.io/projected/99c98563-db8b-4849-b06e-6d7bf6a08b69-kube-api-access-d2qp8\") pod \"cert-manager-858654f9db-r7xjf\" (UID: \"99c98563-db8b-4849-b06e-6d7bf6a08b69\") " 
pod="cert-manager/cert-manager-858654f9db-r7xjf" Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.283286 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/64bc792d-4f6e-45f7-948d-5b879a249534-secret-volume\") pod \"collect-profiles-29522415-k4s4t\" (UID: \"64bc792d-4f6e-45f7-948d-5b879a249534\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-k4s4t" Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.283318 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hjnl\" (UniqueName: \"kubernetes.io/projected/75f66eec-6844-429c-8168-33db45850fd9-kube-api-access-4hjnl\") pod \"cert-manager-cainjector-cf98fcc89-db29w\" (UID: \"75f66eec-6844-429c-8168-33db45850fd9\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-db29w" Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.283496 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64bc792d-4f6e-45f7-948d-5b879a249534-config-volume\") pod \"collect-profiles-29522415-k4s4t\" (UID: \"64bc792d-4f6e-45f7-948d-5b879a249534\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-k4s4t" Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.304163 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2qp8\" (UniqueName: \"kubernetes.io/projected/99c98563-db8b-4849-b06e-6d7bf6a08b69-kube-api-access-d2qp8\") pod \"cert-manager-858654f9db-r7xjf\" (UID: \"99c98563-db8b-4849-b06e-6d7bf6a08b69\") " pod="cert-manager/cert-manager-858654f9db-r7xjf" Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.305083 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hjnl\" (UniqueName: 
\"kubernetes.io/projected/75f66eec-6844-429c-8168-33db45850fd9-kube-api-access-4hjnl\") pod \"cert-manager-cainjector-cf98fcc89-db29w\" (UID: \"75f66eec-6844-429c-8168-33db45850fd9\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-db29w" Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.385017 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/64bc792d-4f6e-45f7-948d-5b879a249534-secret-volume\") pod \"collect-profiles-29522415-k4s4t\" (UID: \"64bc792d-4f6e-45f7-948d-5b879a249534\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-k4s4t" Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.385134 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64bc792d-4f6e-45f7-948d-5b879a249534-config-volume\") pod \"collect-profiles-29522415-k4s4t\" (UID: \"64bc792d-4f6e-45f7-948d-5b879a249534\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-k4s4t" Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.385204 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tznq\" (UniqueName: \"kubernetes.io/projected/e34e9fd6-0f58-4f41-a4ae-39f88ff43fac-kube-api-access-7tznq\") pod \"cert-manager-webhook-687f57d79b-8grm4\" (UID: \"e34e9fd6-0f58-4f41-a4ae-39f88ff43fac\") " pod="cert-manager/cert-manager-webhook-687f57d79b-8grm4" Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.385227 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx8kx\" (UniqueName: \"kubernetes.io/projected/64bc792d-4f6e-45f7-948d-5b879a249534-kube-api-access-gx8kx\") pod \"collect-profiles-29522415-k4s4t\" (UID: \"64bc792d-4f6e-45f7-948d-5b879a249534\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-k4s4t" Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 
16:15:00.386907 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64bc792d-4f6e-45f7-948d-5b879a249534-config-volume\") pod \"collect-profiles-29522415-k4s4t\" (UID: \"64bc792d-4f6e-45f7-948d-5b879a249534\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-k4s4t" Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.388757 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/64bc792d-4f6e-45f7-948d-5b879a249534-secret-volume\") pod \"collect-profiles-29522415-k4s4t\" (UID: \"64bc792d-4f6e-45f7-948d-5b879a249534\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-k4s4t" Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.401005 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tznq\" (UniqueName: \"kubernetes.io/projected/e34e9fd6-0f58-4f41-a4ae-39f88ff43fac-kube-api-access-7tznq\") pod \"cert-manager-webhook-687f57d79b-8grm4\" (UID: \"e34e9fd6-0f58-4f41-a4ae-39f88ff43fac\") " pod="cert-manager/cert-manager-webhook-687f57d79b-8grm4" Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.405829 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx8kx\" (UniqueName: \"kubernetes.io/projected/64bc792d-4f6e-45f7-948d-5b879a249534-kube-api-access-gx8kx\") pod \"collect-profiles-29522415-k4s4t\" (UID: \"64bc792d-4f6e-45f7-948d-5b879a249534\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-k4s4t" Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.435775 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-db29w" Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.453161 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-r7xjf" Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.491202 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-8grm4" Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.530802 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-k4s4t" Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.737732 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-r7xjf"] Feb 17 16:15:00 crc kubenswrapper[4672]: W0217 16:15:00.742216 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99c98563_db8b_4849_b06e_6d7bf6a08b69.slice/crio-3059e0cb7a4b793dedde8d457c7a8868cd53862caa65f1ef2a7b5a52e4c899cf WatchSource:0}: Error finding container 3059e0cb7a4b793dedde8d457c7a8868cd53862caa65f1ef2a7b5a52e4c899cf: Status 404 returned error can't find the container with id 3059e0cb7a4b793dedde8d457c7a8868cd53862caa65f1ef2a7b5a52e4c899cf Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.778833 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-8grm4"] Feb 17 16:15:00 crc kubenswrapper[4672]: W0217 16:15:00.781240 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode34e9fd6_0f58_4f41_a4ae_39f88ff43fac.slice/crio-368106b34431d6215105acc01c7bc0d494e1fc8beee61532e84c76f938444bb9 WatchSource:0}: Error finding container 368106b34431d6215105acc01c7bc0d494e1fc8beee61532e84c76f938444bb9: Status 404 returned error can't find the container with id 368106b34431d6215105acc01c7bc0d494e1fc8beee61532e84c76f938444bb9 Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.813271 4672 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522415-k4s4t"] Feb 17 16:15:00 crc kubenswrapper[4672]: W0217 16:15:00.818589 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64bc792d_4f6e_45f7_948d_5b879a249534.slice/crio-ab21d821d0c551d52c4c5194068c67a348607396b75cb0cdd27ade6ae19ee7d1 WatchSource:0}: Error finding container ab21d821d0c551d52c4c5194068c67a348607396b75cb0cdd27ade6ae19ee7d1: Status 404 returned error can't find the container with id ab21d821d0c551d52c4c5194068c67a348607396b75cb0cdd27ade6ae19ee7d1 Feb 17 16:15:00 crc kubenswrapper[4672]: I0217 16:15:00.871189 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-db29w"] Feb 17 16:15:00 crc kubenswrapper[4672]: W0217 16:15:00.876057 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75f66eec_6844_429c_8168_33db45850fd9.slice/crio-e37c4569f46f234da8a7b16a79f5ed6911a358f140d8181b4b3992aaf2179787 WatchSource:0}: Error finding container e37c4569f46f234da8a7b16a79f5ed6911a358f140d8181b4b3992aaf2179787: Status 404 returned error can't find the container with id e37c4569f46f234da8a7b16a79f5ed6911a358f140d8181b4b3992aaf2179787 Feb 17 16:15:01 crc kubenswrapper[4672]: I0217 16:15:01.007465 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-r7xjf" event={"ID":"99c98563-db8b-4849-b06e-6d7bf6a08b69","Type":"ContainerStarted","Data":"3059e0cb7a4b793dedde8d457c7a8868cd53862caa65f1ef2a7b5a52e4c899cf"} Feb 17 16:15:01 crc kubenswrapper[4672]: I0217 16:15:01.009001 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-8grm4" 
event={"ID":"e34e9fd6-0f58-4f41-a4ae-39f88ff43fac","Type":"ContainerStarted","Data":"368106b34431d6215105acc01c7bc0d494e1fc8beee61532e84c76f938444bb9"} Feb 17 16:15:01 crc kubenswrapper[4672]: I0217 16:15:01.010859 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-db29w" event={"ID":"75f66eec-6844-429c-8168-33db45850fd9","Type":"ContainerStarted","Data":"e37c4569f46f234da8a7b16a79f5ed6911a358f140d8181b4b3992aaf2179787"} Feb 17 16:15:01 crc kubenswrapper[4672]: I0217 16:15:01.012473 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-k4s4t" event={"ID":"64bc792d-4f6e-45f7-948d-5b879a249534","Type":"ContainerStarted","Data":"966bd3b083f2fe4aa3a60d5243e3ae215223140b40519d3eb2d4b9d06efbf9f4"} Feb 17 16:15:01 crc kubenswrapper[4672]: I0217 16:15:01.012549 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-k4s4t" event={"ID":"64bc792d-4f6e-45f7-948d-5b879a249534","Type":"ContainerStarted","Data":"ab21d821d0c551d52c4c5194068c67a348607396b75cb0cdd27ade6ae19ee7d1"} Feb 17 16:15:01 crc kubenswrapper[4672]: I0217 16:15:01.045262 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-k4s4t" podStartSLOduration=1.045238959 podStartE2EDuration="1.045238959s" podCreationTimestamp="2026-02-17 16:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:01.040736339 +0000 UTC m=+709.794825081" watchObservedRunningTime="2026-02-17 16:15:01.045238959 +0000 UTC m=+709.799327691" Feb 17 16:15:02 crc kubenswrapper[4672]: I0217 16:15:02.022809 4672 generic.go:334] "Generic (PLEG): container finished" podID="64bc792d-4f6e-45f7-948d-5b879a249534" 
containerID="966bd3b083f2fe4aa3a60d5243e3ae215223140b40519d3eb2d4b9d06efbf9f4" exitCode=0 Feb 17 16:15:02 crc kubenswrapper[4672]: I0217 16:15:02.023185 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-k4s4t" event={"ID":"64bc792d-4f6e-45f7-948d-5b879a249534","Type":"ContainerDied","Data":"966bd3b083f2fe4aa3a60d5243e3ae215223140b40519d3eb2d4b9d06efbf9f4"} Feb 17 16:15:03 crc kubenswrapper[4672]: I0217 16:15:03.451990 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-k4s4t" Feb 17 16:15:03 crc kubenswrapper[4672]: I0217 16:15:03.530316 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx8kx\" (UniqueName: \"kubernetes.io/projected/64bc792d-4f6e-45f7-948d-5b879a249534-kube-api-access-gx8kx\") pod \"64bc792d-4f6e-45f7-948d-5b879a249534\" (UID: \"64bc792d-4f6e-45f7-948d-5b879a249534\") " Feb 17 16:15:03 crc kubenswrapper[4672]: I0217 16:15:03.530723 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64bc792d-4f6e-45f7-948d-5b879a249534-config-volume\") pod \"64bc792d-4f6e-45f7-948d-5b879a249534\" (UID: \"64bc792d-4f6e-45f7-948d-5b879a249534\") " Feb 17 16:15:03 crc kubenswrapper[4672]: I0217 16:15:03.530807 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/64bc792d-4f6e-45f7-948d-5b879a249534-secret-volume\") pod \"64bc792d-4f6e-45f7-948d-5b879a249534\" (UID: \"64bc792d-4f6e-45f7-948d-5b879a249534\") " Feb 17 16:15:03 crc kubenswrapper[4672]: I0217 16:15:03.531189 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64bc792d-4f6e-45f7-948d-5b879a249534-config-volume" (OuterVolumeSpecName: "config-volume") pod 
"64bc792d-4f6e-45f7-948d-5b879a249534" (UID: "64bc792d-4f6e-45f7-948d-5b879a249534"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:03 crc kubenswrapper[4672]: I0217 16:15:03.538789 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64bc792d-4f6e-45f7-948d-5b879a249534-kube-api-access-gx8kx" (OuterVolumeSpecName: "kube-api-access-gx8kx") pod "64bc792d-4f6e-45f7-948d-5b879a249534" (UID: "64bc792d-4f6e-45f7-948d-5b879a249534"). InnerVolumeSpecName "kube-api-access-gx8kx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:03 crc kubenswrapper[4672]: I0217 16:15:03.541301 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64bc792d-4f6e-45f7-948d-5b879a249534-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "64bc792d-4f6e-45f7-948d-5b879a249534" (UID: "64bc792d-4f6e-45f7-948d-5b879a249534"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:03 crc kubenswrapper[4672]: I0217 16:15:03.632344 4672 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/64bc792d-4f6e-45f7-948d-5b879a249534-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:03 crc kubenswrapper[4672]: I0217 16:15:03.632381 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gx8kx\" (UniqueName: \"kubernetes.io/projected/64bc792d-4f6e-45f7-948d-5b879a249534-kube-api-access-gx8kx\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:03 crc kubenswrapper[4672]: I0217 16:15:03.632395 4672 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64bc792d-4f6e-45f7-948d-5b879a249534-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:04 crc kubenswrapper[4672]: I0217 16:15:04.035118 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-k4s4t" event={"ID":"64bc792d-4f6e-45f7-948d-5b879a249534","Type":"ContainerDied","Data":"ab21d821d0c551d52c4c5194068c67a348607396b75cb0cdd27ade6ae19ee7d1"} Feb 17 16:15:04 crc kubenswrapper[4672]: I0217 16:15:04.035166 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab21d821d0c551d52c4c5194068c67a348607396b75cb0cdd27ade6ae19ee7d1" Feb 17 16:15:04 crc kubenswrapper[4672]: I0217 16:15:04.035224 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-k4s4t" Feb 17 16:15:04 crc kubenswrapper[4672]: I0217 16:15:04.420948 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-f86bw" Feb 17 16:15:05 crc kubenswrapper[4672]: I0217 16:15:05.044297 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-db29w" event={"ID":"75f66eec-6844-429c-8168-33db45850fd9","Type":"ContainerStarted","Data":"c9907c8c3f46c7bc2af4d508cda03b25561f49716c38e8e65792c8c485243430"} Feb 17 16:15:05 crc kubenswrapper[4672]: I0217 16:15:05.048144 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-r7xjf" event={"ID":"99c98563-db8b-4849-b06e-6d7bf6a08b69","Type":"ContainerStarted","Data":"f52bbb2b95e0d294d6acbcd38ebce3f002734dd99beb643cb88500dff3dd4315"} Feb 17 16:15:05 crc kubenswrapper[4672]: I0217 16:15:05.089377 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-db29w" podStartSLOduration=1.804788908 podStartE2EDuration="5.089357832s" podCreationTimestamp="2026-02-17 16:15:00 +0000 UTC" firstStartedPulling="2026-02-17 16:15:00.878734325 +0000 UTC m=+709.632823057" lastFinishedPulling="2026-02-17 16:15:04.163303229 +0000 UTC m=+712.917391981" observedRunningTime="2026-02-17 16:15:05.065651511 +0000 UTC m=+713.819740283" watchObservedRunningTime="2026-02-17 16:15:05.089357832 +0000 UTC m=+713.843446564" Feb 17 16:15:05 crc kubenswrapper[4672]: I0217 16:15:05.089605 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-r7xjf" podStartSLOduration=1.726703808 podStartE2EDuration="5.089599859s" podCreationTimestamp="2026-02-17 16:15:00 +0000 UTC" firstStartedPulling="2026-02-17 16:15:00.745011203 +0000 UTC m=+709.499099945" lastFinishedPulling="2026-02-17 
16:15:04.107907264 +0000 UTC m=+712.861995996" observedRunningTime="2026-02-17 16:15:05.089057194 +0000 UTC m=+713.843145966" watchObservedRunningTime="2026-02-17 16:15:05.089599859 +0000 UTC m=+713.843688601" Feb 17 16:15:06 crc kubenswrapper[4672]: I0217 16:15:06.055469 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-8grm4" event={"ID":"e34e9fd6-0f58-4f41-a4ae-39f88ff43fac","Type":"ContainerStarted","Data":"325c6de50ad835bc8dcb06942a387ff40cd611a88b9b8d06ef533fa1c1fd14b3"} Feb 17 16:15:06 crc kubenswrapper[4672]: I0217 16:15:06.056406 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-8grm4" Feb 17 16:15:06 crc kubenswrapper[4672]: I0217 16:15:06.074540 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-8grm4" podStartSLOduration=1.349014522 podStartE2EDuration="6.074501572s" podCreationTimestamp="2026-02-17 16:15:00 +0000 UTC" firstStartedPulling="2026-02-17 16:15:00.784008832 +0000 UTC m=+709.538097564" lastFinishedPulling="2026-02-17 16:15:05.509495882 +0000 UTC m=+714.263584614" observedRunningTime="2026-02-17 16:15:06.072312413 +0000 UTC m=+714.826401145" watchObservedRunningTime="2026-02-17 16:15:06.074501572 +0000 UTC m=+714.828590304" Feb 17 16:15:10 crc kubenswrapper[4672]: I0217 16:15:10.494768 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-8grm4" Feb 17 16:15:27 crc kubenswrapper[4672]: I0217 16:15:27.566059 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:15:27 crc kubenswrapper[4672]: I0217 16:15:27.567688 4672 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:15:33 crc kubenswrapper[4672]: I0217 16:15:33.756391 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651pp5ks"] Feb 17 16:15:33 crc kubenswrapper[4672]: E0217 16:15:33.758447 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64bc792d-4f6e-45f7-948d-5b879a249534" containerName="collect-profiles" Feb 17 16:15:33 crc kubenswrapper[4672]: I0217 16:15:33.758482 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="64bc792d-4f6e-45f7-948d-5b879a249534" containerName="collect-profiles" Feb 17 16:15:33 crc kubenswrapper[4672]: I0217 16:15:33.758806 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="64bc792d-4f6e-45f7-948d-5b879a249534" containerName="collect-profiles" Feb 17 16:15:33 crc kubenswrapper[4672]: I0217 16:15:33.760452 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651pp5ks" Feb 17 16:15:33 crc kubenswrapper[4672]: I0217 16:15:33.762885 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 17 16:15:33 crc kubenswrapper[4672]: I0217 16:15:33.767374 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651pp5ks"] Feb 17 16:15:33 crc kubenswrapper[4672]: I0217 16:15:33.847478 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvrsn\" (UniqueName: \"kubernetes.io/projected/d7125f42-e466-4a0e-af16-ed09a82f07be-kube-api-access-cvrsn\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651pp5ks\" (UID: \"d7125f42-e466-4a0e-af16-ed09a82f07be\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651pp5ks" Feb 17 16:15:33 crc kubenswrapper[4672]: I0217 16:15:33.847563 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d7125f42-e466-4a0e-af16-ed09a82f07be-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651pp5ks\" (UID: \"d7125f42-e466-4a0e-af16-ed09a82f07be\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651pp5ks" Feb 17 16:15:33 crc kubenswrapper[4672]: I0217 16:15:33.847600 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d7125f42-e466-4a0e-af16-ed09a82f07be-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651pp5ks\" (UID: \"d7125f42-e466-4a0e-af16-ed09a82f07be\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651pp5ks" Feb 17 16:15:33 crc kubenswrapper[4672]: 
I0217 16:15:33.948257 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvrsn\" (UniqueName: \"kubernetes.io/projected/d7125f42-e466-4a0e-af16-ed09a82f07be-kube-api-access-cvrsn\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651pp5ks\" (UID: \"d7125f42-e466-4a0e-af16-ed09a82f07be\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651pp5ks"
Feb 17 16:15:33 crc kubenswrapper[4672]: I0217 16:15:33.948298 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d7125f42-e466-4a0e-af16-ed09a82f07be-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651pp5ks\" (UID: \"d7125f42-e466-4a0e-af16-ed09a82f07be\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651pp5ks"
Feb 17 16:15:33 crc kubenswrapper[4672]: I0217 16:15:33.948322 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d7125f42-e466-4a0e-af16-ed09a82f07be-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651pp5ks\" (UID: \"d7125f42-e466-4a0e-af16-ed09a82f07be\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651pp5ks"
Feb 17 16:15:33 crc kubenswrapper[4672]: I0217 16:15:33.948768 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d7125f42-e466-4a0e-af16-ed09a82f07be-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651pp5ks\" (UID: \"d7125f42-e466-4a0e-af16-ed09a82f07be\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651pp5ks"
Feb 17 16:15:33 crc kubenswrapper[4672]: I0217 16:15:33.948818 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d7125f42-e466-4a0e-af16-ed09a82f07be-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651pp5ks\" (UID: \"d7125f42-e466-4a0e-af16-ed09a82f07be\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651pp5ks"
Feb 17 16:15:33 crc kubenswrapper[4672]: I0217 16:15:33.968812 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvrsn\" (UniqueName: \"kubernetes.io/projected/d7125f42-e466-4a0e-af16-ed09a82f07be-kube-api-access-cvrsn\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651pp5ks\" (UID: \"d7125f42-e466-4a0e-af16-ed09a82f07be\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651pp5ks"
Feb 17 16:15:34 crc kubenswrapper[4672]: I0217 16:15:34.106056 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651pp5ks"
Feb 17 16:15:34 crc kubenswrapper[4672]: I0217 16:15:34.403579 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651pp5ks"]
Feb 17 16:15:35 crc kubenswrapper[4672]: I0217 16:15:35.247399 4672 generic.go:334] "Generic (PLEG): container finished" podID="d7125f42-e466-4a0e-af16-ed09a82f07be" containerID="0d3e84f17c820a3cb3d340e707c429d73b670d1b271bbd57dc037f789d45cc85" exitCode=0
Feb 17 16:15:35 crc kubenswrapper[4672]: I0217 16:15:35.247450 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651pp5ks" event={"ID":"d7125f42-e466-4a0e-af16-ed09a82f07be","Type":"ContainerDied","Data":"0d3e84f17c820a3cb3d340e707c429d73b670d1b271bbd57dc037f789d45cc85"}
Feb 17 16:15:35 crc kubenswrapper[4672]: I0217 16:15:35.247475 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651pp5ks" event={"ID":"d7125f42-e466-4a0e-af16-ed09a82f07be","Type":"ContainerStarted","Data":"c739e001b508c47aa88382e106db61b7f094b43e68b69973330443e754aa8cee"}
Feb 17 16:15:35 crc kubenswrapper[4672]: I0217 16:15:35.744895 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"]
Feb 17 16:15:35 crc kubenswrapper[4672]: I0217 16:15:35.746209 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio"
Feb 17 16:15:35 crc kubenswrapper[4672]: I0217 16:15:35.749177 4672 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-lhggt"
Feb 17 16:15:35 crc kubenswrapper[4672]: I0217 16:15:35.750013 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt"
Feb 17 16:15:35 crc kubenswrapper[4672]: I0217 16:15:35.750962 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt"
Feb 17 16:15:35 crc kubenswrapper[4672]: I0217 16:15:35.758685 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"]
Feb 17 16:15:35 crc kubenswrapper[4672]: I0217 16:15:35.787804 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv6tc\" (UniqueName: \"kubernetes.io/projected/2ebea714-f8cb-4129-b9e1-643f76e48ced-kube-api-access-gv6tc\") pod \"minio\" (UID: \"2ebea714-f8cb-4129-b9e1-643f76e48ced\") " pod="minio-dev/minio"
Feb 17 16:15:35 crc kubenswrapper[4672]: I0217 16:15:35.787963 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-71224500-50a6-45b9-987f-6104264826f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71224500-50a6-45b9-987f-6104264826f4\") pod \"minio\" (UID: \"2ebea714-f8cb-4129-b9e1-643f76e48ced\") " pod="minio-dev/minio"
Feb 17 16:15:35 crc kubenswrapper[4672]: I0217 16:15:35.889470 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-71224500-50a6-45b9-987f-6104264826f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71224500-50a6-45b9-987f-6104264826f4\") pod \"minio\" (UID: \"2ebea714-f8cb-4129-b9e1-643f76e48ced\") " pod="minio-dev/minio"
Feb 17 16:15:35 crc kubenswrapper[4672]: I0217 16:15:35.889961 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv6tc\" (UniqueName: \"kubernetes.io/projected/2ebea714-f8cb-4129-b9e1-643f76e48ced-kube-api-access-gv6tc\") pod \"minio\" (UID: \"2ebea714-f8cb-4129-b9e1-643f76e48ced\") " pod="minio-dev/minio"
Feb 17 16:15:35 crc kubenswrapper[4672]: I0217 16:15:35.894002 4672 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 17 16:15:35 crc kubenswrapper[4672]: I0217 16:15:35.894066 4672 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-71224500-50a6-45b9-987f-6104264826f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71224500-50a6-45b9-987f-6104264826f4\") pod \"minio\" (UID: \"2ebea714-f8cb-4129-b9e1-643f76e48ced\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3d47c403e4b6e150e2095636b0a709eff9dcce8a6ce818da751dbf1e2200caa4/globalmount\"" pod="minio-dev/minio"
Feb 17 16:15:35 crc kubenswrapper[4672]: I0217 16:15:35.923110 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv6tc\" (UniqueName: \"kubernetes.io/projected/2ebea714-f8cb-4129-b9e1-643f76e48ced-kube-api-access-gv6tc\") pod \"minio\" (UID: \"2ebea714-f8cb-4129-b9e1-643f76e48ced\") " pod="minio-dev/minio"
Feb 17 16:15:35 crc kubenswrapper[4672]: I0217 16:15:35.935126 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-71224500-50a6-45b9-987f-6104264826f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71224500-50a6-45b9-987f-6104264826f4\") pod \"minio\" (UID: \"2ebea714-f8cb-4129-b9e1-643f76e48ced\") " pod="minio-dev/minio"
Feb 17 16:15:36 crc kubenswrapper[4672]: I0217 16:15:36.081380 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio"
Feb 17 16:15:36 crc kubenswrapper[4672]: I0217 16:15:36.393280 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"]
Feb 17 16:15:37 crc kubenswrapper[4672]: I0217 16:15:37.267445 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"2ebea714-f8cb-4129-b9e1-643f76e48ced","Type":"ContainerStarted","Data":"480c00226c0d8d5ad88b4653d7de8da54212aa088422ed78dd1bb4ee4c490b08"}
Feb 17 16:15:38 crc kubenswrapper[4672]: I0217 16:15:38.279339 4672 generic.go:334] "Generic (PLEG): container finished" podID="d7125f42-e466-4a0e-af16-ed09a82f07be" containerID="99e001c82e0793b3ccecea8b65365fc6418d9d0bb103cb94c935e604864e0520" exitCode=0
Feb 17 16:15:38 crc kubenswrapper[4672]: I0217 16:15:38.279544 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651pp5ks" event={"ID":"d7125f42-e466-4a0e-af16-ed09a82f07be","Type":"ContainerDied","Data":"99e001c82e0793b3ccecea8b65365fc6418d9d0bb103cb94c935e604864e0520"}
Feb 17 16:15:40 crc kubenswrapper[4672]: I0217 16:15:40.299648 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651pp5ks" event={"ID":"d7125f42-e466-4a0e-af16-ed09a82f07be","Type":"ContainerStarted","Data":"f22ce4ef1fb43a7ab5fa359682c403966221758b22379ea6596ccc0d5a7a847b"}
Feb 17 16:15:40 crc kubenswrapper[4672]: I0217 16:15:40.326222 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651pp5ks" podStartSLOduration=5.16993508 podStartE2EDuration="7.326192795s" podCreationTimestamp="2026-02-17 16:15:33 +0000 UTC" firstStartedPulling="2026-02-17 16:15:35.24934119 +0000 UTC m=+744.003429922" lastFinishedPulling="2026-02-17 16:15:37.405598875 +0000 UTC m=+746.159687637" observedRunningTime="2026-02-17 16:15:40.323021941 +0000 UTC m=+749.077110733" watchObservedRunningTime="2026-02-17 16:15:40.326192795 +0000 UTC m=+749.080281617"
Feb 17 16:15:41 crc kubenswrapper[4672]: I0217 16:15:41.308926 4672 generic.go:334] "Generic (PLEG): container finished" podID="d7125f42-e466-4a0e-af16-ed09a82f07be" containerID="f22ce4ef1fb43a7ab5fa359682c403966221758b22379ea6596ccc0d5a7a847b" exitCode=0
Feb 17 16:15:41 crc kubenswrapper[4672]: I0217 16:15:41.309053 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651pp5ks" event={"ID":"d7125f42-e466-4a0e-af16-ed09a82f07be","Type":"ContainerDied","Data":"f22ce4ef1fb43a7ab5fa359682c403966221758b22379ea6596ccc0d5a7a847b"}
Feb 17 16:15:41 crc kubenswrapper[4672]: I0217 16:15:41.310778 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"2ebea714-f8cb-4129-b9e1-643f76e48ced","Type":"ContainerStarted","Data":"1d33cb215953648265ea8cd74bbc24cddd874b38b0021b4d5f407a5820926d98"}
Feb 17 16:15:41 crc kubenswrapper[4672]: I0217 16:15:41.352018 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.366134959 podStartE2EDuration="8.351991929s" podCreationTimestamp="2026-02-17 16:15:33 +0000 UTC" firstStartedPulling="2026-02-17 16:15:36.405231337 +0000 UTC m=+745.159320109" lastFinishedPulling="2026-02-17 16:15:40.391088347 +0000 UTC m=+749.145177079" observedRunningTime="2026-02-17 16:15:41.346540954 +0000 UTC m=+750.100629686" watchObservedRunningTime="2026-02-17 16:15:41.351991929 +0000 UTC m=+750.106080691"
Feb 17 16:15:42 crc kubenswrapper[4672]: I0217 16:15:42.587203 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651pp5ks"
Feb 17 16:15:42 crc kubenswrapper[4672]: I0217 16:15:42.686571 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d7125f42-e466-4a0e-af16-ed09a82f07be-bundle\") pod \"d7125f42-e466-4a0e-af16-ed09a82f07be\" (UID: \"d7125f42-e466-4a0e-af16-ed09a82f07be\") "
Feb 17 16:15:42 crc kubenswrapper[4672]: I0217 16:15:42.686939 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d7125f42-e466-4a0e-af16-ed09a82f07be-util\") pod \"d7125f42-e466-4a0e-af16-ed09a82f07be\" (UID: \"d7125f42-e466-4a0e-af16-ed09a82f07be\") "
Feb 17 16:15:42 crc kubenswrapper[4672]: I0217 16:15:42.686970 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvrsn\" (UniqueName: \"kubernetes.io/projected/d7125f42-e466-4a0e-af16-ed09a82f07be-kube-api-access-cvrsn\") pod \"d7125f42-e466-4a0e-af16-ed09a82f07be\" (UID: \"d7125f42-e466-4a0e-af16-ed09a82f07be\") "
Feb 17 16:15:42 crc kubenswrapper[4672]: I0217 16:15:42.688271 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7125f42-e466-4a0e-af16-ed09a82f07be-bundle" (OuterVolumeSpecName: "bundle") pod "d7125f42-e466-4a0e-af16-ed09a82f07be" (UID: "d7125f42-e466-4a0e-af16-ed09a82f07be"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:15:42 crc kubenswrapper[4672]: I0217 16:15:42.695673 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7125f42-e466-4a0e-af16-ed09a82f07be-kube-api-access-cvrsn" (OuterVolumeSpecName: "kube-api-access-cvrsn") pod "d7125f42-e466-4a0e-af16-ed09a82f07be" (UID: "d7125f42-e466-4a0e-af16-ed09a82f07be"). InnerVolumeSpecName "kube-api-access-cvrsn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:15:42 crc kubenswrapper[4672]: I0217 16:15:42.706382 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7125f42-e466-4a0e-af16-ed09a82f07be-util" (OuterVolumeSpecName: "util") pod "d7125f42-e466-4a0e-af16-ed09a82f07be" (UID: "d7125f42-e466-4a0e-af16-ed09a82f07be"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:15:42 crc kubenswrapper[4672]: I0217 16:15:42.788710 4672 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d7125f42-e466-4a0e-af16-ed09a82f07be-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:42 crc kubenswrapper[4672]: I0217 16:15:42.788965 4672 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d7125f42-e466-4a0e-af16-ed09a82f07be-util\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:42 crc kubenswrapper[4672]: I0217 16:15:42.789044 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvrsn\" (UniqueName: \"kubernetes.io/projected/d7125f42-e466-4a0e-af16-ed09a82f07be-kube-api-access-cvrsn\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:43 crc kubenswrapper[4672]: I0217 16:15:43.325833 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651pp5ks" event={"ID":"d7125f42-e466-4a0e-af16-ed09a82f07be","Type":"ContainerDied","Data":"c739e001b508c47aa88382e106db61b7f094b43e68b69973330443e754aa8cee"}
Feb 17 16:15:43 crc kubenswrapper[4672]: I0217 16:15:43.325894 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c739e001b508c47aa88382e106db61b7f094b43e68b69973330443e754aa8cee"
Feb 17 16:15:43 crc kubenswrapper[4672]: I0217 16:15:43.325925 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651pp5ks"
Feb 17 16:15:46 crc kubenswrapper[4672]: I0217 16:15:46.107012 4672 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 17 16:15:48 crc kubenswrapper[4672]: I0217 16:15:48.566441 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5698c87bb7-6twv2"]
Feb 17 16:15:48 crc kubenswrapper[4672]: E0217 16:15:48.567014 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7125f42-e466-4a0e-af16-ed09a82f07be" containerName="extract"
Feb 17 16:15:48 crc kubenswrapper[4672]: I0217 16:15:48.567028 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7125f42-e466-4a0e-af16-ed09a82f07be" containerName="extract"
Feb 17 16:15:48 crc kubenswrapper[4672]: E0217 16:15:48.567039 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7125f42-e466-4a0e-af16-ed09a82f07be" containerName="util"
Feb 17 16:15:48 crc kubenswrapper[4672]: I0217 16:15:48.567045 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7125f42-e466-4a0e-af16-ed09a82f07be" containerName="util"
Feb 17 16:15:48 crc kubenswrapper[4672]: E0217 16:15:48.567052 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7125f42-e466-4a0e-af16-ed09a82f07be" containerName="pull"
Feb 17 16:15:48 crc kubenswrapper[4672]: I0217 16:15:48.567057 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7125f42-e466-4a0e-af16-ed09a82f07be" containerName="pull"
Feb 17 16:15:48 crc kubenswrapper[4672]: I0217 16:15:48.567156 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7125f42-e466-4a0e-af16-ed09a82f07be" containerName="extract"
Feb 17 16:15:48 crc kubenswrapper[4672]: I0217 16:15:48.567676 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-5698c87bb7-6twv2"
Feb 17 16:15:48 crc kubenswrapper[4672]: I0217 16:15:48.577052 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert"
Feb 17 16:15:48 crc kubenswrapper[4672]: I0217 16:15:48.577065 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt"
Feb 17 16:15:48 crc kubenswrapper[4672]: I0217 16:15:48.577497 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-g9pvh"
Feb 17 16:15:48 crc kubenswrapper[4672]: I0217 16:15:48.585235 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config"
Feb 17 16:15:48 crc kubenswrapper[4672]: I0217 16:15:48.591233 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt"
Feb 17 16:15:48 crc kubenswrapper[4672]: I0217 16:15:48.599304 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/de56b787-401f-4eea-b171-484eb364fbe8-manager-config\") pod \"loki-operator-controller-manager-5698c87bb7-6twv2\" (UID: \"de56b787-401f-4eea-b171-484eb364fbe8\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5698c87bb7-6twv2"
Feb 17 16:15:48 crc kubenswrapper[4672]: I0217 16:15:48.599371 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/de56b787-401f-4eea-b171-484eb364fbe8-webhook-cert\") pod \"loki-operator-controller-manager-5698c87bb7-6twv2\" (UID: \"de56b787-401f-4eea-b171-484eb364fbe8\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5698c87bb7-6twv2"
Feb 17 16:15:48 crc kubenswrapper[4672]: I0217 16:15:48.599398 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/de56b787-401f-4eea-b171-484eb364fbe8-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5698c87bb7-6twv2\" (UID: \"de56b787-401f-4eea-b171-484eb364fbe8\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5698c87bb7-6twv2"
Feb 17 16:15:48 crc kubenswrapper[4672]: I0217 16:15:48.599568 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/de56b787-401f-4eea-b171-484eb364fbe8-apiservice-cert\") pod \"loki-operator-controller-manager-5698c87bb7-6twv2\" (UID: \"de56b787-401f-4eea-b171-484eb364fbe8\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5698c87bb7-6twv2"
Feb 17 16:15:48 crc kubenswrapper[4672]: I0217 16:15:48.599795 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb4pk\" (UniqueName: \"kubernetes.io/projected/de56b787-401f-4eea-b171-484eb364fbe8-kube-api-access-jb4pk\") pod \"loki-operator-controller-manager-5698c87bb7-6twv2\" (UID: \"de56b787-401f-4eea-b171-484eb364fbe8\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5698c87bb7-6twv2"
Feb 17 16:15:48 crc kubenswrapper[4672]: I0217 16:15:48.604413 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics"
Feb 17 16:15:48 crc kubenswrapper[4672]: I0217 16:15:48.609214 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5698c87bb7-6twv2"]
Feb 17 16:15:48 crc kubenswrapper[4672]: I0217 16:15:48.700412 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/de56b787-401f-4eea-b171-484eb364fbe8-webhook-cert\") pod \"loki-operator-controller-manager-5698c87bb7-6twv2\" (UID: \"de56b787-401f-4eea-b171-484eb364fbe8\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5698c87bb7-6twv2"
Feb 17 16:15:48 crc kubenswrapper[4672]: I0217 16:15:48.700465 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/de56b787-401f-4eea-b171-484eb364fbe8-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5698c87bb7-6twv2\" (UID: \"de56b787-401f-4eea-b171-484eb364fbe8\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5698c87bb7-6twv2"
Feb 17 16:15:48 crc kubenswrapper[4672]: I0217 16:15:48.700491 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/de56b787-401f-4eea-b171-484eb364fbe8-apiservice-cert\") pod \"loki-operator-controller-manager-5698c87bb7-6twv2\" (UID: \"de56b787-401f-4eea-b171-484eb364fbe8\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5698c87bb7-6twv2"
Feb 17 16:15:48 crc kubenswrapper[4672]: I0217 16:15:48.700556 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb4pk\" (UniqueName: \"kubernetes.io/projected/de56b787-401f-4eea-b171-484eb364fbe8-kube-api-access-jb4pk\") pod \"loki-operator-controller-manager-5698c87bb7-6twv2\" (UID: \"de56b787-401f-4eea-b171-484eb364fbe8\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5698c87bb7-6twv2"
Feb 17 16:15:48 crc kubenswrapper[4672]: I0217 16:15:48.700594 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/de56b787-401f-4eea-b171-484eb364fbe8-manager-config\") pod \"loki-operator-controller-manager-5698c87bb7-6twv2\" (UID: \"de56b787-401f-4eea-b171-484eb364fbe8\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5698c87bb7-6twv2"
Feb 17 16:15:48 crc kubenswrapper[4672]: I0217 16:15:48.701555 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/de56b787-401f-4eea-b171-484eb364fbe8-manager-config\") pod \"loki-operator-controller-manager-5698c87bb7-6twv2\" (UID: \"de56b787-401f-4eea-b171-484eb364fbe8\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5698c87bb7-6twv2"
Feb 17 16:15:48 crc kubenswrapper[4672]: I0217 16:15:48.706100 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/de56b787-401f-4eea-b171-484eb364fbe8-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5698c87bb7-6twv2\" (UID: \"de56b787-401f-4eea-b171-484eb364fbe8\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5698c87bb7-6twv2"
Feb 17 16:15:48 crc kubenswrapper[4672]: I0217 16:15:48.710035 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/de56b787-401f-4eea-b171-484eb364fbe8-webhook-cert\") pod \"loki-operator-controller-manager-5698c87bb7-6twv2\" (UID: \"de56b787-401f-4eea-b171-484eb364fbe8\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5698c87bb7-6twv2"
Feb 17 16:15:48 crc kubenswrapper[4672]: I0217 16:15:48.724088 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/de56b787-401f-4eea-b171-484eb364fbe8-apiservice-cert\") pod \"loki-operator-controller-manager-5698c87bb7-6twv2\" (UID: \"de56b787-401f-4eea-b171-484eb364fbe8\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5698c87bb7-6twv2"
Feb 17 16:15:48 crc kubenswrapper[4672]: I0217 16:15:48.738580 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb4pk\" (UniqueName: \"kubernetes.io/projected/de56b787-401f-4eea-b171-484eb364fbe8-kube-api-access-jb4pk\") pod \"loki-operator-controller-manager-5698c87bb7-6twv2\" (UID: \"de56b787-401f-4eea-b171-484eb364fbe8\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5698c87bb7-6twv2"
Feb 17 16:15:48 crc kubenswrapper[4672]: I0217 16:15:48.882100 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-5698c87bb7-6twv2"
Feb 17 16:15:49 crc kubenswrapper[4672]: I0217 16:15:49.080982 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5698c87bb7-6twv2"]
Feb 17 16:15:49 crc kubenswrapper[4672]: W0217 16:15:49.089139 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde56b787_401f_4eea_b171_484eb364fbe8.slice/crio-1546a00a3da5d0b691e8131519944b749e9b4ecef643df515586afd8263ee6da WatchSource:0}: Error finding container 1546a00a3da5d0b691e8131519944b749e9b4ecef643df515586afd8263ee6da: Status 404 returned error can't find the container with id 1546a00a3da5d0b691e8131519944b749e9b4ecef643df515586afd8263ee6da
Feb 17 16:15:49 crc kubenswrapper[4672]: I0217 16:15:49.363302 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5698c87bb7-6twv2" event={"ID":"de56b787-401f-4eea-b171-484eb364fbe8","Type":"ContainerStarted","Data":"1546a00a3da5d0b691e8131519944b749e9b4ecef643df515586afd8263ee6da"}
Feb 17 16:15:51 crc kubenswrapper[4672]: I0217 16:15:51.171365 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cslcd"]
Feb 17 16:15:51 crc kubenswrapper[4672]: I0217 16:15:51.172580 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cslcd"
Feb 17 16:15:51 crc kubenswrapper[4672]: I0217 16:15:51.180627 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cslcd"]
Feb 17 16:15:51 crc kubenswrapper[4672]: I0217 16:15:51.230985 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d30a35f-aaac-40fe-8d13-c25c0a775e8e-catalog-content\") pod \"redhat-operators-cslcd\" (UID: \"5d30a35f-aaac-40fe-8d13-c25c0a775e8e\") " pod="openshift-marketplace/redhat-operators-cslcd"
Feb 17 16:15:51 crc kubenswrapper[4672]: I0217 16:15:51.231033 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d30a35f-aaac-40fe-8d13-c25c0a775e8e-utilities\") pod \"redhat-operators-cslcd\" (UID: \"5d30a35f-aaac-40fe-8d13-c25c0a775e8e\") " pod="openshift-marketplace/redhat-operators-cslcd"
Feb 17 16:15:51 crc kubenswrapper[4672]: I0217 16:15:51.231057 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmgd8\" (UniqueName: \"kubernetes.io/projected/5d30a35f-aaac-40fe-8d13-c25c0a775e8e-kube-api-access-hmgd8\") pod \"redhat-operators-cslcd\" (UID: \"5d30a35f-aaac-40fe-8d13-c25c0a775e8e\") " pod="openshift-marketplace/redhat-operators-cslcd"
Feb 17 16:15:51 crc kubenswrapper[4672]: I0217 16:15:51.331391 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d30a35f-aaac-40fe-8d13-c25c0a775e8e-catalog-content\") pod \"redhat-operators-cslcd\" (UID: \"5d30a35f-aaac-40fe-8d13-c25c0a775e8e\") " pod="openshift-marketplace/redhat-operators-cslcd"
Feb 17 16:15:51 crc kubenswrapper[4672]: I0217 16:15:51.331439 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d30a35f-aaac-40fe-8d13-c25c0a775e8e-utilities\") pod \"redhat-operators-cslcd\" (UID: \"5d30a35f-aaac-40fe-8d13-c25c0a775e8e\") " pod="openshift-marketplace/redhat-operators-cslcd"
Feb 17 16:15:51 crc kubenswrapper[4672]: I0217 16:15:51.331464 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmgd8\" (UniqueName: \"kubernetes.io/projected/5d30a35f-aaac-40fe-8d13-c25c0a775e8e-kube-api-access-hmgd8\") pod \"redhat-operators-cslcd\" (UID: \"5d30a35f-aaac-40fe-8d13-c25c0a775e8e\") " pod="openshift-marketplace/redhat-operators-cslcd"
Feb 17 16:15:51 crc kubenswrapper[4672]: I0217 16:15:51.332134 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d30a35f-aaac-40fe-8d13-c25c0a775e8e-catalog-content\") pod \"redhat-operators-cslcd\" (UID: \"5d30a35f-aaac-40fe-8d13-c25c0a775e8e\") " pod="openshift-marketplace/redhat-operators-cslcd"
Feb 17 16:15:51 crc kubenswrapper[4672]: I0217 16:15:51.332341 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d30a35f-aaac-40fe-8d13-c25c0a775e8e-utilities\") pod \"redhat-operators-cslcd\" (UID: \"5d30a35f-aaac-40fe-8d13-c25c0a775e8e\") " pod="openshift-marketplace/redhat-operators-cslcd"
Feb 17 16:15:51 crc kubenswrapper[4672]: I0217 16:15:51.352340 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmgd8\" (UniqueName: \"kubernetes.io/projected/5d30a35f-aaac-40fe-8d13-c25c0a775e8e-kube-api-access-hmgd8\") pod \"redhat-operators-cslcd\" (UID: \"5d30a35f-aaac-40fe-8d13-c25c0a775e8e\") " pod="openshift-marketplace/redhat-operators-cslcd"
Feb 17 16:15:51 crc kubenswrapper[4672]: I0217 16:15:51.503619 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cslcd"
Feb 17 16:15:53 crc kubenswrapper[4672]: I0217 16:15:53.922811 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cslcd"]
Feb 17 16:15:54 crc kubenswrapper[4672]: I0217 16:15:54.391790 4672 generic.go:334] "Generic (PLEG): container finished" podID="5d30a35f-aaac-40fe-8d13-c25c0a775e8e" containerID="be9247f848c6f49d122a8f655ad4bce22705b7b6b35c93a62312a4a08fd67f70" exitCode=0
Feb 17 16:15:54 crc kubenswrapper[4672]: I0217 16:15:54.391893 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cslcd" event={"ID":"5d30a35f-aaac-40fe-8d13-c25c0a775e8e","Type":"ContainerDied","Data":"be9247f848c6f49d122a8f655ad4bce22705b7b6b35c93a62312a4a08fd67f70"}
Feb 17 16:15:54 crc kubenswrapper[4672]: I0217 16:15:54.391954 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cslcd" event={"ID":"5d30a35f-aaac-40fe-8d13-c25c0a775e8e","Type":"ContainerStarted","Data":"71598c4d250ffd9b7eca15c7a4bea5abf6d3717741d2f7395926f35363324636"}
Feb 17 16:15:54 crc kubenswrapper[4672]: I0217 16:15:54.394818 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5698c87bb7-6twv2" event={"ID":"de56b787-401f-4eea-b171-484eb364fbe8","Type":"ContainerStarted","Data":"abdf67c9c98c24c902ead610c56d02f8f2b7bed71a5d65f9d6d290441d933d74"}
Feb 17 16:15:55 crc kubenswrapper[4672]: I0217 16:15:55.403695 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cslcd" event={"ID":"5d30a35f-aaac-40fe-8d13-c25c0a775e8e","Type":"ContainerStarted","Data":"ae0f8d79b2ef19e7ca1e94d3c6c95a17816ddb141e5c128244ca88e32c8840ad"}
Feb 17 16:15:56 crc kubenswrapper[4672]: I0217 16:15:56.413422 4672 generic.go:334] "Generic (PLEG): container finished" podID="5d30a35f-aaac-40fe-8d13-c25c0a775e8e" containerID="ae0f8d79b2ef19e7ca1e94d3c6c95a17816ddb141e5c128244ca88e32c8840ad" exitCode=0
Feb 17 16:15:56 crc kubenswrapper[4672]: I0217 16:15:56.413481 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cslcd" event={"ID":"5d30a35f-aaac-40fe-8d13-c25c0a775e8e","Type":"ContainerDied","Data":"ae0f8d79b2ef19e7ca1e94d3c6c95a17816ddb141e5c128244ca88e32c8840ad"}
Feb 17 16:15:57 crc kubenswrapper[4672]: I0217 16:15:57.565659 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 16:15:57 crc kubenswrapper[4672]: I0217 16:15:57.565995 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 16:16:00 crc kubenswrapper[4672]: I0217 16:16:00.444263 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5698c87bb7-6twv2" event={"ID":"de56b787-401f-4eea-b171-484eb364fbe8","Type":"ContainerStarted","Data":"650fb7c7d96c7e3ab7e980c1363f00492b574b7ace67fc0b86ab4d61d645a63c"}
Feb 17 16:16:00 crc kubenswrapper[4672]: I0217 16:16:00.444755 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-5698c87bb7-6twv2"
Feb 17 16:16:00 crc kubenswrapper[4672]: I0217 16:16:00.446475 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-5698c87bb7-6twv2"
Feb 17 16:16:00 crc kubenswrapper[4672]: I0217 16:16:00.447998 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cslcd" event={"ID":"5d30a35f-aaac-40fe-8d13-c25c0a775e8e","Type":"ContainerStarted","Data":"6a5da48fdb8a67fd12f3c00b92321d1cfef0ee10e4e18d063fe302caa03f2745"}
Feb 17 16:16:00 crc kubenswrapper[4672]: I0217 16:16:00.501741 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-5698c87bb7-6twv2" podStartSLOduration=1.774100921 podStartE2EDuration="12.501717101s" podCreationTimestamp="2026-02-17 16:15:48 +0000 UTC" firstStartedPulling="2026-02-17 16:15:49.090612992 +0000 UTC m=+757.844701724" lastFinishedPulling="2026-02-17 16:15:59.818229142 +0000 UTC m=+768.572317904" observedRunningTime="2026-02-17 16:16:00.479144492 +0000 UTC m=+769.233233234" watchObservedRunningTime="2026-02-17 16:16:00.501717101 +0000 UTC m=+769.255805833"
Feb 17 16:16:00 crc kubenswrapper[4672]: I0217 16:16:00.526393 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cslcd" podStartSLOduration=4.134312366 podStartE2EDuration="9.526376226s" podCreationTimestamp="2026-02-17 16:15:51 +0000 UTC" firstStartedPulling="2026-02-17 16:15:54.394269356 +0000 UTC m=+763.148358098" lastFinishedPulling="2026-02-17 16:15:59.786333216 +0000 UTC m=+768.540421958" observedRunningTime="2026-02-17 16:16:00.52617409 +0000 UTC m=+769.280262822" watchObservedRunningTime="2026-02-17 16:16:00.526376226 +0000 UTC m=+769.280464958"
Feb 17 16:16:01 crc kubenswrapper[4672]: I0217 16:16:01.504176 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cslcd"
Feb 17 16:16:01 crc kubenswrapper[4672]: I0217 16:16:01.504260 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cslcd"
Feb 17 16:16:02 crc kubenswrapper[4672]: I0217 16:16:02.568078 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cslcd" podUID="5d30a35f-aaac-40fe-8d13-c25c0a775e8e" containerName="registry-server" probeResult="failure" output=<
Feb 17 16:16:02 crc kubenswrapper[4672]: timeout: failed to connect service ":50051" within 1s
Feb 17 16:16:02 crc kubenswrapper[4672]: >
Feb 17 16:16:11 crc kubenswrapper[4672]: I0217 16:16:11.557834 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cslcd"
Feb 17 16:16:11 crc kubenswrapper[4672]: I0217 16:16:11.641967 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cslcd"
Feb 17 16:16:11 crc kubenswrapper[4672]: I0217 16:16:11.807981 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cslcd"]
Feb 17 16:16:13 crc kubenswrapper[4672]: I0217 16:16:13.353967 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cslcd" podUID="5d30a35f-aaac-40fe-8d13-c25c0a775e8e" containerName="registry-server" containerID="cri-o://6a5da48fdb8a67fd12f3c00b92321d1cfef0ee10e4e18d063fe302caa03f2745" gracePeriod=2
Feb 17 16:16:13 crc kubenswrapper[4672]: I0217 16:16:13.838108 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cslcd"
Feb 17 16:16:13 crc kubenswrapper[4672]: I0217 16:16:13.916079 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d30a35f-aaac-40fe-8d13-c25c0a775e8e-catalog-content\") pod \"5d30a35f-aaac-40fe-8d13-c25c0a775e8e\" (UID: \"5d30a35f-aaac-40fe-8d13-c25c0a775e8e\") "
Feb 17 16:16:13 crc kubenswrapper[4672]: I0217 16:16:13.916472 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d30a35f-aaac-40fe-8d13-c25c0a775e8e-utilities\") pod \"5d30a35f-aaac-40fe-8d13-c25c0a775e8e\" (UID: \"5d30a35f-aaac-40fe-8d13-c25c0a775e8e\") "
Feb 17 16:16:13 crc kubenswrapper[4672]: I0217 16:16:13.916791 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmgd8\" (UniqueName: \"kubernetes.io/projected/5d30a35f-aaac-40fe-8d13-c25c0a775e8e-kube-api-access-hmgd8\") pod \"5d30a35f-aaac-40fe-8d13-c25c0a775e8e\" (UID: \"5d30a35f-aaac-40fe-8d13-c25c0a775e8e\") "
Feb 17 16:16:13 crc kubenswrapper[4672]: I0217 16:16:13.918410 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d30a35f-aaac-40fe-8d13-c25c0a775e8e-utilities" (OuterVolumeSpecName: "utilities") pod "5d30a35f-aaac-40fe-8d13-c25c0a775e8e" (UID: "5d30a35f-aaac-40fe-8d13-c25c0a775e8e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:16:13 crc kubenswrapper[4672]: I0217 16:16:13.925327 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d30a35f-aaac-40fe-8d13-c25c0a775e8e-kube-api-access-hmgd8" (OuterVolumeSpecName: "kube-api-access-hmgd8") pod "5d30a35f-aaac-40fe-8d13-c25c0a775e8e" (UID: "5d30a35f-aaac-40fe-8d13-c25c0a775e8e"). InnerVolumeSpecName "kube-api-access-hmgd8".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:14 crc kubenswrapper[4672]: I0217 16:16:14.018378 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmgd8\" (UniqueName: \"kubernetes.io/projected/5d30a35f-aaac-40fe-8d13-c25c0a775e8e-kube-api-access-hmgd8\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:14 crc kubenswrapper[4672]: I0217 16:16:14.018452 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d30a35f-aaac-40fe-8d13-c25c0a775e8e-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:14 crc kubenswrapper[4672]: I0217 16:16:14.127926 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d30a35f-aaac-40fe-8d13-c25c0a775e8e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d30a35f-aaac-40fe-8d13-c25c0a775e8e" (UID: "5d30a35f-aaac-40fe-8d13-c25c0a775e8e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:16:14 crc kubenswrapper[4672]: I0217 16:16:14.217800 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2zm9m"] Feb 17 16:16:14 crc kubenswrapper[4672]: E0217 16:16:14.218051 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d30a35f-aaac-40fe-8d13-c25c0a775e8e" containerName="extract-content" Feb 17 16:16:14 crc kubenswrapper[4672]: I0217 16:16:14.218065 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d30a35f-aaac-40fe-8d13-c25c0a775e8e" containerName="extract-content" Feb 17 16:16:14 crc kubenswrapper[4672]: E0217 16:16:14.218090 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d30a35f-aaac-40fe-8d13-c25c0a775e8e" containerName="extract-utilities" Feb 17 16:16:14 crc kubenswrapper[4672]: I0217 16:16:14.218098 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d30a35f-aaac-40fe-8d13-c25c0a775e8e" 
containerName="extract-utilities" Feb 17 16:16:14 crc kubenswrapper[4672]: E0217 16:16:14.218113 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d30a35f-aaac-40fe-8d13-c25c0a775e8e" containerName="registry-server" Feb 17 16:16:14 crc kubenswrapper[4672]: I0217 16:16:14.218120 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d30a35f-aaac-40fe-8d13-c25c0a775e8e" containerName="registry-server" Feb 17 16:16:14 crc kubenswrapper[4672]: I0217 16:16:14.218266 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d30a35f-aaac-40fe-8d13-c25c0a775e8e" containerName="registry-server" Feb 17 16:16:14 crc kubenswrapper[4672]: I0217 16:16:14.219173 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2zm9m" Feb 17 16:16:14 crc kubenswrapper[4672]: I0217 16:16:14.224599 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d30a35f-aaac-40fe-8d13-c25c0a775e8e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:14 crc kubenswrapper[4672]: I0217 16:16:14.242868 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2zm9m"] Feb 17 16:16:14 crc kubenswrapper[4672]: I0217 16:16:14.325862 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mt94\" (UniqueName: \"kubernetes.io/projected/00d16014-9225-4eb6-8965-6772249b069d-kube-api-access-8mt94\") pod \"redhat-marketplace-2zm9m\" (UID: \"00d16014-9225-4eb6-8965-6772249b069d\") " pod="openshift-marketplace/redhat-marketplace-2zm9m" Feb 17 16:16:14 crc kubenswrapper[4672]: I0217 16:16:14.326009 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00d16014-9225-4eb6-8965-6772249b069d-utilities\") pod \"redhat-marketplace-2zm9m\" (UID: 
\"00d16014-9225-4eb6-8965-6772249b069d\") " pod="openshift-marketplace/redhat-marketplace-2zm9m" Feb 17 16:16:14 crc kubenswrapper[4672]: I0217 16:16:14.326037 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00d16014-9225-4eb6-8965-6772249b069d-catalog-content\") pod \"redhat-marketplace-2zm9m\" (UID: \"00d16014-9225-4eb6-8965-6772249b069d\") " pod="openshift-marketplace/redhat-marketplace-2zm9m" Feb 17 16:16:14 crc kubenswrapper[4672]: I0217 16:16:14.362739 4672 generic.go:334] "Generic (PLEG): container finished" podID="5d30a35f-aaac-40fe-8d13-c25c0a775e8e" containerID="6a5da48fdb8a67fd12f3c00b92321d1cfef0ee10e4e18d063fe302caa03f2745" exitCode=0 Feb 17 16:16:14 crc kubenswrapper[4672]: I0217 16:16:14.362795 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cslcd" event={"ID":"5d30a35f-aaac-40fe-8d13-c25c0a775e8e","Type":"ContainerDied","Data":"6a5da48fdb8a67fd12f3c00b92321d1cfef0ee10e4e18d063fe302caa03f2745"} Feb 17 16:16:14 crc kubenswrapper[4672]: I0217 16:16:14.362831 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cslcd" event={"ID":"5d30a35f-aaac-40fe-8d13-c25c0a775e8e","Type":"ContainerDied","Data":"71598c4d250ffd9b7eca15c7a4bea5abf6d3717741d2f7395926f35363324636"} Feb 17 16:16:14 crc kubenswrapper[4672]: I0217 16:16:14.362862 4672 scope.go:117] "RemoveContainer" containerID="6a5da48fdb8a67fd12f3c00b92321d1cfef0ee10e4e18d063fe302caa03f2745" Feb 17 16:16:14 crc kubenswrapper[4672]: I0217 16:16:14.363064 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cslcd" Feb 17 16:16:14 crc kubenswrapper[4672]: I0217 16:16:14.389658 4672 scope.go:117] "RemoveContainer" containerID="ae0f8d79b2ef19e7ca1e94d3c6c95a17816ddb141e5c128244ca88e32c8840ad" Feb 17 16:16:14 crc kubenswrapper[4672]: I0217 16:16:14.390563 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cslcd"] Feb 17 16:16:14 crc kubenswrapper[4672]: I0217 16:16:14.394725 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cslcd"] Feb 17 16:16:14 crc kubenswrapper[4672]: I0217 16:16:14.408197 4672 scope.go:117] "RemoveContainer" containerID="be9247f848c6f49d122a8f655ad4bce22705b7b6b35c93a62312a4a08fd67f70" Feb 17 16:16:14 crc kubenswrapper[4672]: I0217 16:16:14.421340 4672 scope.go:117] "RemoveContainer" containerID="6a5da48fdb8a67fd12f3c00b92321d1cfef0ee10e4e18d063fe302caa03f2745" Feb 17 16:16:14 crc kubenswrapper[4672]: E0217 16:16:14.424356 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a5da48fdb8a67fd12f3c00b92321d1cfef0ee10e4e18d063fe302caa03f2745\": container with ID starting with 6a5da48fdb8a67fd12f3c00b92321d1cfef0ee10e4e18d063fe302caa03f2745 not found: ID does not exist" containerID="6a5da48fdb8a67fd12f3c00b92321d1cfef0ee10e4e18d063fe302caa03f2745" Feb 17 16:16:14 crc kubenswrapper[4672]: I0217 16:16:14.424390 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a5da48fdb8a67fd12f3c00b92321d1cfef0ee10e4e18d063fe302caa03f2745"} err="failed to get container status \"6a5da48fdb8a67fd12f3c00b92321d1cfef0ee10e4e18d063fe302caa03f2745\": rpc error: code = NotFound desc = could not find container \"6a5da48fdb8a67fd12f3c00b92321d1cfef0ee10e4e18d063fe302caa03f2745\": container with ID starting with 6a5da48fdb8a67fd12f3c00b92321d1cfef0ee10e4e18d063fe302caa03f2745 not found: ID does 
not exist" Feb 17 16:16:14 crc kubenswrapper[4672]: I0217 16:16:14.424415 4672 scope.go:117] "RemoveContainer" containerID="ae0f8d79b2ef19e7ca1e94d3c6c95a17816ddb141e5c128244ca88e32c8840ad" Feb 17 16:16:14 crc kubenswrapper[4672]: E0217 16:16:14.425709 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae0f8d79b2ef19e7ca1e94d3c6c95a17816ddb141e5c128244ca88e32c8840ad\": container with ID starting with ae0f8d79b2ef19e7ca1e94d3c6c95a17816ddb141e5c128244ca88e32c8840ad not found: ID does not exist" containerID="ae0f8d79b2ef19e7ca1e94d3c6c95a17816ddb141e5c128244ca88e32c8840ad" Feb 17 16:16:14 crc kubenswrapper[4672]: I0217 16:16:14.425759 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae0f8d79b2ef19e7ca1e94d3c6c95a17816ddb141e5c128244ca88e32c8840ad"} err="failed to get container status \"ae0f8d79b2ef19e7ca1e94d3c6c95a17816ddb141e5c128244ca88e32c8840ad\": rpc error: code = NotFound desc = could not find container \"ae0f8d79b2ef19e7ca1e94d3c6c95a17816ddb141e5c128244ca88e32c8840ad\": container with ID starting with ae0f8d79b2ef19e7ca1e94d3c6c95a17816ddb141e5c128244ca88e32c8840ad not found: ID does not exist" Feb 17 16:16:14 crc kubenswrapper[4672]: I0217 16:16:14.425786 4672 scope.go:117] "RemoveContainer" containerID="be9247f848c6f49d122a8f655ad4bce22705b7b6b35c93a62312a4a08fd67f70" Feb 17 16:16:14 crc kubenswrapper[4672]: E0217 16:16:14.426459 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be9247f848c6f49d122a8f655ad4bce22705b7b6b35c93a62312a4a08fd67f70\": container with ID starting with be9247f848c6f49d122a8f655ad4bce22705b7b6b35c93a62312a4a08fd67f70 not found: ID does not exist" containerID="be9247f848c6f49d122a8f655ad4bce22705b7b6b35c93a62312a4a08fd67f70" Feb 17 16:16:14 crc kubenswrapper[4672]: I0217 16:16:14.426503 4672 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be9247f848c6f49d122a8f655ad4bce22705b7b6b35c93a62312a4a08fd67f70"} err="failed to get container status \"be9247f848c6f49d122a8f655ad4bce22705b7b6b35c93a62312a4a08fd67f70\": rpc error: code = NotFound desc = could not find container \"be9247f848c6f49d122a8f655ad4bce22705b7b6b35c93a62312a4a08fd67f70\": container with ID starting with be9247f848c6f49d122a8f655ad4bce22705b7b6b35c93a62312a4a08fd67f70 not found: ID does not exist" Feb 17 16:16:14 crc kubenswrapper[4672]: I0217 16:16:14.427732 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00d16014-9225-4eb6-8965-6772249b069d-utilities\") pod \"redhat-marketplace-2zm9m\" (UID: \"00d16014-9225-4eb6-8965-6772249b069d\") " pod="openshift-marketplace/redhat-marketplace-2zm9m" Feb 17 16:16:14 crc kubenswrapper[4672]: I0217 16:16:14.427762 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00d16014-9225-4eb6-8965-6772249b069d-catalog-content\") pod \"redhat-marketplace-2zm9m\" (UID: \"00d16014-9225-4eb6-8965-6772249b069d\") " pod="openshift-marketplace/redhat-marketplace-2zm9m" Feb 17 16:16:14 crc kubenswrapper[4672]: I0217 16:16:14.427811 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mt94\" (UniqueName: \"kubernetes.io/projected/00d16014-9225-4eb6-8965-6772249b069d-kube-api-access-8mt94\") pod \"redhat-marketplace-2zm9m\" (UID: \"00d16014-9225-4eb6-8965-6772249b069d\") " pod="openshift-marketplace/redhat-marketplace-2zm9m" Feb 17 16:16:14 crc kubenswrapper[4672]: I0217 16:16:14.428318 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00d16014-9225-4eb6-8965-6772249b069d-utilities\") pod \"redhat-marketplace-2zm9m\" (UID: 
\"00d16014-9225-4eb6-8965-6772249b069d\") " pod="openshift-marketplace/redhat-marketplace-2zm9m" Feb 17 16:16:14 crc kubenswrapper[4672]: I0217 16:16:14.428594 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00d16014-9225-4eb6-8965-6772249b069d-catalog-content\") pod \"redhat-marketplace-2zm9m\" (UID: \"00d16014-9225-4eb6-8965-6772249b069d\") " pod="openshift-marketplace/redhat-marketplace-2zm9m" Feb 17 16:16:14 crc kubenswrapper[4672]: I0217 16:16:14.444412 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mt94\" (UniqueName: \"kubernetes.io/projected/00d16014-9225-4eb6-8965-6772249b069d-kube-api-access-8mt94\") pod \"redhat-marketplace-2zm9m\" (UID: \"00d16014-9225-4eb6-8965-6772249b069d\") " pod="openshift-marketplace/redhat-marketplace-2zm9m" Feb 17 16:16:14 crc kubenswrapper[4672]: I0217 16:16:14.541978 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2zm9m" Feb 17 16:16:14 crc kubenswrapper[4672]: I0217 16:16:14.752163 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2zm9m"] Feb 17 16:16:15 crc kubenswrapper[4672]: I0217 16:16:15.373096 4672 generic.go:334] "Generic (PLEG): container finished" podID="00d16014-9225-4eb6-8965-6772249b069d" containerID="8872a1fe5c2a3369aa8ca8b278212449ecb6d44706fcf09270b08746a8838cec" exitCode=0 Feb 17 16:16:15 crc kubenswrapper[4672]: I0217 16:16:15.373177 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2zm9m" event={"ID":"00d16014-9225-4eb6-8965-6772249b069d","Type":"ContainerDied","Data":"8872a1fe5c2a3369aa8ca8b278212449ecb6d44706fcf09270b08746a8838cec"} Feb 17 16:16:15 crc kubenswrapper[4672]: I0217 16:16:15.373620 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2zm9m" 
event={"ID":"00d16014-9225-4eb6-8965-6772249b069d","Type":"ContainerStarted","Data":"83dae64685b6e4723772b8abb47304d741d09a8799f66b48f35acab3cc69c9fe"} Feb 17 16:16:15 crc kubenswrapper[4672]: I0217 16:16:15.959824 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d30a35f-aaac-40fe-8d13-c25c0a775e8e" path="/var/lib/kubelet/pods/5d30a35f-aaac-40fe-8d13-c25c0a775e8e/volumes" Feb 17 16:16:16 crc kubenswrapper[4672]: I0217 16:16:16.383428 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2zm9m" event={"ID":"00d16014-9225-4eb6-8965-6772249b069d","Type":"ContainerStarted","Data":"575a2e8d84edb27d10164f5e6a9f1f4bd5bde3e8169f7e61941a5af3bc43535c"} Feb 17 16:16:17 crc kubenswrapper[4672]: I0217 16:16:17.393395 4672 generic.go:334] "Generic (PLEG): container finished" podID="00d16014-9225-4eb6-8965-6772249b069d" containerID="575a2e8d84edb27d10164f5e6a9f1f4bd5bde3e8169f7e61941a5af3bc43535c" exitCode=0 Feb 17 16:16:17 crc kubenswrapper[4672]: I0217 16:16:17.393483 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2zm9m" event={"ID":"00d16014-9225-4eb6-8965-6772249b069d","Type":"ContainerDied","Data":"575a2e8d84edb27d10164f5e6a9f1f4bd5bde3e8169f7e61941a5af3bc43535c"} Feb 17 16:16:18 crc kubenswrapper[4672]: I0217 16:16:18.404182 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2zm9m" event={"ID":"00d16014-9225-4eb6-8965-6772249b069d","Type":"ContainerStarted","Data":"ca10367c017442e0df1130b904f2ec59c24ca3dbfacab4fe10da8e64194fef65"} Feb 17 16:16:18 crc kubenswrapper[4672]: I0217 16:16:18.426921 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2zm9m" podStartSLOduration=1.861656468 podStartE2EDuration="4.426895646s" podCreationTimestamp="2026-02-17 16:16:14 +0000 UTC" firstStartedPulling="2026-02-17 16:16:15.37565226 +0000 UTC 
m=+784.129741032" lastFinishedPulling="2026-02-17 16:16:17.940891428 +0000 UTC m=+786.694980210" observedRunningTime="2026-02-17 16:16:18.426869575 +0000 UTC m=+787.180958327" watchObservedRunningTime="2026-02-17 16:16:18.426895646 +0000 UTC m=+787.180984388" Feb 17 16:16:24 crc kubenswrapper[4672]: I0217 16:16:24.542803 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2zm9m" Feb 17 16:16:24 crc kubenswrapper[4672]: I0217 16:16:24.546505 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2zm9m" Feb 17 16:16:24 crc kubenswrapper[4672]: I0217 16:16:24.611080 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2zm9m" Feb 17 16:16:25 crc kubenswrapper[4672]: I0217 16:16:25.507942 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2zm9m" Feb 17 16:16:25 crc kubenswrapper[4672]: I0217 16:16:25.573234 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2zm9m"] Feb 17 16:16:27 crc kubenswrapper[4672]: I0217 16:16:27.472288 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2zm9m" podUID="00d16014-9225-4eb6-8965-6772249b069d" containerName="registry-server" containerID="cri-o://ca10367c017442e0df1130b904f2ec59c24ca3dbfacab4fe10da8e64194fef65" gracePeriod=2 Feb 17 16:16:27 crc kubenswrapper[4672]: I0217 16:16:27.566326 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:16:27 crc kubenswrapper[4672]: I0217 16:16:27.566414 4672 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:16:27 crc kubenswrapper[4672]: I0217 16:16:27.566480 4672 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" Feb 17 16:16:27 crc kubenswrapper[4672]: I0217 16:16:27.567494 4672 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a296cbbb1d99319f19a06f749b112d1a27b0616f6d5daa613b86b37f30657f19"} pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:16:27 crc kubenswrapper[4672]: I0217 16:16:27.567672 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" containerID="cri-o://a296cbbb1d99319f19a06f749b112d1a27b0616f6d5daa613b86b37f30657f19" gracePeriod=600 Feb 17 16:16:27 crc kubenswrapper[4672]: I0217 16:16:27.936322 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2zm9m" Feb 17 16:16:28 crc kubenswrapper[4672]: I0217 16:16:28.049250 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00d16014-9225-4eb6-8965-6772249b069d-utilities\") pod \"00d16014-9225-4eb6-8965-6772249b069d\" (UID: \"00d16014-9225-4eb6-8965-6772249b069d\") " Feb 17 16:16:28 crc kubenswrapper[4672]: I0217 16:16:28.049303 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mt94\" (UniqueName: \"kubernetes.io/projected/00d16014-9225-4eb6-8965-6772249b069d-kube-api-access-8mt94\") pod \"00d16014-9225-4eb6-8965-6772249b069d\" (UID: \"00d16014-9225-4eb6-8965-6772249b069d\") " Feb 17 16:16:28 crc kubenswrapper[4672]: I0217 16:16:28.049380 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00d16014-9225-4eb6-8965-6772249b069d-catalog-content\") pod \"00d16014-9225-4eb6-8965-6772249b069d\" (UID: \"00d16014-9225-4eb6-8965-6772249b069d\") " Feb 17 16:16:28 crc kubenswrapper[4672]: I0217 16:16:28.050442 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00d16014-9225-4eb6-8965-6772249b069d-utilities" (OuterVolumeSpecName: "utilities") pod "00d16014-9225-4eb6-8965-6772249b069d" (UID: "00d16014-9225-4eb6-8965-6772249b069d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:16:28 crc kubenswrapper[4672]: I0217 16:16:28.058628 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00d16014-9225-4eb6-8965-6772249b069d-kube-api-access-8mt94" (OuterVolumeSpecName: "kube-api-access-8mt94") pod "00d16014-9225-4eb6-8965-6772249b069d" (UID: "00d16014-9225-4eb6-8965-6772249b069d"). InnerVolumeSpecName "kube-api-access-8mt94". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:28 crc kubenswrapper[4672]: I0217 16:16:28.080971 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00d16014-9225-4eb6-8965-6772249b069d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "00d16014-9225-4eb6-8965-6772249b069d" (UID: "00d16014-9225-4eb6-8965-6772249b069d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:16:28 crc kubenswrapper[4672]: I0217 16:16:28.150909 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00d16014-9225-4eb6-8965-6772249b069d-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:28 crc kubenswrapper[4672]: I0217 16:16:28.150958 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mt94\" (UniqueName: \"kubernetes.io/projected/00d16014-9225-4eb6-8965-6772249b069d-kube-api-access-8mt94\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:28 crc kubenswrapper[4672]: I0217 16:16:28.150979 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00d16014-9225-4eb6-8965-6772249b069d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:28 crc kubenswrapper[4672]: I0217 16:16:28.479383 4672 generic.go:334] "Generic (PLEG): container finished" podID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerID="a296cbbb1d99319f19a06f749b112d1a27b0616f6d5daa613b86b37f30657f19" exitCode=0 Feb 17 16:16:28 crc kubenswrapper[4672]: I0217 16:16:28.479462 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerDied","Data":"a296cbbb1d99319f19a06f749b112d1a27b0616f6d5daa613b86b37f30657f19"} Feb 17 16:16:28 crc kubenswrapper[4672]: I0217 16:16:28.479797 4672 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerStarted","Data":"15aa63f02ee4cd25df0940b558fcaa7bcd640deeb41ec99378884cac7403f757"} Feb 17 16:16:28 crc kubenswrapper[4672]: I0217 16:16:28.479814 4672 scope.go:117] "RemoveContainer" containerID="bab58c994d52018fa7903af25af1b3a89988c7cbe182c6c29193a105200dcb08" Feb 17 16:16:28 crc kubenswrapper[4672]: I0217 16:16:28.482945 4672 generic.go:334] "Generic (PLEG): container finished" podID="00d16014-9225-4eb6-8965-6772249b069d" containerID="ca10367c017442e0df1130b904f2ec59c24ca3dbfacab4fe10da8e64194fef65" exitCode=0 Feb 17 16:16:28 crc kubenswrapper[4672]: I0217 16:16:28.482999 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2zm9m" event={"ID":"00d16014-9225-4eb6-8965-6772249b069d","Type":"ContainerDied","Data":"ca10367c017442e0df1130b904f2ec59c24ca3dbfacab4fe10da8e64194fef65"} Feb 17 16:16:28 crc kubenswrapper[4672]: I0217 16:16:28.483040 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2zm9m" event={"ID":"00d16014-9225-4eb6-8965-6772249b069d","Type":"ContainerDied","Data":"83dae64685b6e4723772b8abb47304d741d09a8799f66b48f35acab3cc69c9fe"} Feb 17 16:16:28 crc kubenswrapper[4672]: I0217 16:16:28.483080 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2zm9m" Feb 17 16:16:28 crc kubenswrapper[4672]: I0217 16:16:28.519227 4672 scope.go:117] "RemoveContainer" containerID="ca10367c017442e0df1130b904f2ec59c24ca3dbfacab4fe10da8e64194fef65" Feb 17 16:16:28 crc kubenswrapper[4672]: I0217 16:16:28.531115 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2zm9m"] Feb 17 16:16:28 crc kubenswrapper[4672]: I0217 16:16:28.543746 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2zm9m"] Feb 17 16:16:28 crc kubenswrapper[4672]: I0217 16:16:28.545880 4672 scope.go:117] "RemoveContainer" containerID="575a2e8d84edb27d10164f5e6a9f1f4bd5bde3e8169f7e61941a5af3bc43535c" Feb 17 16:16:28 crc kubenswrapper[4672]: I0217 16:16:28.560933 4672 scope.go:117] "RemoveContainer" containerID="8872a1fe5c2a3369aa8ca8b278212449ecb6d44706fcf09270b08746a8838cec" Feb 17 16:16:28 crc kubenswrapper[4672]: I0217 16:16:28.578206 4672 scope.go:117] "RemoveContainer" containerID="ca10367c017442e0df1130b904f2ec59c24ca3dbfacab4fe10da8e64194fef65" Feb 17 16:16:28 crc kubenswrapper[4672]: E0217 16:16:28.578647 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca10367c017442e0df1130b904f2ec59c24ca3dbfacab4fe10da8e64194fef65\": container with ID starting with ca10367c017442e0df1130b904f2ec59c24ca3dbfacab4fe10da8e64194fef65 not found: ID does not exist" containerID="ca10367c017442e0df1130b904f2ec59c24ca3dbfacab4fe10da8e64194fef65" Feb 17 16:16:28 crc kubenswrapper[4672]: I0217 16:16:28.578682 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca10367c017442e0df1130b904f2ec59c24ca3dbfacab4fe10da8e64194fef65"} err="failed to get container status \"ca10367c017442e0df1130b904f2ec59c24ca3dbfacab4fe10da8e64194fef65\": rpc error: code = NotFound desc = could not find container 
\"ca10367c017442e0df1130b904f2ec59c24ca3dbfacab4fe10da8e64194fef65\": container with ID starting with ca10367c017442e0df1130b904f2ec59c24ca3dbfacab4fe10da8e64194fef65 not found: ID does not exist" Feb 17 16:16:28 crc kubenswrapper[4672]: I0217 16:16:28.578704 4672 scope.go:117] "RemoveContainer" containerID="575a2e8d84edb27d10164f5e6a9f1f4bd5bde3e8169f7e61941a5af3bc43535c" Feb 17 16:16:28 crc kubenswrapper[4672]: E0217 16:16:28.579176 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"575a2e8d84edb27d10164f5e6a9f1f4bd5bde3e8169f7e61941a5af3bc43535c\": container with ID starting with 575a2e8d84edb27d10164f5e6a9f1f4bd5bde3e8169f7e61941a5af3bc43535c not found: ID does not exist" containerID="575a2e8d84edb27d10164f5e6a9f1f4bd5bde3e8169f7e61941a5af3bc43535c" Feb 17 16:16:28 crc kubenswrapper[4672]: I0217 16:16:28.579197 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"575a2e8d84edb27d10164f5e6a9f1f4bd5bde3e8169f7e61941a5af3bc43535c"} err="failed to get container status \"575a2e8d84edb27d10164f5e6a9f1f4bd5bde3e8169f7e61941a5af3bc43535c\": rpc error: code = NotFound desc = could not find container \"575a2e8d84edb27d10164f5e6a9f1f4bd5bde3e8169f7e61941a5af3bc43535c\": container with ID starting with 575a2e8d84edb27d10164f5e6a9f1f4bd5bde3e8169f7e61941a5af3bc43535c not found: ID does not exist" Feb 17 16:16:28 crc kubenswrapper[4672]: I0217 16:16:28.579209 4672 scope.go:117] "RemoveContainer" containerID="8872a1fe5c2a3369aa8ca8b278212449ecb6d44706fcf09270b08746a8838cec" Feb 17 16:16:28 crc kubenswrapper[4672]: E0217 16:16:28.579429 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8872a1fe5c2a3369aa8ca8b278212449ecb6d44706fcf09270b08746a8838cec\": container with ID starting with 8872a1fe5c2a3369aa8ca8b278212449ecb6d44706fcf09270b08746a8838cec not found: ID does not exist" 
containerID="8872a1fe5c2a3369aa8ca8b278212449ecb6d44706fcf09270b08746a8838cec" Feb 17 16:16:28 crc kubenswrapper[4672]: I0217 16:16:28.579451 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8872a1fe5c2a3369aa8ca8b278212449ecb6d44706fcf09270b08746a8838cec"} err="failed to get container status \"8872a1fe5c2a3369aa8ca8b278212449ecb6d44706fcf09270b08746a8838cec\": rpc error: code = NotFound desc = could not find container \"8872a1fe5c2a3369aa8ca8b278212449ecb6d44706fcf09270b08746a8838cec\": container with ID starting with 8872a1fe5c2a3369aa8ca8b278212449ecb6d44706fcf09270b08746a8838cec not found: ID does not exist" Feb 17 16:16:29 crc kubenswrapper[4672]: I0217 16:16:29.957810 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00d16014-9225-4eb6-8965-6772249b069d" path="/var/lib/kubelet/pods/00d16014-9225-4eb6-8965-6772249b069d/volumes" Feb 17 16:16:30 crc kubenswrapper[4672]: I0217 16:16:30.909690 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatll9b"] Feb 17 16:16:30 crc kubenswrapper[4672]: E0217 16:16:30.910010 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00d16014-9225-4eb6-8965-6772249b069d" containerName="registry-server" Feb 17 16:16:30 crc kubenswrapper[4672]: I0217 16:16:30.910032 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="00d16014-9225-4eb6-8965-6772249b069d" containerName="registry-server" Feb 17 16:16:30 crc kubenswrapper[4672]: E0217 16:16:30.910064 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00d16014-9225-4eb6-8965-6772249b069d" containerName="extract-content" Feb 17 16:16:30 crc kubenswrapper[4672]: I0217 16:16:30.910076 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="00d16014-9225-4eb6-8965-6772249b069d" containerName="extract-content" Feb 17 16:16:30 crc kubenswrapper[4672]: E0217 16:16:30.910095 4672 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00d16014-9225-4eb6-8965-6772249b069d" containerName="extract-utilities" Feb 17 16:16:30 crc kubenswrapper[4672]: I0217 16:16:30.910108 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="00d16014-9225-4eb6-8965-6772249b069d" containerName="extract-utilities" Feb 17 16:16:30 crc kubenswrapper[4672]: I0217 16:16:30.910361 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="00d16014-9225-4eb6-8965-6772249b069d" containerName="registry-server" Feb 17 16:16:30 crc kubenswrapper[4672]: I0217 16:16:30.911707 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatll9b" Feb 17 16:16:30 crc kubenswrapper[4672]: I0217 16:16:30.914220 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 17 16:16:30 crc kubenswrapper[4672]: I0217 16:16:30.930985 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatll9b"] Feb 17 16:16:30 crc kubenswrapper[4672]: I0217 16:16:30.983640 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ztn2\" (UniqueName: \"kubernetes.io/projected/100d404a-cfba-4360-bd59-0d74afc68e40-kube-api-access-8ztn2\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatll9b\" (UID: \"100d404a-cfba-4360-bd59-0d74afc68e40\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatll9b" Feb 17 16:16:30 crc kubenswrapper[4672]: I0217 16:16:30.983732 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/100d404a-cfba-4360-bd59-0d74afc68e40-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatll9b\" (UID: 
\"100d404a-cfba-4360-bd59-0d74afc68e40\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatll9b" Feb 17 16:16:30 crc kubenswrapper[4672]: I0217 16:16:30.983758 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/100d404a-cfba-4360-bd59-0d74afc68e40-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatll9b\" (UID: \"100d404a-cfba-4360-bd59-0d74afc68e40\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatll9b" Feb 17 16:16:31 crc kubenswrapper[4672]: I0217 16:16:31.084923 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ztn2\" (UniqueName: \"kubernetes.io/projected/100d404a-cfba-4360-bd59-0d74afc68e40-kube-api-access-8ztn2\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatll9b\" (UID: \"100d404a-cfba-4360-bd59-0d74afc68e40\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatll9b" Feb 17 16:16:31 crc kubenswrapper[4672]: I0217 16:16:31.085034 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/100d404a-cfba-4360-bd59-0d74afc68e40-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatll9b\" (UID: \"100d404a-cfba-4360-bd59-0d74afc68e40\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatll9b" Feb 17 16:16:31 crc kubenswrapper[4672]: I0217 16:16:31.085074 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/100d404a-cfba-4360-bd59-0d74afc68e40-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatll9b\" (UID: \"100d404a-cfba-4360-bd59-0d74afc68e40\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatll9b" 
Feb 17 16:16:31 crc kubenswrapper[4672]: I0217 16:16:31.085524 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/100d404a-cfba-4360-bd59-0d74afc68e40-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatll9b\" (UID: \"100d404a-cfba-4360-bd59-0d74afc68e40\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatll9b" Feb 17 16:16:31 crc kubenswrapper[4672]: I0217 16:16:31.085583 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/100d404a-cfba-4360-bd59-0d74afc68e40-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatll9b\" (UID: \"100d404a-cfba-4360-bd59-0d74afc68e40\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatll9b" Feb 17 16:16:31 crc kubenswrapper[4672]: I0217 16:16:31.111291 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ztn2\" (UniqueName: \"kubernetes.io/projected/100d404a-cfba-4360-bd59-0d74afc68e40-kube-api-access-8ztn2\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatll9b\" (UID: \"100d404a-cfba-4360-bd59-0d74afc68e40\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatll9b" Feb 17 16:16:31 crc kubenswrapper[4672]: I0217 16:16:31.229201 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatll9b" Feb 17 16:16:31 crc kubenswrapper[4672]: I0217 16:16:31.710370 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatll9b"] Feb 17 16:16:32 crc kubenswrapper[4672]: I0217 16:16:32.519087 4672 generic.go:334] "Generic (PLEG): container finished" podID="100d404a-cfba-4360-bd59-0d74afc68e40" containerID="97236b6bc8499433da73d63cbd9d0b25386980cca89823d44268e088a8e8b311" exitCode=0 Feb 17 16:16:32 crc kubenswrapper[4672]: I0217 16:16:32.519342 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatll9b" event={"ID":"100d404a-cfba-4360-bd59-0d74afc68e40","Type":"ContainerDied","Data":"97236b6bc8499433da73d63cbd9d0b25386980cca89823d44268e088a8e8b311"} Feb 17 16:16:32 crc kubenswrapper[4672]: I0217 16:16:32.519618 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatll9b" event={"ID":"100d404a-cfba-4360-bd59-0d74afc68e40","Type":"ContainerStarted","Data":"e8382ffe6ae9d81af224cbf9f0f903abbd683a0765f9fe8595fecc0b9b3a545b"} Feb 17 16:16:34 crc kubenswrapper[4672]: I0217 16:16:34.539798 4672 generic.go:334] "Generic (PLEG): container finished" podID="100d404a-cfba-4360-bd59-0d74afc68e40" containerID="20c88a7756d30f2cd696b1505a21a767bdb479dbf204f4995c9eef00e3f8b5e1" exitCode=0 Feb 17 16:16:34 crc kubenswrapper[4672]: I0217 16:16:34.539957 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatll9b" event={"ID":"100d404a-cfba-4360-bd59-0d74afc68e40","Type":"ContainerDied","Data":"20c88a7756d30f2cd696b1505a21a767bdb479dbf204f4995c9eef00e3f8b5e1"} Feb 17 16:16:35 crc kubenswrapper[4672]: I0217 16:16:35.548970 4672 
generic.go:334] "Generic (PLEG): container finished" podID="100d404a-cfba-4360-bd59-0d74afc68e40" containerID="c702d07dc1cd2e3b842b9b60a7e84d6b07f05dd111b42fe02334f6534fac3842" exitCode=0 Feb 17 16:16:35 crc kubenswrapper[4672]: I0217 16:16:35.549016 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatll9b" event={"ID":"100d404a-cfba-4360-bd59-0d74afc68e40","Type":"ContainerDied","Data":"c702d07dc1cd2e3b842b9b60a7e84d6b07f05dd111b42fe02334f6534fac3842"} Feb 17 16:16:36 crc kubenswrapper[4672]: I0217 16:16:36.826775 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatll9b" Feb 17 16:16:36 crc kubenswrapper[4672]: I0217 16:16:36.966608 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/100d404a-cfba-4360-bd59-0d74afc68e40-bundle\") pod \"100d404a-cfba-4360-bd59-0d74afc68e40\" (UID: \"100d404a-cfba-4360-bd59-0d74afc68e40\") " Feb 17 16:16:36 crc kubenswrapper[4672]: I0217 16:16:36.966678 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ztn2\" (UniqueName: \"kubernetes.io/projected/100d404a-cfba-4360-bd59-0d74afc68e40-kube-api-access-8ztn2\") pod \"100d404a-cfba-4360-bd59-0d74afc68e40\" (UID: \"100d404a-cfba-4360-bd59-0d74afc68e40\") " Feb 17 16:16:36 crc kubenswrapper[4672]: I0217 16:16:36.966758 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/100d404a-cfba-4360-bd59-0d74afc68e40-util\") pod \"100d404a-cfba-4360-bd59-0d74afc68e40\" (UID: \"100d404a-cfba-4360-bd59-0d74afc68e40\") " Feb 17 16:16:36 crc kubenswrapper[4672]: I0217 16:16:36.967799 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/100d404a-cfba-4360-bd59-0d74afc68e40-bundle" (OuterVolumeSpecName: "bundle") pod "100d404a-cfba-4360-bd59-0d74afc68e40" (UID: "100d404a-cfba-4360-bd59-0d74afc68e40"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:16:36 crc kubenswrapper[4672]: I0217 16:16:36.972121 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/100d404a-cfba-4360-bd59-0d74afc68e40-kube-api-access-8ztn2" (OuterVolumeSpecName: "kube-api-access-8ztn2") pod "100d404a-cfba-4360-bd59-0d74afc68e40" (UID: "100d404a-cfba-4360-bd59-0d74afc68e40"). InnerVolumeSpecName "kube-api-access-8ztn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:36 crc kubenswrapper[4672]: I0217 16:16:36.995437 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/100d404a-cfba-4360-bd59-0d74afc68e40-util" (OuterVolumeSpecName: "util") pod "100d404a-cfba-4360-bd59-0d74afc68e40" (UID: "100d404a-cfba-4360-bd59-0d74afc68e40"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:16:37 crc kubenswrapper[4672]: I0217 16:16:37.067945 4672 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/100d404a-cfba-4360-bd59-0d74afc68e40-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:37 crc kubenswrapper[4672]: I0217 16:16:37.067996 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ztn2\" (UniqueName: \"kubernetes.io/projected/100d404a-cfba-4360-bd59-0d74afc68e40-kube-api-access-8ztn2\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:37 crc kubenswrapper[4672]: I0217 16:16:37.068019 4672 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/100d404a-cfba-4360-bd59-0d74afc68e40-util\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:37 crc kubenswrapper[4672]: I0217 16:16:37.562557 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatll9b" event={"ID":"100d404a-cfba-4360-bd59-0d74afc68e40","Type":"ContainerDied","Data":"e8382ffe6ae9d81af224cbf9f0f903abbd683a0765f9fe8595fecc0b9b3a545b"} Feb 17 16:16:37 crc kubenswrapper[4672]: I0217 16:16:37.562603 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8382ffe6ae9d81af224cbf9f0f903abbd683a0765f9fe8595fecc0b9b3a545b" Feb 17 16:16:37 crc kubenswrapper[4672]: I0217 16:16:37.562644 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatll9b" Feb 17 16:16:40 crc kubenswrapper[4672]: I0217 16:16:40.666233 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-cpp57"] Feb 17 16:16:40 crc kubenswrapper[4672]: E0217 16:16:40.666790 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="100d404a-cfba-4360-bd59-0d74afc68e40" containerName="pull" Feb 17 16:16:40 crc kubenswrapper[4672]: I0217 16:16:40.666804 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="100d404a-cfba-4360-bd59-0d74afc68e40" containerName="pull" Feb 17 16:16:40 crc kubenswrapper[4672]: E0217 16:16:40.666816 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="100d404a-cfba-4360-bd59-0d74afc68e40" containerName="util" Feb 17 16:16:40 crc kubenswrapper[4672]: I0217 16:16:40.666824 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="100d404a-cfba-4360-bd59-0d74afc68e40" containerName="util" Feb 17 16:16:40 crc kubenswrapper[4672]: E0217 16:16:40.666841 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="100d404a-cfba-4360-bd59-0d74afc68e40" containerName="extract" Feb 17 16:16:40 crc kubenswrapper[4672]: I0217 16:16:40.666848 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="100d404a-cfba-4360-bd59-0d74afc68e40" containerName="extract" Feb 17 16:16:40 crc kubenswrapper[4672]: I0217 16:16:40.666987 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="100d404a-cfba-4360-bd59-0d74afc68e40" containerName="extract" Feb 17 16:16:40 crc kubenswrapper[4672]: I0217 16:16:40.667467 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-cpp57" Feb 17 16:16:40 crc kubenswrapper[4672]: I0217 16:16:40.670571 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-dh6hx" Feb 17 16:16:40 crc kubenswrapper[4672]: I0217 16:16:40.670744 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 17 16:16:40 crc kubenswrapper[4672]: I0217 16:16:40.673970 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 17 16:16:40 crc kubenswrapper[4672]: I0217 16:16:40.690490 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-cpp57"] Feb 17 16:16:40 crc kubenswrapper[4672]: I0217 16:16:40.717004 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hcjb\" (UniqueName: \"kubernetes.io/projected/38e0f7d0-a9d3-42f8-b1d9-fd4ef6a8e413-kube-api-access-2hcjb\") pod \"nmstate-operator-694c9596b7-cpp57\" (UID: \"38e0f7d0-a9d3-42f8-b1d9-fd4ef6a8e413\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-cpp57" Feb 17 16:16:40 crc kubenswrapper[4672]: I0217 16:16:40.818463 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hcjb\" (UniqueName: \"kubernetes.io/projected/38e0f7d0-a9d3-42f8-b1d9-fd4ef6a8e413-kube-api-access-2hcjb\") pod \"nmstate-operator-694c9596b7-cpp57\" (UID: \"38e0f7d0-a9d3-42f8-b1d9-fd4ef6a8e413\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-cpp57" Feb 17 16:16:40 crc kubenswrapper[4672]: I0217 16:16:40.842707 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hcjb\" (UniqueName: \"kubernetes.io/projected/38e0f7d0-a9d3-42f8-b1d9-fd4ef6a8e413-kube-api-access-2hcjb\") pod \"nmstate-operator-694c9596b7-cpp57\" (UID: 
\"38e0f7d0-a9d3-42f8-b1d9-fd4ef6a8e413\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-cpp57" Feb 17 16:16:40 crc kubenswrapper[4672]: I0217 16:16:40.989558 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-cpp57" Feb 17 16:16:41 crc kubenswrapper[4672]: I0217 16:16:41.244169 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-cpp57"] Feb 17 16:16:41 crc kubenswrapper[4672]: I0217 16:16:41.594063 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-cpp57" event={"ID":"38e0f7d0-a9d3-42f8-b1d9-fd4ef6a8e413","Type":"ContainerStarted","Data":"c716250a5ac7d1e138ca95c136182c8f2c44cf4b1a6239c7bdc863ec153ec6c2"} Feb 17 16:16:43 crc kubenswrapper[4672]: I0217 16:16:43.609374 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-cpp57" event={"ID":"38e0f7d0-a9d3-42f8-b1d9-fd4ef6a8e413","Type":"ContainerStarted","Data":"38eca9af856b38d9456aabf201ca25326326238ed70dc8966b1c2ecbf85851b9"} Feb 17 16:16:43 crc kubenswrapper[4672]: I0217 16:16:43.641996 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-cpp57" podStartSLOduration=1.575233317 podStartE2EDuration="3.641968855s" podCreationTimestamp="2026-02-17 16:16:40 +0000 UTC" firstStartedPulling="2026-02-17 16:16:41.257977567 +0000 UTC m=+810.012066299" lastFinishedPulling="2026-02-17 16:16:43.324713105 +0000 UTC m=+812.078801837" observedRunningTime="2026-02-17 16:16:43.634409814 +0000 UTC m=+812.388498586" watchObservedRunningTime="2026-02-17 16:16:43.641968855 +0000 UTC m=+812.396057627" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.537159 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-49cvf"] Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 
16:16:44.538086 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-49cvf" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.539482 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-c48ml" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.548552 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-49cvf"] Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.558636 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-q4kl8"] Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.559302 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-q4kl8" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.560888 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.565835 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-db8bf"] Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.566819 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-db8bf" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.570177 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-q4kl8"] Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.654378 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hfn6n"] Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.655026 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hfn6n" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.657261 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.657543 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.657549 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-744f8" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.667139 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/76662e89-70bf-4e3e-8fd4-df5f7af9c24f-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-q4kl8\" (UID: \"76662e89-70bf-4e3e-8fd4-df5f7af9c24f\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-q4kl8" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.667186 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54zg2\" (UniqueName: \"kubernetes.io/projected/19ebc984-d273-4d9e-9801-5e6b8d2c99b5-kube-api-access-54zg2\") pod \"nmstate-handler-db8bf\" (UID: \"19ebc984-d273-4d9e-9801-5e6b8d2c99b5\") " pod="openshift-nmstate/nmstate-handler-db8bf" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.667210 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/19ebc984-d273-4d9e-9801-5e6b8d2c99b5-ovs-socket\") pod \"nmstate-handler-db8bf\" (UID: \"19ebc984-d273-4d9e-9801-5e6b8d2c99b5\") " pod="openshift-nmstate/nmstate-handler-db8bf" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.667231 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/19ebc984-d273-4d9e-9801-5e6b8d2c99b5-dbus-socket\") pod \"nmstate-handler-db8bf\" (UID: \"19ebc984-d273-4d9e-9801-5e6b8d2c99b5\") " pod="openshift-nmstate/nmstate-handler-db8bf" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.667256 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csgf5\" (UniqueName: \"kubernetes.io/projected/150f899e-0d70-4d0b-8021-82aedb51ea0c-kube-api-access-csgf5\") pod \"nmstate-metrics-58c85c668d-49cvf\" (UID: \"150f899e-0d70-4d0b-8021-82aedb51ea0c\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-49cvf" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.667280 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/19ebc984-d273-4d9e-9801-5e6b8d2c99b5-nmstate-lock\") pod \"nmstate-handler-db8bf\" (UID: \"19ebc984-d273-4d9e-9801-5e6b8d2c99b5\") " pod="openshift-nmstate/nmstate-handler-db8bf" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.667314 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8k6v\" (UniqueName: \"kubernetes.io/projected/76662e89-70bf-4e3e-8fd4-df5f7af9c24f-kube-api-access-f8k6v\") pod \"nmstate-webhook-866bcb46dc-q4kl8\" (UID: \"76662e89-70bf-4e3e-8fd4-df5f7af9c24f\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-q4kl8" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.698682 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hfn6n"] Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.773125 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/76662e89-70bf-4e3e-8fd4-df5f7af9c24f-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-q4kl8\" (UID: 
\"76662e89-70bf-4e3e-8fd4-df5f7af9c24f\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-q4kl8" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.773209 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54zg2\" (UniqueName: \"kubernetes.io/projected/19ebc984-d273-4d9e-9801-5e6b8d2c99b5-kube-api-access-54zg2\") pod \"nmstate-handler-db8bf\" (UID: \"19ebc984-d273-4d9e-9801-5e6b8d2c99b5\") " pod="openshift-nmstate/nmstate-handler-db8bf" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.773250 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/19ebc984-d273-4d9e-9801-5e6b8d2c99b5-ovs-socket\") pod \"nmstate-handler-db8bf\" (UID: \"19ebc984-d273-4d9e-9801-5e6b8d2c99b5\") " pod="openshift-nmstate/nmstate-handler-db8bf" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.773294 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/19ebc984-d273-4d9e-9801-5e6b8d2c99b5-dbus-socket\") pod \"nmstate-handler-db8bf\" (UID: \"19ebc984-d273-4d9e-9801-5e6b8d2c99b5\") " pod="openshift-nmstate/nmstate-handler-db8bf" Feb 17 16:16:44 crc kubenswrapper[4672]: E0217 16:16:44.773319 4672 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.773343 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csgf5\" (UniqueName: \"kubernetes.io/projected/150f899e-0d70-4d0b-8021-82aedb51ea0c-kube-api-access-csgf5\") pod \"nmstate-metrics-58c85c668d-49cvf\" (UID: \"150f899e-0d70-4d0b-8021-82aedb51ea0c\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-49cvf" Feb 17 16:16:44 crc kubenswrapper[4672]: E0217 16:16:44.773393 4672 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/76662e89-70bf-4e3e-8fd4-df5f7af9c24f-tls-key-pair podName:76662e89-70bf-4e3e-8fd4-df5f7af9c24f nodeName:}" failed. No retries permitted until 2026-02-17 16:16:45.273373911 +0000 UTC m=+814.027462643 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/76662e89-70bf-4e3e-8fd4-df5f7af9c24f-tls-key-pair") pod "nmstate-webhook-866bcb46dc-q4kl8" (UID: "76662e89-70bf-4e3e-8fd4-df5f7af9c24f") : secret "openshift-nmstate-webhook" not found Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.773421 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/19ebc984-d273-4d9e-9801-5e6b8d2c99b5-nmstate-lock\") pod \"nmstate-handler-db8bf\" (UID: \"19ebc984-d273-4d9e-9801-5e6b8d2c99b5\") " pod="openshift-nmstate/nmstate-handler-db8bf" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.773436 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/19ebc984-d273-4d9e-9801-5e6b8d2c99b5-ovs-socket\") pod \"nmstate-handler-db8bf\" (UID: \"19ebc984-d273-4d9e-9801-5e6b8d2c99b5\") " pod="openshift-nmstate/nmstate-handler-db8bf" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.773486 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/19ebc984-d273-4d9e-9801-5e6b8d2c99b5-nmstate-lock\") pod \"nmstate-handler-db8bf\" (UID: \"19ebc984-d273-4d9e-9801-5e6b8d2c99b5\") " pod="openshift-nmstate/nmstate-handler-db8bf" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.773455 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/24078e98-6c8d-4bb5-a40f-2042ad57c490-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-hfn6n\" (UID: 
\"24078e98-6c8d-4bb5-a40f-2042ad57c490\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hfn6n" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.773577 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2mxx\" (UniqueName: \"kubernetes.io/projected/24078e98-6c8d-4bb5-a40f-2042ad57c490-kube-api-access-v2mxx\") pod \"nmstate-console-plugin-5c78fc5d65-hfn6n\" (UID: \"24078e98-6c8d-4bb5-a40f-2042ad57c490\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hfn6n" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.773608 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8k6v\" (UniqueName: \"kubernetes.io/projected/76662e89-70bf-4e3e-8fd4-df5f7af9c24f-kube-api-access-f8k6v\") pod \"nmstate-webhook-866bcb46dc-q4kl8\" (UID: \"76662e89-70bf-4e3e-8fd4-df5f7af9c24f\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-q4kl8" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.773619 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/19ebc984-d273-4d9e-9801-5e6b8d2c99b5-dbus-socket\") pod \"nmstate-handler-db8bf\" (UID: \"19ebc984-d273-4d9e-9801-5e6b8d2c99b5\") " pod="openshift-nmstate/nmstate-handler-db8bf" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.773718 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/24078e98-6c8d-4bb5-a40f-2042ad57c490-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-hfn6n\" (UID: \"24078e98-6c8d-4bb5-a40f-2042ad57c490\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hfn6n" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.791990 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54zg2\" (UniqueName: 
\"kubernetes.io/projected/19ebc984-d273-4d9e-9801-5e6b8d2c99b5-kube-api-access-54zg2\") pod \"nmstate-handler-db8bf\" (UID: \"19ebc984-d273-4d9e-9801-5e6b8d2c99b5\") " pod="openshift-nmstate/nmstate-handler-db8bf" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.792074 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csgf5\" (UniqueName: \"kubernetes.io/projected/150f899e-0d70-4d0b-8021-82aedb51ea0c-kube-api-access-csgf5\") pod \"nmstate-metrics-58c85c668d-49cvf\" (UID: \"150f899e-0d70-4d0b-8021-82aedb51ea0c\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-49cvf" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.815954 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8k6v\" (UniqueName: \"kubernetes.io/projected/76662e89-70bf-4e3e-8fd4-df5f7af9c24f-kube-api-access-f8k6v\") pod \"nmstate-webhook-866bcb46dc-q4kl8\" (UID: \"76662e89-70bf-4e3e-8fd4-df5f7af9c24f\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-q4kl8" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.853360 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-49cvf" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.875119 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/24078e98-6c8d-4bb5-a40f-2042ad57c490-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-hfn6n\" (UID: \"24078e98-6c8d-4bb5-a40f-2042ad57c490\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hfn6n" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.875177 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2mxx\" (UniqueName: \"kubernetes.io/projected/24078e98-6c8d-4bb5-a40f-2042ad57c490-kube-api-access-v2mxx\") pod \"nmstate-console-plugin-5c78fc5d65-hfn6n\" (UID: \"24078e98-6c8d-4bb5-a40f-2042ad57c490\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hfn6n" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.875229 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/24078e98-6c8d-4bb5-a40f-2042ad57c490-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-hfn6n\" (UID: \"24078e98-6c8d-4bb5-a40f-2042ad57c490\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hfn6n" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.876310 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/24078e98-6c8d-4bb5-a40f-2042ad57c490-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-hfn6n\" (UID: \"24078e98-6c8d-4bb5-a40f-2042ad57c490\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hfn6n" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.885053 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/24078e98-6c8d-4bb5-a40f-2042ad57c490-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-hfn6n\" (UID: \"24078e98-6c8d-4bb5-a40f-2042ad57c490\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hfn6n" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.885325 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-db8bf" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.906692 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-59898c54fd-qhrd2"] Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.907411 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-59898c54fd-qhrd2" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.911198 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2mxx\" (UniqueName: \"kubernetes.io/projected/24078e98-6c8d-4bb5-a40f-2042ad57c490-kube-api-access-v2mxx\") pod \"nmstate-console-plugin-5c78fc5d65-hfn6n\" (UID: \"24078e98-6c8d-4bb5-a40f-2042ad57c490\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hfn6n" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.928386 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-59898c54fd-qhrd2"] Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.976823 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hfn6n" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.977049 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d283af99-afc0-41a9-85e5-0d7172cd255c-service-ca\") pod \"console-59898c54fd-qhrd2\" (UID: \"d283af99-afc0-41a9-85e5-0d7172cd255c\") " pod="openshift-console/console-59898c54fd-qhrd2" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.977133 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d283af99-afc0-41a9-85e5-0d7172cd255c-console-config\") pod \"console-59898c54fd-qhrd2\" (UID: \"d283af99-afc0-41a9-85e5-0d7172cd255c\") " pod="openshift-console/console-59898c54fd-qhrd2" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.977154 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d283af99-afc0-41a9-85e5-0d7172cd255c-trusted-ca-bundle\") pod \"console-59898c54fd-qhrd2\" (UID: \"d283af99-afc0-41a9-85e5-0d7172cd255c\") " pod="openshift-console/console-59898c54fd-qhrd2" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.977181 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d283af99-afc0-41a9-85e5-0d7172cd255c-oauth-serving-cert\") pod \"console-59898c54fd-qhrd2\" (UID: \"d283af99-afc0-41a9-85e5-0d7172cd255c\") " pod="openshift-console/console-59898c54fd-qhrd2" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.977201 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d283af99-afc0-41a9-85e5-0d7172cd255c-console-serving-cert\") pod \"console-59898c54fd-qhrd2\" (UID: \"d283af99-afc0-41a9-85e5-0d7172cd255c\") " pod="openshift-console/console-59898c54fd-qhrd2" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.977234 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p66xm\" (UniqueName: \"kubernetes.io/projected/d283af99-afc0-41a9-85e5-0d7172cd255c-kube-api-access-p66xm\") pod \"console-59898c54fd-qhrd2\" (UID: \"d283af99-afc0-41a9-85e5-0d7172cd255c\") " pod="openshift-console/console-59898c54fd-qhrd2" Feb 17 16:16:44 crc kubenswrapper[4672]: I0217 16:16:44.977253 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d283af99-afc0-41a9-85e5-0d7172cd255c-console-oauth-config\") pod \"console-59898c54fd-qhrd2\" (UID: \"d283af99-afc0-41a9-85e5-0d7172cd255c\") " pod="openshift-console/console-59898c54fd-qhrd2" Feb 17 16:16:45 crc kubenswrapper[4672]: I0217 16:16:45.080211 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d283af99-afc0-41a9-85e5-0d7172cd255c-console-config\") pod \"console-59898c54fd-qhrd2\" (UID: \"d283af99-afc0-41a9-85e5-0d7172cd255c\") " pod="openshift-console/console-59898c54fd-qhrd2" Feb 17 16:16:45 crc kubenswrapper[4672]: I0217 16:16:45.080493 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d283af99-afc0-41a9-85e5-0d7172cd255c-trusted-ca-bundle\") pod \"console-59898c54fd-qhrd2\" (UID: \"d283af99-afc0-41a9-85e5-0d7172cd255c\") " pod="openshift-console/console-59898c54fd-qhrd2" Feb 17 16:16:45 crc kubenswrapper[4672]: I0217 16:16:45.080538 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d283af99-afc0-41a9-85e5-0d7172cd255c-oauth-serving-cert\") pod \"console-59898c54fd-qhrd2\" (UID: \"d283af99-afc0-41a9-85e5-0d7172cd255c\") " pod="openshift-console/console-59898c54fd-qhrd2" Feb 17 16:16:45 crc kubenswrapper[4672]: I0217 16:16:45.080560 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d283af99-afc0-41a9-85e5-0d7172cd255c-console-serving-cert\") pod \"console-59898c54fd-qhrd2\" (UID: \"d283af99-afc0-41a9-85e5-0d7172cd255c\") " pod="openshift-console/console-59898c54fd-qhrd2" Feb 17 16:16:45 crc kubenswrapper[4672]: I0217 16:16:45.080598 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p66xm\" (UniqueName: \"kubernetes.io/projected/d283af99-afc0-41a9-85e5-0d7172cd255c-kube-api-access-p66xm\") pod \"console-59898c54fd-qhrd2\" (UID: \"d283af99-afc0-41a9-85e5-0d7172cd255c\") " pod="openshift-console/console-59898c54fd-qhrd2" Feb 17 16:16:45 crc kubenswrapper[4672]: I0217 16:16:45.080618 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d283af99-afc0-41a9-85e5-0d7172cd255c-console-oauth-config\") pod \"console-59898c54fd-qhrd2\" (UID: \"d283af99-afc0-41a9-85e5-0d7172cd255c\") " pod="openshift-console/console-59898c54fd-qhrd2" Feb 17 16:16:45 crc kubenswrapper[4672]: I0217 16:16:45.080638 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d283af99-afc0-41a9-85e5-0d7172cd255c-service-ca\") pod \"console-59898c54fd-qhrd2\" (UID: \"d283af99-afc0-41a9-85e5-0d7172cd255c\") " pod="openshift-console/console-59898c54fd-qhrd2" Feb 17 16:16:45 crc kubenswrapper[4672]: I0217 16:16:45.081496 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" 
(UniqueName: \"kubernetes.io/configmap/d283af99-afc0-41a9-85e5-0d7172cd255c-console-config\") pod \"console-59898c54fd-qhrd2\" (UID: \"d283af99-afc0-41a9-85e5-0d7172cd255c\") " pod="openshift-console/console-59898c54fd-qhrd2" Feb 17 16:16:45 crc kubenswrapper[4672]: I0217 16:16:45.081625 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d283af99-afc0-41a9-85e5-0d7172cd255c-oauth-serving-cert\") pod \"console-59898c54fd-qhrd2\" (UID: \"d283af99-afc0-41a9-85e5-0d7172cd255c\") " pod="openshift-console/console-59898c54fd-qhrd2" Feb 17 16:16:45 crc kubenswrapper[4672]: I0217 16:16:45.081646 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d283af99-afc0-41a9-85e5-0d7172cd255c-service-ca\") pod \"console-59898c54fd-qhrd2\" (UID: \"d283af99-afc0-41a9-85e5-0d7172cd255c\") " pod="openshift-console/console-59898c54fd-qhrd2" Feb 17 16:16:45 crc kubenswrapper[4672]: I0217 16:16:45.081695 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d283af99-afc0-41a9-85e5-0d7172cd255c-trusted-ca-bundle\") pod \"console-59898c54fd-qhrd2\" (UID: \"d283af99-afc0-41a9-85e5-0d7172cd255c\") " pod="openshift-console/console-59898c54fd-qhrd2" Feb 17 16:16:45 crc kubenswrapper[4672]: I0217 16:16:45.085373 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d283af99-afc0-41a9-85e5-0d7172cd255c-console-oauth-config\") pod \"console-59898c54fd-qhrd2\" (UID: \"d283af99-afc0-41a9-85e5-0d7172cd255c\") " pod="openshift-console/console-59898c54fd-qhrd2" Feb 17 16:16:45 crc kubenswrapper[4672]: I0217 16:16:45.090010 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d283af99-afc0-41a9-85e5-0d7172cd255c-console-serving-cert\") pod \"console-59898c54fd-qhrd2\" (UID: \"d283af99-afc0-41a9-85e5-0d7172cd255c\") " pod="openshift-console/console-59898c54fd-qhrd2" Feb 17 16:16:45 crc kubenswrapper[4672]: I0217 16:16:45.096754 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p66xm\" (UniqueName: \"kubernetes.io/projected/d283af99-afc0-41a9-85e5-0d7172cd255c-kube-api-access-p66xm\") pod \"console-59898c54fd-qhrd2\" (UID: \"d283af99-afc0-41a9-85e5-0d7172cd255c\") " pod="openshift-console/console-59898c54fd-qhrd2" Feb 17 16:16:45 crc kubenswrapper[4672]: I0217 16:16:45.166757 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-49cvf"] Feb 17 16:16:45 crc kubenswrapper[4672]: W0217 16:16:45.166979 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod150f899e_0d70_4d0b_8021_82aedb51ea0c.slice/crio-e76835b28ca6758a87f496dc71c5b3f407e5e0220c0574e15fb5d6a24ba3ea48 WatchSource:0}: Error finding container e76835b28ca6758a87f496dc71c5b3f407e5e0220c0574e15fb5d6a24ba3ea48: Status 404 returned error can't find the container with id e76835b28ca6758a87f496dc71c5b3f407e5e0220c0574e15fb5d6a24ba3ea48 Feb 17 16:16:45 crc kubenswrapper[4672]: I0217 16:16:45.220291 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hfn6n"] Feb 17 16:16:45 crc kubenswrapper[4672]: I0217 16:16:45.282813 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/76662e89-70bf-4e3e-8fd4-df5f7af9c24f-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-q4kl8\" (UID: \"76662e89-70bf-4e3e-8fd4-df5f7af9c24f\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-q4kl8" Feb 17 16:16:45 crc kubenswrapper[4672]: I0217 16:16:45.286175 4672 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/76662e89-70bf-4e3e-8fd4-df5f7af9c24f-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-q4kl8\" (UID: \"76662e89-70bf-4e3e-8fd4-df5f7af9c24f\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-q4kl8" Feb 17 16:16:45 crc kubenswrapper[4672]: I0217 16:16:45.291056 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-59898c54fd-qhrd2" Feb 17 16:16:45 crc kubenswrapper[4672]: I0217 16:16:45.474861 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-q4kl8" Feb 17 16:16:45 crc kubenswrapper[4672]: I0217 16:16:45.620210 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-db8bf" event={"ID":"19ebc984-d273-4d9e-9801-5e6b8d2c99b5","Type":"ContainerStarted","Data":"0a29f844ccc38fb270517b35c19068d1c8f516bebace81a8faec63875a38f7fa"} Feb 17 16:16:45 crc kubenswrapper[4672]: I0217 16:16:45.621276 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-49cvf" event={"ID":"150f899e-0d70-4d0b-8021-82aedb51ea0c","Type":"ContainerStarted","Data":"e76835b28ca6758a87f496dc71c5b3f407e5e0220c0574e15fb5d6a24ba3ea48"} Feb 17 16:16:45 crc kubenswrapper[4672]: I0217 16:16:45.622004 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hfn6n" event={"ID":"24078e98-6c8d-4bb5-a40f-2042ad57c490","Type":"ContainerStarted","Data":"b4d8e85ce3a2f5f7c78837c22963022cb4af324b00f09cf2839cbbdc51f8153a"} Feb 17 16:16:45 crc kubenswrapper[4672]: I0217 16:16:45.680610 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-59898c54fd-qhrd2"] Feb 17 16:16:45 crc kubenswrapper[4672]: W0217 16:16:45.690025 4672 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd283af99_afc0_41a9_85e5_0d7172cd255c.slice/crio-3f61d0d2dead13993ad38f5fc1be010469c67199b58fb1417928e2df5e7536e8 WatchSource:0}: Error finding container 3f61d0d2dead13993ad38f5fc1be010469c67199b58fb1417928e2df5e7536e8: Status 404 returned error can't find the container with id 3f61d0d2dead13993ad38f5fc1be010469c67199b58fb1417928e2df5e7536e8 Feb 17 16:16:45 crc kubenswrapper[4672]: I0217 16:16:45.717669 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-q4kl8"] Feb 17 16:16:45 crc kubenswrapper[4672]: W0217 16:16:45.725981 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76662e89_70bf_4e3e_8fd4_df5f7af9c24f.slice/crio-c2d8557883534cd208f19d6f0211801497c150108fa99e57a96474d70133e757 WatchSource:0}: Error finding container c2d8557883534cd208f19d6f0211801497c150108fa99e57a96474d70133e757: Status 404 returned error can't find the container with id c2d8557883534cd208f19d6f0211801497c150108fa99e57a96474d70133e757 Feb 17 16:16:46 crc kubenswrapper[4672]: I0217 16:16:46.633572 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-59898c54fd-qhrd2" event={"ID":"d283af99-afc0-41a9-85e5-0d7172cd255c","Type":"ContainerStarted","Data":"945a1f165c4723c6aa62054eec7a8b1767daa763a8dcd4097125895e99d19054"} Feb 17 16:16:46 crc kubenswrapper[4672]: I0217 16:16:46.634113 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-59898c54fd-qhrd2" event={"ID":"d283af99-afc0-41a9-85e5-0d7172cd255c","Type":"ContainerStarted","Data":"3f61d0d2dead13993ad38f5fc1be010469c67199b58fb1417928e2df5e7536e8"} Feb 17 16:16:46 crc kubenswrapper[4672]: I0217 16:16:46.636074 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-q4kl8" 
event={"ID":"76662e89-70bf-4e3e-8fd4-df5f7af9c24f","Type":"ContainerStarted","Data":"c2d8557883534cd208f19d6f0211801497c150108fa99e57a96474d70133e757"} Feb 17 16:16:48 crc kubenswrapper[4672]: I0217 16:16:48.652819 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-49cvf" event={"ID":"150f899e-0d70-4d0b-8021-82aedb51ea0c","Type":"ContainerStarted","Data":"7ba8dba5758af0d991b3ec2ec84131d4f9422188e2cf4a4f4be44f3d8389e06d"} Feb 17 16:16:48 crc kubenswrapper[4672]: I0217 16:16:48.656023 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hfn6n" event={"ID":"24078e98-6c8d-4bb5-a40f-2042ad57c490","Type":"ContainerStarted","Data":"4611b5bdf5eee669c3013270ebd4146bd439ac85ee7a893d453abe966ad17fa8"} Feb 17 16:16:48 crc kubenswrapper[4672]: I0217 16:16:48.660309 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-q4kl8" event={"ID":"76662e89-70bf-4e3e-8fd4-df5f7af9c24f","Type":"ContainerStarted","Data":"86a945d78f9c0881abbb8f516fa252f125ef484313fe29cd98716ba141aa3567"} Feb 17 16:16:48 crc kubenswrapper[4672]: I0217 16:16:48.660580 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-q4kl8" Feb 17 16:16:48 crc kubenswrapper[4672]: I0217 16:16:48.663368 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-db8bf" Feb 17 16:16:48 crc kubenswrapper[4672]: I0217 16:16:48.672882 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-hfn6n" podStartSLOduration=1.614085204 podStartE2EDuration="4.672856s" podCreationTimestamp="2026-02-17 16:16:44 +0000 UTC" firstStartedPulling="2026-02-17 16:16:45.230415881 +0000 UTC m=+813.984504613" lastFinishedPulling="2026-02-17 16:16:48.289186677 +0000 UTC m=+817.043275409" 
observedRunningTime="2026-02-17 16:16:48.668624387 +0000 UTC m=+817.422713169" watchObservedRunningTime="2026-02-17 16:16:48.672856 +0000 UTC m=+817.426944762" Feb 17 16:16:48 crc kubenswrapper[4672]: I0217 16:16:48.673340 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-59898c54fd-qhrd2" podStartSLOduration=4.6733302519999995 podStartE2EDuration="4.673330252s" podCreationTimestamp="2026-02-17 16:16:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:16:46.660844243 +0000 UTC m=+815.414933065" watchObservedRunningTime="2026-02-17 16:16:48.673330252 +0000 UTC m=+817.427419024" Feb 17 16:16:48 crc kubenswrapper[4672]: I0217 16:16:48.690178 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-q4kl8" podStartSLOduration=2.077528393 podStartE2EDuration="4.690151079s" podCreationTimestamp="2026-02-17 16:16:44 +0000 UTC" firstStartedPulling="2026-02-17 16:16:45.728201622 +0000 UTC m=+814.482290354" lastFinishedPulling="2026-02-17 16:16:48.340824308 +0000 UTC m=+817.094913040" observedRunningTime="2026-02-17 16:16:48.689870651 +0000 UTC m=+817.443959393" watchObservedRunningTime="2026-02-17 16:16:48.690151079 +0000 UTC m=+817.444239851" Feb 17 16:16:48 crc kubenswrapper[4672]: I0217 16:16:48.715695 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-db8bf" podStartSLOduration=1.435271448 podStartE2EDuration="4.715670786s" podCreationTimestamp="2026-02-17 16:16:44 +0000 UTC" firstStartedPulling="2026-02-17 16:16:45.015620711 +0000 UTC m=+813.769709443" lastFinishedPulling="2026-02-17 16:16:48.296020049 +0000 UTC m=+817.050108781" observedRunningTime="2026-02-17 16:16:48.708986869 +0000 UTC m=+817.463075611" watchObservedRunningTime="2026-02-17 16:16:48.715670786 +0000 UTC m=+817.469759548" Feb 
17 16:16:49 crc kubenswrapper[4672]: I0217 16:16:49.672588 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-db8bf" event={"ID":"19ebc984-d273-4d9e-9801-5e6b8d2c99b5","Type":"ContainerStarted","Data":"5d39e8163a17831b9be7d264f9e76a680220bb50d020c78abd05289f27271f03"} Feb 17 16:16:51 crc kubenswrapper[4672]: I0217 16:16:51.690553 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-49cvf" event={"ID":"150f899e-0d70-4d0b-8021-82aedb51ea0c","Type":"ContainerStarted","Data":"de28b1c7d2319b1c44eaf2b695b1fac8f876b007e073a535b2a206f5d2ceffe5"} Feb 17 16:16:51 crc kubenswrapper[4672]: I0217 16:16:51.719592 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-49cvf" podStartSLOduration=2.001149106 podStartE2EDuration="7.719567726s" podCreationTimestamp="2026-02-17 16:16:44 +0000 UTC" firstStartedPulling="2026-02-17 16:16:45.174547828 +0000 UTC m=+813.928642981" lastFinishedPulling="2026-02-17 16:16:50.892972869 +0000 UTC m=+819.647061601" observedRunningTime="2026-02-17 16:16:51.712793936 +0000 UTC m=+820.466882708" watchObservedRunningTime="2026-02-17 16:16:51.719567726 +0000 UTC m=+820.473656498" Feb 17 16:16:54 crc kubenswrapper[4672]: I0217 16:16:54.924919 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-db8bf" Feb 17 16:16:55 crc kubenswrapper[4672]: I0217 16:16:55.292046 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-59898c54fd-qhrd2" Feb 17 16:16:55 crc kubenswrapper[4672]: I0217 16:16:55.292433 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-59898c54fd-qhrd2" Feb 17 16:16:55 crc kubenswrapper[4672]: I0217 16:16:55.296825 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-console/console-59898c54fd-qhrd2" Feb 17 16:16:55 crc kubenswrapper[4672]: I0217 16:16:55.734624 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-59898c54fd-qhrd2" Feb 17 16:16:55 crc kubenswrapper[4672]: I0217 16:16:55.793697 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-d9vk6"] Feb 17 16:17:03 crc kubenswrapper[4672]: I0217 16:17:03.376205 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5td8m"] Feb 17 16:17:03 crc kubenswrapper[4672]: I0217 16:17:03.378108 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5td8m" Feb 17 16:17:03 crc kubenswrapper[4672]: I0217 16:17:03.412828 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5td8m"] Feb 17 16:17:03 crc kubenswrapper[4672]: I0217 16:17:03.436327 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bpv5\" (UniqueName: \"kubernetes.io/projected/fa149417-5f84-49f3-8f33-cd141a4cfc08-kube-api-access-5bpv5\") pod \"community-operators-5td8m\" (UID: \"fa149417-5f84-49f3-8f33-cd141a4cfc08\") " pod="openshift-marketplace/community-operators-5td8m" Feb 17 16:17:03 crc kubenswrapper[4672]: I0217 16:17:03.436405 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa149417-5f84-49f3-8f33-cd141a4cfc08-utilities\") pod \"community-operators-5td8m\" (UID: \"fa149417-5f84-49f3-8f33-cd141a4cfc08\") " pod="openshift-marketplace/community-operators-5td8m" Feb 17 16:17:03 crc kubenswrapper[4672]: I0217 16:17:03.436450 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/fa149417-5f84-49f3-8f33-cd141a4cfc08-catalog-content\") pod \"community-operators-5td8m\" (UID: \"fa149417-5f84-49f3-8f33-cd141a4cfc08\") " pod="openshift-marketplace/community-operators-5td8m" Feb 17 16:17:03 crc kubenswrapper[4672]: I0217 16:17:03.537150 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa149417-5f84-49f3-8f33-cd141a4cfc08-catalog-content\") pod \"community-operators-5td8m\" (UID: \"fa149417-5f84-49f3-8f33-cd141a4cfc08\") " pod="openshift-marketplace/community-operators-5td8m" Feb 17 16:17:03 crc kubenswrapper[4672]: I0217 16:17:03.537263 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bpv5\" (UniqueName: \"kubernetes.io/projected/fa149417-5f84-49f3-8f33-cd141a4cfc08-kube-api-access-5bpv5\") pod \"community-operators-5td8m\" (UID: \"fa149417-5f84-49f3-8f33-cd141a4cfc08\") " pod="openshift-marketplace/community-operators-5td8m" Feb 17 16:17:03 crc kubenswrapper[4672]: I0217 16:17:03.537301 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa149417-5f84-49f3-8f33-cd141a4cfc08-utilities\") pod \"community-operators-5td8m\" (UID: \"fa149417-5f84-49f3-8f33-cd141a4cfc08\") " pod="openshift-marketplace/community-operators-5td8m" Feb 17 16:17:03 crc kubenswrapper[4672]: I0217 16:17:03.537932 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa149417-5f84-49f3-8f33-cd141a4cfc08-utilities\") pod \"community-operators-5td8m\" (UID: \"fa149417-5f84-49f3-8f33-cd141a4cfc08\") " pod="openshift-marketplace/community-operators-5td8m" Feb 17 16:17:03 crc kubenswrapper[4672]: I0217 16:17:03.538211 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/fa149417-5f84-49f3-8f33-cd141a4cfc08-catalog-content\") pod \"community-operators-5td8m\" (UID: \"fa149417-5f84-49f3-8f33-cd141a4cfc08\") " pod="openshift-marketplace/community-operators-5td8m" Feb 17 16:17:03 crc kubenswrapper[4672]: I0217 16:17:03.568246 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bpv5\" (UniqueName: \"kubernetes.io/projected/fa149417-5f84-49f3-8f33-cd141a4cfc08-kube-api-access-5bpv5\") pod \"community-operators-5td8m\" (UID: \"fa149417-5f84-49f3-8f33-cd141a4cfc08\") " pod="openshift-marketplace/community-operators-5td8m" Feb 17 16:17:03 crc kubenswrapper[4672]: I0217 16:17:03.702806 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5td8m" Feb 17 16:17:04 crc kubenswrapper[4672]: I0217 16:17:04.160019 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5td8m"] Feb 17 16:17:04 crc kubenswrapper[4672]: W0217 16:17:04.170738 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa149417_5f84_49f3_8f33_cd141a4cfc08.slice/crio-f650bf7ea5c7b1fa80cbfc5964d6f82038322664299601eb75666d3f9b1ca6a0 WatchSource:0}: Error finding container f650bf7ea5c7b1fa80cbfc5964d6f82038322664299601eb75666d3f9b1ca6a0: Status 404 returned error can't find the container with id f650bf7ea5c7b1fa80cbfc5964d6f82038322664299601eb75666d3f9b1ca6a0 Feb 17 16:17:04 crc kubenswrapper[4672]: I0217 16:17:04.831168 4672 generic.go:334] "Generic (PLEG): container finished" podID="fa149417-5f84-49f3-8f33-cd141a4cfc08" containerID="c298f2262ef5efbf13d4036ba437df5c291a4fea5fc10aac1573749bc0bb9c2b" exitCode=0 Feb 17 16:17:04 crc kubenswrapper[4672]: I0217 16:17:04.831262 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5td8m" 
event={"ID":"fa149417-5f84-49f3-8f33-cd141a4cfc08","Type":"ContainerDied","Data":"c298f2262ef5efbf13d4036ba437df5c291a4fea5fc10aac1573749bc0bb9c2b"} Feb 17 16:17:04 crc kubenswrapper[4672]: I0217 16:17:04.831561 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5td8m" event={"ID":"fa149417-5f84-49f3-8f33-cd141a4cfc08","Type":"ContainerStarted","Data":"f650bf7ea5c7b1fa80cbfc5964d6f82038322664299601eb75666d3f9b1ca6a0"} Feb 17 16:17:05 crc kubenswrapper[4672]: I0217 16:17:05.481487 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-q4kl8" Feb 17 16:17:05 crc kubenswrapper[4672]: I0217 16:17:05.838759 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5td8m" event={"ID":"fa149417-5f84-49f3-8f33-cd141a4cfc08","Type":"ContainerStarted","Data":"0654e3da0ead9a59f4c740998a399b2e4f774c8e1cbabae9724cef80f77e67b1"} Feb 17 16:17:06 crc kubenswrapper[4672]: I0217 16:17:06.851507 4672 generic.go:334] "Generic (PLEG): container finished" podID="fa149417-5f84-49f3-8f33-cd141a4cfc08" containerID="0654e3da0ead9a59f4c740998a399b2e4f774c8e1cbabae9724cef80f77e67b1" exitCode=0 Feb 17 16:17:06 crc kubenswrapper[4672]: I0217 16:17:06.851614 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5td8m" event={"ID":"fa149417-5f84-49f3-8f33-cd141a4cfc08","Type":"ContainerDied","Data":"0654e3da0ead9a59f4c740998a399b2e4f774c8e1cbabae9724cef80f77e67b1"} Feb 17 16:17:07 crc kubenswrapper[4672]: I0217 16:17:07.869071 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5td8m" event={"ID":"fa149417-5f84-49f3-8f33-cd141a4cfc08","Type":"ContainerStarted","Data":"f04a8704cc01b9a3f56a64734ff0235bf51a3d32e84f7ec0b0b51f3b42aa386e"} Feb 17 16:17:07 crc kubenswrapper[4672]: I0217 16:17:07.892959 4672 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/community-operators-5td8m" podStartSLOduration=2.279559771 podStartE2EDuration="4.892943728s" podCreationTimestamp="2026-02-17 16:17:03 +0000 UTC" firstStartedPulling="2026-02-17 16:17:04.833932075 +0000 UTC m=+833.588020847" lastFinishedPulling="2026-02-17 16:17:07.447316032 +0000 UTC m=+836.201404804" observedRunningTime="2026-02-17 16:17:07.889212029 +0000 UTC m=+836.643300761" watchObservedRunningTime="2026-02-17 16:17:07.892943728 +0000 UTC m=+836.647032460" Feb 17 16:17:13 crc kubenswrapper[4672]: I0217 16:17:13.709062 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5td8m" Feb 17 16:17:13 crc kubenswrapper[4672]: I0217 16:17:13.709726 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5td8m" Feb 17 16:17:13 crc kubenswrapper[4672]: I0217 16:17:13.775400 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5td8m" Feb 17 16:17:13 crc kubenswrapper[4672]: I0217 16:17:13.955538 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5td8m" Feb 17 16:17:14 crc kubenswrapper[4672]: I0217 16:17:14.008953 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5td8m"] Feb 17 16:17:15 crc kubenswrapper[4672]: I0217 16:17:15.923906 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5td8m" podUID="fa149417-5f84-49f3-8f33-cd141a4cfc08" containerName="registry-server" containerID="cri-o://f04a8704cc01b9a3f56a64734ff0235bf51a3d32e84f7ec0b0b51f3b42aa386e" gracePeriod=2 Feb 17 16:17:16 crc kubenswrapper[4672]: I0217 16:17:16.382837 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5td8m" Feb 17 16:17:16 crc kubenswrapper[4672]: I0217 16:17:16.528127 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa149417-5f84-49f3-8f33-cd141a4cfc08-utilities\") pod \"fa149417-5f84-49f3-8f33-cd141a4cfc08\" (UID: \"fa149417-5f84-49f3-8f33-cd141a4cfc08\") " Feb 17 16:17:16 crc kubenswrapper[4672]: I0217 16:17:16.528328 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bpv5\" (UniqueName: \"kubernetes.io/projected/fa149417-5f84-49f3-8f33-cd141a4cfc08-kube-api-access-5bpv5\") pod \"fa149417-5f84-49f3-8f33-cd141a4cfc08\" (UID: \"fa149417-5f84-49f3-8f33-cd141a4cfc08\") " Feb 17 16:17:16 crc kubenswrapper[4672]: I0217 16:17:16.528420 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa149417-5f84-49f3-8f33-cd141a4cfc08-catalog-content\") pod \"fa149417-5f84-49f3-8f33-cd141a4cfc08\" (UID: \"fa149417-5f84-49f3-8f33-cd141a4cfc08\") " Feb 17 16:17:16 crc kubenswrapper[4672]: I0217 16:17:16.530015 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa149417-5f84-49f3-8f33-cd141a4cfc08-utilities" (OuterVolumeSpecName: "utilities") pod "fa149417-5f84-49f3-8f33-cd141a4cfc08" (UID: "fa149417-5f84-49f3-8f33-cd141a4cfc08"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:17:16 crc kubenswrapper[4672]: I0217 16:17:16.543802 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa149417-5f84-49f3-8f33-cd141a4cfc08-kube-api-access-5bpv5" (OuterVolumeSpecName: "kube-api-access-5bpv5") pod "fa149417-5f84-49f3-8f33-cd141a4cfc08" (UID: "fa149417-5f84-49f3-8f33-cd141a4cfc08"). InnerVolumeSpecName "kube-api-access-5bpv5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:16 crc kubenswrapper[4672]: I0217 16:17:16.630686 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bpv5\" (UniqueName: \"kubernetes.io/projected/fa149417-5f84-49f3-8f33-cd141a4cfc08-kube-api-access-5bpv5\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:16 crc kubenswrapper[4672]: I0217 16:17:16.630730 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa149417-5f84-49f3-8f33-cd141a4cfc08-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:16 crc kubenswrapper[4672]: I0217 16:17:16.641884 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa149417-5f84-49f3-8f33-cd141a4cfc08-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa149417-5f84-49f3-8f33-cd141a4cfc08" (UID: "fa149417-5f84-49f3-8f33-cd141a4cfc08"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:17:16 crc kubenswrapper[4672]: I0217 16:17:16.732041 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa149417-5f84-49f3-8f33-cd141a4cfc08-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:16 crc kubenswrapper[4672]: I0217 16:17:16.935541 4672 generic.go:334] "Generic (PLEG): container finished" podID="fa149417-5f84-49f3-8f33-cd141a4cfc08" containerID="f04a8704cc01b9a3f56a64734ff0235bf51a3d32e84f7ec0b0b51f3b42aa386e" exitCode=0 Feb 17 16:17:16 crc kubenswrapper[4672]: I0217 16:17:16.935666 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5td8m" Feb 17 16:17:16 crc kubenswrapper[4672]: I0217 16:17:16.935695 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5td8m" event={"ID":"fa149417-5f84-49f3-8f33-cd141a4cfc08","Type":"ContainerDied","Data":"f04a8704cc01b9a3f56a64734ff0235bf51a3d32e84f7ec0b0b51f3b42aa386e"} Feb 17 16:17:16 crc kubenswrapper[4672]: I0217 16:17:16.937115 4672 scope.go:117] "RemoveContainer" containerID="f04a8704cc01b9a3f56a64734ff0235bf51a3d32e84f7ec0b0b51f3b42aa386e" Feb 17 16:17:16 crc kubenswrapper[4672]: I0217 16:17:16.937042 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5td8m" event={"ID":"fa149417-5f84-49f3-8f33-cd141a4cfc08","Type":"ContainerDied","Data":"f650bf7ea5c7b1fa80cbfc5964d6f82038322664299601eb75666d3f9b1ca6a0"} Feb 17 16:17:16 crc kubenswrapper[4672]: I0217 16:17:16.963288 4672 scope.go:117] "RemoveContainer" containerID="0654e3da0ead9a59f4c740998a399b2e4f774c8e1cbabae9724cef80f77e67b1" Feb 17 16:17:16 crc kubenswrapper[4672]: I0217 16:17:16.985632 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5td8m"] Feb 17 16:17:16 crc kubenswrapper[4672]: I0217 16:17:16.991980 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5td8m"] Feb 17 16:17:17 crc kubenswrapper[4672]: I0217 16:17:17.014804 4672 scope.go:117] "RemoveContainer" containerID="c298f2262ef5efbf13d4036ba437df5c291a4fea5fc10aac1573749bc0bb9c2b" Feb 17 16:17:17 crc kubenswrapper[4672]: I0217 16:17:17.033145 4672 scope.go:117] "RemoveContainer" containerID="f04a8704cc01b9a3f56a64734ff0235bf51a3d32e84f7ec0b0b51f3b42aa386e" Feb 17 16:17:17 crc kubenswrapper[4672]: E0217 16:17:17.033780 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f04a8704cc01b9a3f56a64734ff0235bf51a3d32e84f7ec0b0b51f3b42aa386e\": container with ID starting with f04a8704cc01b9a3f56a64734ff0235bf51a3d32e84f7ec0b0b51f3b42aa386e not found: ID does not exist" containerID="f04a8704cc01b9a3f56a64734ff0235bf51a3d32e84f7ec0b0b51f3b42aa386e" Feb 17 16:17:17 crc kubenswrapper[4672]: I0217 16:17:17.033934 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f04a8704cc01b9a3f56a64734ff0235bf51a3d32e84f7ec0b0b51f3b42aa386e"} err="failed to get container status \"f04a8704cc01b9a3f56a64734ff0235bf51a3d32e84f7ec0b0b51f3b42aa386e\": rpc error: code = NotFound desc = could not find container \"f04a8704cc01b9a3f56a64734ff0235bf51a3d32e84f7ec0b0b51f3b42aa386e\": container with ID starting with f04a8704cc01b9a3f56a64734ff0235bf51a3d32e84f7ec0b0b51f3b42aa386e not found: ID does not exist" Feb 17 16:17:17 crc kubenswrapper[4672]: I0217 16:17:17.034101 4672 scope.go:117] "RemoveContainer" containerID="0654e3da0ead9a59f4c740998a399b2e4f774c8e1cbabae9724cef80f77e67b1" Feb 17 16:17:17 crc kubenswrapper[4672]: E0217 16:17:17.034469 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0654e3da0ead9a59f4c740998a399b2e4f774c8e1cbabae9724cef80f77e67b1\": container with ID starting with 0654e3da0ead9a59f4c740998a399b2e4f774c8e1cbabae9724cef80f77e67b1 not found: ID does not exist" containerID="0654e3da0ead9a59f4c740998a399b2e4f774c8e1cbabae9724cef80f77e67b1" Feb 17 16:17:17 crc kubenswrapper[4672]: I0217 16:17:17.034634 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0654e3da0ead9a59f4c740998a399b2e4f774c8e1cbabae9724cef80f77e67b1"} err="failed to get container status \"0654e3da0ead9a59f4c740998a399b2e4f774c8e1cbabae9724cef80f77e67b1\": rpc error: code = NotFound desc = could not find container \"0654e3da0ead9a59f4c740998a399b2e4f774c8e1cbabae9724cef80f77e67b1\": container with ID 
starting with 0654e3da0ead9a59f4c740998a399b2e4f774c8e1cbabae9724cef80f77e67b1 not found: ID does not exist" Feb 17 16:17:17 crc kubenswrapper[4672]: I0217 16:17:17.034752 4672 scope.go:117] "RemoveContainer" containerID="c298f2262ef5efbf13d4036ba437df5c291a4fea5fc10aac1573749bc0bb9c2b" Feb 17 16:17:17 crc kubenswrapper[4672]: E0217 16:17:17.035216 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c298f2262ef5efbf13d4036ba437df5c291a4fea5fc10aac1573749bc0bb9c2b\": container with ID starting with c298f2262ef5efbf13d4036ba437df5c291a4fea5fc10aac1573749bc0bb9c2b not found: ID does not exist" containerID="c298f2262ef5efbf13d4036ba437df5c291a4fea5fc10aac1573749bc0bb9c2b" Feb 17 16:17:17 crc kubenswrapper[4672]: I0217 16:17:17.035261 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c298f2262ef5efbf13d4036ba437df5c291a4fea5fc10aac1573749bc0bb9c2b"} err="failed to get container status \"c298f2262ef5efbf13d4036ba437df5c291a4fea5fc10aac1573749bc0bb9c2b\": rpc error: code = NotFound desc = could not find container \"c298f2262ef5efbf13d4036ba437df5c291a4fea5fc10aac1573749bc0bb9c2b\": container with ID starting with c298f2262ef5efbf13d4036ba437df5c291a4fea5fc10aac1573749bc0bb9c2b not found: ID does not exist" Feb 17 16:17:17 crc kubenswrapper[4672]: I0217 16:17:17.960361 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa149417-5f84-49f3-8f33-cd141a4cfc08" path="/var/lib/kubelet/pods/fa149417-5f84-49f3-8f33-cd141a4cfc08/volumes" Feb 17 16:17:20 crc kubenswrapper[4672]: I0217 16:17:20.853970 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-d9vk6" podUID="750ef8f5-44ad-4016-8894-0b2a05430464" containerName="console" containerID="cri-o://7021f5fb4f39da5fa603e0bddd03a5b74b61918c0c7d030e214c8f8e0caa96ed" gracePeriod=15 Feb 17 16:17:21 crc kubenswrapper[4672]: 
I0217 16:17:21.259839 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-d9vk6_750ef8f5-44ad-4016-8894-0b2a05430464/console/0.log" Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.260174 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-d9vk6" Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.431217 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/750ef8f5-44ad-4016-8894-0b2a05430464-console-config\") pod \"750ef8f5-44ad-4016-8894-0b2a05430464\" (UID: \"750ef8f5-44ad-4016-8894-0b2a05430464\") " Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.431304 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/750ef8f5-44ad-4016-8894-0b2a05430464-service-ca\") pod \"750ef8f5-44ad-4016-8894-0b2a05430464\" (UID: \"750ef8f5-44ad-4016-8894-0b2a05430464\") " Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.431329 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/750ef8f5-44ad-4016-8894-0b2a05430464-console-oauth-config\") pod \"750ef8f5-44ad-4016-8894-0b2a05430464\" (UID: \"750ef8f5-44ad-4016-8894-0b2a05430464\") " Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.431364 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/750ef8f5-44ad-4016-8894-0b2a05430464-console-serving-cert\") pod \"750ef8f5-44ad-4016-8894-0b2a05430464\" (UID: \"750ef8f5-44ad-4016-8894-0b2a05430464\") " Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.431384 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/750ef8f5-44ad-4016-8894-0b2a05430464-oauth-serving-cert\") pod \"750ef8f5-44ad-4016-8894-0b2a05430464\" (UID: \"750ef8f5-44ad-4016-8894-0b2a05430464\") " Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.431423 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks7zk\" (UniqueName: \"kubernetes.io/projected/750ef8f5-44ad-4016-8894-0b2a05430464-kube-api-access-ks7zk\") pod \"750ef8f5-44ad-4016-8894-0b2a05430464\" (UID: \"750ef8f5-44ad-4016-8894-0b2a05430464\") " Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.431439 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/750ef8f5-44ad-4016-8894-0b2a05430464-trusted-ca-bundle\") pod \"750ef8f5-44ad-4016-8894-0b2a05430464\" (UID: \"750ef8f5-44ad-4016-8894-0b2a05430464\") " Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.432253 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/750ef8f5-44ad-4016-8894-0b2a05430464-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "750ef8f5-44ad-4016-8894-0b2a05430464" (UID: "750ef8f5-44ad-4016-8894-0b2a05430464"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.432371 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/750ef8f5-44ad-4016-8894-0b2a05430464-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "750ef8f5-44ad-4016-8894-0b2a05430464" (UID: "750ef8f5-44ad-4016-8894-0b2a05430464"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.432576 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/750ef8f5-44ad-4016-8894-0b2a05430464-service-ca" (OuterVolumeSpecName: "service-ca") pod "750ef8f5-44ad-4016-8894-0b2a05430464" (UID: "750ef8f5-44ad-4016-8894-0b2a05430464"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.432841 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/750ef8f5-44ad-4016-8894-0b2a05430464-console-config" (OuterVolumeSpecName: "console-config") pod "750ef8f5-44ad-4016-8894-0b2a05430464" (UID: "750ef8f5-44ad-4016-8894-0b2a05430464"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.437579 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/750ef8f5-44ad-4016-8894-0b2a05430464-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "750ef8f5-44ad-4016-8894-0b2a05430464" (UID: "750ef8f5-44ad-4016-8894-0b2a05430464"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.438873 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/750ef8f5-44ad-4016-8894-0b2a05430464-kube-api-access-ks7zk" (OuterVolumeSpecName: "kube-api-access-ks7zk") pod "750ef8f5-44ad-4016-8894-0b2a05430464" (UID: "750ef8f5-44ad-4016-8894-0b2a05430464"). InnerVolumeSpecName "kube-api-access-ks7zk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.444745 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/750ef8f5-44ad-4016-8894-0b2a05430464-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "750ef8f5-44ad-4016-8894-0b2a05430464" (UID: "750ef8f5-44ad-4016-8894-0b2a05430464"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.532739 4672 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/750ef8f5-44ad-4016-8894-0b2a05430464-console-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.532772 4672 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/750ef8f5-44ad-4016-8894-0b2a05430464-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.532781 4672 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/750ef8f5-44ad-4016-8894-0b2a05430464-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.532789 4672 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/750ef8f5-44ad-4016-8894-0b2a05430464-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.532800 4672 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/750ef8f5-44ad-4016-8894-0b2a05430464-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.532807 4672 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-ks7zk\" (UniqueName: \"kubernetes.io/projected/750ef8f5-44ad-4016-8894-0b2a05430464-kube-api-access-ks7zk\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.532817 4672 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/750ef8f5-44ad-4016-8894-0b2a05430464-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.718236 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jq27p"] Feb 17 16:17:21 crc kubenswrapper[4672]: E0217 16:17:21.718446 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="750ef8f5-44ad-4016-8894-0b2a05430464" containerName="console" Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.718457 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="750ef8f5-44ad-4016-8894-0b2a05430464" containerName="console" Feb 17 16:17:21 crc kubenswrapper[4672]: E0217 16:17:21.718468 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa149417-5f84-49f3-8f33-cd141a4cfc08" containerName="registry-server" Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.718474 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa149417-5f84-49f3-8f33-cd141a4cfc08" containerName="registry-server" Feb 17 16:17:21 crc kubenswrapper[4672]: E0217 16:17:21.718489 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa149417-5f84-49f3-8f33-cd141a4cfc08" containerName="extract-content" Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.718496 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa149417-5f84-49f3-8f33-cd141a4cfc08" containerName="extract-content" Feb 17 16:17:21 crc kubenswrapper[4672]: E0217 16:17:21.718560 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa149417-5f84-49f3-8f33-cd141a4cfc08" 
containerName="extract-utilities" Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.718570 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa149417-5f84-49f3-8f33-cd141a4cfc08" containerName="extract-utilities" Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.718676 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="750ef8f5-44ad-4016-8894-0b2a05430464" containerName="console" Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.718691 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa149417-5f84-49f3-8f33-cd141a4cfc08" containerName="registry-server" Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.719401 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jq27p" Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.721096 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.732410 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jq27p"] Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.735729 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts9z6\" (UniqueName: \"kubernetes.io/projected/382b3125-0f90-40a4-91f2-28ca8ac0e894-kube-api-access-ts9z6\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jq27p\" (UID: \"382b3125-0f90-40a4-91f2-28ca8ac0e894\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jq27p" Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.735842 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/382b3125-0f90-40a4-91f2-28ca8ac0e894-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jq27p\" (UID: \"382b3125-0f90-40a4-91f2-28ca8ac0e894\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jq27p" Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.735867 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/382b3125-0f90-40a4-91f2-28ca8ac0e894-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jq27p\" (UID: \"382b3125-0f90-40a4-91f2-28ca8ac0e894\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jq27p" Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.836749 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/382b3125-0f90-40a4-91f2-28ca8ac0e894-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jq27p\" (UID: \"382b3125-0f90-40a4-91f2-28ca8ac0e894\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jq27p" Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.836787 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/382b3125-0f90-40a4-91f2-28ca8ac0e894-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jq27p\" (UID: \"382b3125-0f90-40a4-91f2-28ca8ac0e894\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jq27p" Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.836822 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts9z6\" (UniqueName: \"kubernetes.io/projected/382b3125-0f90-40a4-91f2-28ca8ac0e894-kube-api-access-ts9z6\") pod 
\"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jq27p\" (UID: \"382b3125-0f90-40a4-91f2-28ca8ac0e894\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jq27p" Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.837366 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/382b3125-0f90-40a4-91f2-28ca8ac0e894-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jq27p\" (UID: \"382b3125-0f90-40a4-91f2-28ca8ac0e894\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jq27p" Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.839223 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/382b3125-0f90-40a4-91f2-28ca8ac0e894-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jq27p\" (UID: \"382b3125-0f90-40a4-91f2-28ca8ac0e894\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jq27p" Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.861787 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts9z6\" (UniqueName: \"kubernetes.io/projected/382b3125-0f90-40a4-91f2-28ca8ac0e894-kube-api-access-ts9z6\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jq27p\" (UID: \"382b3125-0f90-40a4-91f2-28ca8ac0e894\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jq27p" Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.995372 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-d9vk6_750ef8f5-44ad-4016-8894-0b2a05430464/console/0.log" Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.995453 4672 generic.go:334] "Generic (PLEG): container finished" podID="750ef8f5-44ad-4016-8894-0b2a05430464" 
containerID="7021f5fb4f39da5fa603e0bddd03a5b74b61918c0c7d030e214c8f8e0caa96ed" exitCode=2 Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.995492 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-d9vk6" event={"ID":"750ef8f5-44ad-4016-8894-0b2a05430464","Type":"ContainerDied","Data":"7021f5fb4f39da5fa603e0bddd03a5b74b61918c0c7d030e214c8f8e0caa96ed"} Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.995552 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-d9vk6" event={"ID":"750ef8f5-44ad-4016-8894-0b2a05430464","Type":"ContainerDied","Data":"9a4862b47fceaa04e03c34548b09f49397cebf31862f79f3151db667cc4a2860"} Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.995595 4672 scope.go:117] "RemoveContainer" containerID="7021f5fb4f39da5fa603e0bddd03a5b74b61918c0c7d030e214c8f8e0caa96ed" Feb 17 16:17:21 crc kubenswrapper[4672]: I0217 16:17:21.995681 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-d9vk6" Feb 17 16:17:22 crc kubenswrapper[4672]: I0217 16:17:22.026834 4672 scope.go:117] "RemoveContainer" containerID="7021f5fb4f39da5fa603e0bddd03a5b74b61918c0c7d030e214c8f8e0caa96ed" Feb 17 16:17:22 crc kubenswrapper[4672]: I0217 16:17:22.027524 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-d9vk6"] Feb 17 16:17:22 crc kubenswrapper[4672]: E0217 16:17:22.027613 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7021f5fb4f39da5fa603e0bddd03a5b74b61918c0c7d030e214c8f8e0caa96ed\": container with ID starting with 7021f5fb4f39da5fa603e0bddd03a5b74b61918c0c7d030e214c8f8e0caa96ed not found: ID does not exist" containerID="7021f5fb4f39da5fa603e0bddd03a5b74b61918c0c7d030e214c8f8e0caa96ed" Feb 17 16:17:22 crc kubenswrapper[4672]: I0217 16:17:22.027665 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7021f5fb4f39da5fa603e0bddd03a5b74b61918c0c7d030e214c8f8e0caa96ed"} err="failed to get container status \"7021f5fb4f39da5fa603e0bddd03a5b74b61918c0c7d030e214c8f8e0caa96ed\": rpc error: code = NotFound desc = could not find container \"7021f5fb4f39da5fa603e0bddd03a5b74b61918c0c7d030e214c8f8e0caa96ed\": container with ID starting with 7021f5fb4f39da5fa603e0bddd03a5b74b61918c0c7d030e214c8f8e0caa96ed not found: ID does not exist" Feb 17 16:17:22 crc kubenswrapper[4672]: I0217 16:17:22.036045 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-d9vk6"] Feb 17 16:17:22 crc kubenswrapper[4672]: I0217 16:17:22.046788 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jq27p" Feb 17 16:17:22 crc kubenswrapper[4672]: I0217 16:17:22.281478 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jq27p"] Feb 17 16:17:23 crc kubenswrapper[4672]: I0217 16:17:23.004821 4672 generic.go:334] "Generic (PLEG): container finished" podID="382b3125-0f90-40a4-91f2-28ca8ac0e894" containerID="3984d1f4fef0b65f0a9c2f11e91711f43e220003ef5f9ba13615b6b8a349c54d" exitCode=0 Feb 17 16:17:23 crc kubenswrapper[4672]: I0217 16:17:23.004907 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jq27p" event={"ID":"382b3125-0f90-40a4-91f2-28ca8ac0e894","Type":"ContainerDied","Data":"3984d1f4fef0b65f0a9c2f11e91711f43e220003ef5f9ba13615b6b8a349c54d"} Feb 17 16:17:23 crc kubenswrapper[4672]: I0217 16:17:23.005539 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jq27p" event={"ID":"382b3125-0f90-40a4-91f2-28ca8ac0e894","Type":"ContainerStarted","Data":"a7d89d3a76775c821ce26a3a43c0d3b50b9fee390e8743e2c100d227fc1467f8"} Feb 17 16:17:23 crc kubenswrapper[4672]: I0217 16:17:23.964670 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="750ef8f5-44ad-4016-8894-0b2a05430464" path="/var/lib/kubelet/pods/750ef8f5-44ad-4016-8894-0b2a05430464/volumes" Feb 17 16:17:25 crc kubenswrapper[4672]: I0217 16:17:25.024252 4672 generic.go:334] "Generic (PLEG): container finished" podID="382b3125-0f90-40a4-91f2-28ca8ac0e894" containerID="4d872f66e6527fa3568323148b36baafa49996c8692452215a07c122dfc37d79" exitCode=0 Feb 17 16:17:25 crc kubenswrapper[4672]: I0217 16:17:25.024387 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jq27p" event={"ID":"382b3125-0f90-40a4-91f2-28ca8ac0e894","Type":"ContainerDied","Data":"4d872f66e6527fa3568323148b36baafa49996c8692452215a07c122dfc37d79"}
Feb 17 16:17:26 crc kubenswrapper[4672]: I0217 16:17:26.035468 4672 generic.go:334] "Generic (PLEG): container finished" podID="382b3125-0f90-40a4-91f2-28ca8ac0e894" containerID="15140cdb6481f5194eb179d2464453d12b5772654127d7e8bc75e1217a20e7d7" exitCode=0
Feb 17 16:17:26 crc kubenswrapper[4672]: I0217 16:17:26.035554 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jq27p" event={"ID":"382b3125-0f90-40a4-91f2-28ca8ac0e894","Type":"ContainerDied","Data":"15140cdb6481f5194eb179d2464453d12b5772654127d7e8bc75e1217a20e7d7"}
Feb 17 16:17:27 crc kubenswrapper[4672]: I0217 16:17:27.357851 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jq27p"
Feb 17 16:17:27 crc kubenswrapper[4672]: I0217 16:17:27.434151 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/382b3125-0f90-40a4-91f2-28ca8ac0e894-bundle\") pod \"382b3125-0f90-40a4-91f2-28ca8ac0e894\" (UID: \"382b3125-0f90-40a4-91f2-28ca8ac0e894\") "
Feb 17 16:17:27 crc kubenswrapper[4672]: I0217 16:17:27.434283 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/382b3125-0f90-40a4-91f2-28ca8ac0e894-util\") pod \"382b3125-0f90-40a4-91f2-28ca8ac0e894\" (UID: \"382b3125-0f90-40a4-91f2-28ca8ac0e894\") "
Feb 17 16:17:27 crc kubenswrapper[4672]: I0217 16:17:27.434426 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ts9z6\" (UniqueName: \"kubernetes.io/projected/382b3125-0f90-40a4-91f2-28ca8ac0e894-kube-api-access-ts9z6\") pod \"382b3125-0f90-40a4-91f2-28ca8ac0e894\" (UID: \"382b3125-0f90-40a4-91f2-28ca8ac0e894\") "
Feb 17 16:17:27 crc kubenswrapper[4672]: I0217 16:17:27.440044 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/382b3125-0f90-40a4-91f2-28ca8ac0e894-bundle" (OuterVolumeSpecName: "bundle") pod "382b3125-0f90-40a4-91f2-28ca8ac0e894" (UID: "382b3125-0f90-40a4-91f2-28ca8ac0e894"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:17:27 crc kubenswrapper[4672]: I0217 16:17:27.441619 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/382b3125-0f90-40a4-91f2-28ca8ac0e894-kube-api-access-ts9z6" (OuterVolumeSpecName: "kube-api-access-ts9z6") pod "382b3125-0f90-40a4-91f2-28ca8ac0e894" (UID: "382b3125-0f90-40a4-91f2-28ca8ac0e894"). InnerVolumeSpecName "kube-api-access-ts9z6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:17:27 crc kubenswrapper[4672]: I0217 16:17:27.449142 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/382b3125-0f90-40a4-91f2-28ca8ac0e894-util" (OuterVolumeSpecName: "util") pod "382b3125-0f90-40a4-91f2-28ca8ac0e894" (UID: "382b3125-0f90-40a4-91f2-28ca8ac0e894"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:17:27 crc kubenswrapper[4672]: I0217 16:17:27.536246 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ts9z6\" (UniqueName: \"kubernetes.io/projected/382b3125-0f90-40a4-91f2-28ca8ac0e894-kube-api-access-ts9z6\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:27 crc kubenswrapper[4672]: I0217 16:17:27.536305 4672 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/382b3125-0f90-40a4-91f2-28ca8ac0e894-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:27 crc kubenswrapper[4672]: I0217 16:17:27.536335 4672 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/382b3125-0f90-40a4-91f2-28ca8ac0e894-util\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:28 crc kubenswrapper[4672]: I0217 16:17:28.056859 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jq27p" event={"ID":"382b3125-0f90-40a4-91f2-28ca8ac0e894","Type":"ContainerDied","Data":"a7d89d3a76775c821ce26a3a43c0d3b50b9fee390e8743e2c100d227fc1467f8"}
Feb 17 16:17:28 crc kubenswrapper[4672]: I0217 16:17:28.056910 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7d89d3a76775c821ce26a3a43c0d3b50b9fee390e8743e2c100d227fc1467f8"
Feb 17 16:17:28 crc kubenswrapper[4672]: I0217 16:17:28.056933 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jq27p"
Feb 17 16:17:35 crc kubenswrapper[4672]: I0217 16:17:35.407996 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-65789c999f-rcpdd"]
Feb 17 16:17:35 crc kubenswrapper[4672]: E0217 16:17:35.408865 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="382b3125-0f90-40a4-91f2-28ca8ac0e894" containerName="pull"
Feb 17 16:17:35 crc kubenswrapper[4672]: I0217 16:17:35.408877 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="382b3125-0f90-40a4-91f2-28ca8ac0e894" containerName="pull"
Feb 17 16:17:35 crc kubenswrapper[4672]: E0217 16:17:35.408889 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="382b3125-0f90-40a4-91f2-28ca8ac0e894" containerName="extract"
Feb 17 16:17:35 crc kubenswrapper[4672]: I0217 16:17:35.408896 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="382b3125-0f90-40a4-91f2-28ca8ac0e894" containerName="extract"
Feb 17 16:17:35 crc kubenswrapper[4672]: E0217 16:17:35.408915 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="382b3125-0f90-40a4-91f2-28ca8ac0e894" containerName="util"
Feb 17 16:17:35 crc kubenswrapper[4672]: I0217 16:17:35.408923 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="382b3125-0f90-40a4-91f2-28ca8ac0e894" containerName="util"
Feb 17 16:17:35 crc kubenswrapper[4672]: I0217 16:17:35.409051 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="382b3125-0f90-40a4-91f2-28ca8ac0e894" containerName="extract"
Feb 17 16:17:35 crc kubenswrapper[4672]: I0217 16:17:35.409602 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-65789c999f-rcpdd"
Feb 17 16:17:35 crc kubenswrapper[4672]: I0217 16:17:35.411137 4672 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Feb 17 16:17:35 crc kubenswrapper[4672]: I0217 16:17:35.412023 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Feb 17 16:17:35 crc kubenswrapper[4672]: I0217 16:17:35.412304 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Feb 17 16:17:35 crc kubenswrapper[4672]: I0217 16:17:35.412462 4672 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-z6cxp"
Feb 17 16:17:35 crc kubenswrapper[4672]: I0217 16:17:35.412477 4672 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Feb 17 16:17:35 crc kubenswrapper[4672]: I0217 16:17:35.427480 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-65789c999f-rcpdd"]
Feb 17 16:17:35 crc kubenswrapper[4672]: I0217 16:17:35.445193 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/79c462ff-317c-49da-a2db-9e6039c136a7-apiservice-cert\") pod \"metallb-operator-controller-manager-65789c999f-rcpdd\" (UID: \"79c462ff-317c-49da-a2db-9e6039c136a7\") " pod="metallb-system/metallb-operator-controller-manager-65789c999f-rcpdd"
Feb 17 16:17:35 crc kubenswrapper[4672]: I0217 16:17:35.445249 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjxjm\" (UniqueName: \"kubernetes.io/projected/79c462ff-317c-49da-a2db-9e6039c136a7-kube-api-access-pjxjm\") pod \"metallb-operator-controller-manager-65789c999f-rcpdd\" (UID: \"79c462ff-317c-49da-a2db-9e6039c136a7\") " pod="metallb-system/metallb-operator-controller-manager-65789c999f-rcpdd"
Feb 17 16:17:35 crc kubenswrapper[4672]: I0217 16:17:35.445319 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/79c462ff-317c-49da-a2db-9e6039c136a7-webhook-cert\") pod \"metallb-operator-controller-manager-65789c999f-rcpdd\" (UID: \"79c462ff-317c-49da-a2db-9e6039c136a7\") " pod="metallb-system/metallb-operator-controller-manager-65789c999f-rcpdd"
Feb 17 16:17:35 crc kubenswrapper[4672]: I0217 16:17:35.546206 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/79c462ff-317c-49da-a2db-9e6039c136a7-apiservice-cert\") pod \"metallb-operator-controller-manager-65789c999f-rcpdd\" (UID: \"79c462ff-317c-49da-a2db-9e6039c136a7\") " pod="metallb-system/metallb-operator-controller-manager-65789c999f-rcpdd"
Feb 17 16:17:35 crc kubenswrapper[4672]: I0217 16:17:35.546262 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjxjm\" (UniqueName: \"kubernetes.io/projected/79c462ff-317c-49da-a2db-9e6039c136a7-kube-api-access-pjxjm\") pod \"metallb-operator-controller-manager-65789c999f-rcpdd\" (UID: \"79c462ff-317c-49da-a2db-9e6039c136a7\") " pod="metallb-system/metallb-operator-controller-manager-65789c999f-rcpdd"
Feb 17 16:17:35 crc kubenswrapper[4672]: I0217 16:17:35.546294 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/79c462ff-317c-49da-a2db-9e6039c136a7-webhook-cert\") pod \"metallb-operator-controller-manager-65789c999f-rcpdd\" (UID: \"79c462ff-317c-49da-a2db-9e6039c136a7\") " pod="metallb-system/metallb-operator-controller-manager-65789c999f-rcpdd"
Feb 17 16:17:35 crc kubenswrapper[4672]: I0217 16:17:35.552265 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/79c462ff-317c-49da-a2db-9e6039c136a7-webhook-cert\") pod \"metallb-operator-controller-manager-65789c999f-rcpdd\" (UID: \"79c462ff-317c-49da-a2db-9e6039c136a7\") " pod="metallb-system/metallb-operator-controller-manager-65789c999f-rcpdd"
Feb 17 16:17:35 crc kubenswrapper[4672]: I0217 16:17:35.559363 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/79c462ff-317c-49da-a2db-9e6039c136a7-apiservice-cert\") pod \"metallb-operator-controller-manager-65789c999f-rcpdd\" (UID: \"79c462ff-317c-49da-a2db-9e6039c136a7\") " pod="metallb-system/metallb-operator-controller-manager-65789c999f-rcpdd"
Feb 17 16:17:35 crc kubenswrapper[4672]: I0217 16:17:35.565047 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjxjm\" (UniqueName: \"kubernetes.io/projected/79c462ff-317c-49da-a2db-9e6039c136a7-kube-api-access-pjxjm\") pod \"metallb-operator-controller-manager-65789c999f-rcpdd\" (UID: \"79c462ff-317c-49da-a2db-9e6039c136a7\") " pod="metallb-system/metallb-operator-controller-manager-65789c999f-rcpdd"
Feb 17 16:17:35 crc kubenswrapper[4672]: I0217 16:17:35.724105 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-65789c999f-rcpdd"
Feb 17 16:17:35 crc kubenswrapper[4672]: I0217 16:17:35.745455 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-57c9dfc97-nmggd"]
Feb 17 16:17:35 crc kubenswrapper[4672]: I0217 16:17:35.746258 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-57c9dfc97-nmggd"
Feb 17 16:17:35 crc kubenswrapper[4672]: I0217 16:17:35.750317 4672 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-ks5pp"
Feb 17 16:17:35 crc kubenswrapper[4672]: I0217 16:17:35.750336 4672 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Feb 17 16:17:35 crc kubenswrapper[4672]: I0217 16:17:35.751021 4672 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Feb 17 16:17:35 crc kubenswrapper[4672]: I0217 16:17:35.773227 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-57c9dfc97-nmggd"]
Feb 17 16:17:35 crc kubenswrapper[4672]: I0217 16:17:35.849074 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4cc21a41-d48f-44f6-adfc-7d88c1e6c4b3-webhook-cert\") pod \"metallb-operator-webhook-server-57c9dfc97-nmggd\" (UID: \"4cc21a41-d48f-44f6-adfc-7d88c1e6c4b3\") " pod="metallb-system/metallb-operator-webhook-server-57c9dfc97-nmggd"
Feb 17 16:17:35 crc kubenswrapper[4672]: I0217 16:17:35.849377 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8ck8\" (UniqueName: \"kubernetes.io/projected/4cc21a41-d48f-44f6-adfc-7d88c1e6c4b3-kube-api-access-w8ck8\") pod \"metallb-operator-webhook-server-57c9dfc97-nmggd\" (UID: \"4cc21a41-d48f-44f6-adfc-7d88c1e6c4b3\") " pod="metallb-system/metallb-operator-webhook-server-57c9dfc97-nmggd"
Feb 17 16:17:35 crc kubenswrapper[4672]: I0217 16:17:35.849453 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4cc21a41-d48f-44f6-adfc-7d88c1e6c4b3-apiservice-cert\") pod \"metallb-operator-webhook-server-57c9dfc97-nmggd\" (UID: \"4cc21a41-d48f-44f6-adfc-7d88c1e6c4b3\") " pod="metallb-system/metallb-operator-webhook-server-57c9dfc97-nmggd"
Feb 17 16:17:35 crc kubenswrapper[4672]: I0217 16:17:35.950327 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4cc21a41-d48f-44f6-adfc-7d88c1e6c4b3-apiservice-cert\") pod \"metallb-operator-webhook-server-57c9dfc97-nmggd\" (UID: \"4cc21a41-d48f-44f6-adfc-7d88c1e6c4b3\") " pod="metallb-system/metallb-operator-webhook-server-57c9dfc97-nmggd"
Feb 17 16:17:35 crc kubenswrapper[4672]: I0217 16:17:35.950432 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4cc21a41-d48f-44f6-adfc-7d88c1e6c4b3-webhook-cert\") pod \"metallb-operator-webhook-server-57c9dfc97-nmggd\" (UID: \"4cc21a41-d48f-44f6-adfc-7d88c1e6c4b3\") " pod="metallb-system/metallb-operator-webhook-server-57c9dfc97-nmggd"
Feb 17 16:17:35 crc kubenswrapper[4672]: I0217 16:17:35.950471 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8ck8\" (UniqueName: \"kubernetes.io/projected/4cc21a41-d48f-44f6-adfc-7d88c1e6c4b3-kube-api-access-w8ck8\") pod \"metallb-operator-webhook-server-57c9dfc97-nmggd\" (UID: \"4cc21a41-d48f-44f6-adfc-7d88c1e6c4b3\") " pod="metallb-system/metallb-operator-webhook-server-57c9dfc97-nmggd"
Feb 17 16:17:35 crc kubenswrapper[4672]: I0217 16:17:35.954130 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4cc21a41-d48f-44f6-adfc-7d88c1e6c4b3-webhook-cert\") pod \"metallb-operator-webhook-server-57c9dfc97-nmggd\" (UID: \"4cc21a41-d48f-44f6-adfc-7d88c1e6c4b3\") " pod="metallb-system/metallb-operator-webhook-server-57c9dfc97-nmggd"
Feb 17 16:17:35 crc kubenswrapper[4672]: I0217 16:17:35.956284 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4cc21a41-d48f-44f6-adfc-7d88c1e6c4b3-apiservice-cert\") pod \"metallb-operator-webhook-server-57c9dfc97-nmggd\" (UID: \"4cc21a41-d48f-44f6-adfc-7d88c1e6c4b3\") " pod="metallb-system/metallb-operator-webhook-server-57c9dfc97-nmggd"
Feb 17 16:17:35 crc kubenswrapper[4672]: I0217 16:17:35.966270 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8ck8\" (UniqueName: \"kubernetes.io/projected/4cc21a41-d48f-44f6-adfc-7d88c1e6c4b3-kube-api-access-w8ck8\") pod \"metallb-operator-webhook-server-57c9dfc97-nmggd\" (UID: \"4cc21a41-d48f-44f6-adfc-7d88c1e6c4b3\") " pod="metallb-system/metallb-operator-webhook-server-57c9dfc97-nmggd"
Feb 17 16:17:36 crc kubenswrapper[4672]: I0217 16:17:36.100028 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-57c9dfc97-nmggd"
Feb 17 16:17:36 crc kubenswrapper[4672]: I0217 16:17:36.191415 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-65789c999f-rcpdd"]
Feb 17 16:17:36 crc kubenswrapper[4672]: I0217 16:17:36.470830 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-57c9dfc97-nmggd"]
Feb 17 16:17:36 crc kubenswrapper[4672]: W0217 16:17:36.474169 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4cc21a41_d48f_44f6_adfc_7d88c1e6c4b3.slice/crio-f1668fb9c5aae582784dfa69f7a22488d5fdf1ce087913575a8f15e5cfda6388 WatchSource:0}: Error finding container f1668fb9c5aae582784dfa69f7a22488d5fdf1ce087913575a8f15e5cfda6388: Status 404 returned error can't find the container with id f1668fb9c5aae582784dfa69f7a22488d5fdf1ce087913575a8f15e5cfda6388
Feb 17 16:17:37 crc kubenswrapper[4672]: I0217 16:17:37.114801 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-65789c999f-rcpdd" event={"ID":"79c462ff-317c-49da-a2db-9e6039c136a7","Type":"ContainerStarted","Data":"3b6463c9ae3ee2b61be66163b746bf7e47927295e3632294991e230b02e5555c"}
Feb 17 16:17:37 crc kubenswrapper[4672]: I0217 16:17:37.116481 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-57c9dfc97-nmggd" event={"ID":"4cc21a41-d48f-44f6-adfc-7d88c1e6c4b3","Type":"ContainerStarted","Data":"f1668fb9c5aae582784dfa69f7a22488d5fdf1ce087913575a8f15e5cfda6388"}
Feb 17 16:17:40 crc kubenswrapper[4672]: I0217 16:17:40.154819 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-65789c999f-rcpdd" event={"ID":"79c462ff-317c-49da-a2db-9e6039c136a7","Type":"ContainerStarted","Data":"26d12a7c57c5b381966df0f2f5199f43b74bb9ab28f7f2716817975d1f3859c3"}
Feb 17 16:17:40 crc kubenswrapper[4672]: I0217 16:17:40.155684 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-65789c999f-rcpdd"
Feb 17 16:17:40 crc kubenswrapper[4672]: I0217 16:17:40.185724 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-65789c999f-rcpdd" podStartSLOduration=2.25937574 podStartE2EDuration="5.185702754s" podCreationTimestamp="2026-02-17 16:17:35 +0000 UTC" firstStartedPulling="2026-02-17 16:17:36.215202859 +0000 UTC m=+864.969291591" lastFinishedPulling="2026-02-17 16:17:39.141529873 +0000 UTC m=+867.895618605" observedRunningTime="2026-02-17 16:17:40.175225525 +0000 UTC m=+868.929314257" watchObservedRunningTime="2026-02-17 16:17:40.185702754 +0000 UTC m=+868.939791486"
Feb 17 16:17:42 crc kubenswrapper[4672]: I0217 16:17:42.174785 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-57c9dfc97-nmggd" event={"ID":"4cc21a41-d48f-44f6-adfc-7d88c1e6c4b3","Type":"ContainerStarted","Data":"be4896e2d1c0b69453bbf2fa2f655465ef5971bf4315681b2405469e19a47c3d"}
Feb 17 16:17:42 crc kubenswrapper[4672]: I0217 16:17:42.176405 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-57c9dfc97-nmggd"
Feb 17 16:17:56 crc kubenswrapper[4672]: I0217 16:17:56.111324 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-57c9dfc97-nmggd"
Feb 17 16:17:56 crc kubenswrapper[4672]: I0217 16:17:56.144548 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-57c9dfc97-nmggd" podStartSLOduration=16.730189673 podStartE2EDuration="21.144473126s" podCreationTimestamp="2026-02-17 16:17:35 +0000 UTC" firstStartedPulling="2026-02-17 16:17:36.476455082 +0000 UTC m=+865.230543814" lastFinishedPulling="2026-02-17 16:17:40.890738535 +0000 UTC m=+869.644827267" observedRunningTime="2026-02-17 16:17:42.206018303 +0000 UTC m=+870.960107045" watchObservedRunningTime="2026-02-17 16:17:56.144473126 +0000 UTC m=+884.898561898"
Feb 17 16:18:05 crc kubenswrapper[4672]: I0217 16:18:05.584677 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qql8n"]
Feb 17 16:18:05 crc kubenswrapper[4672]: I0217 16:18:05.587332 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qql8n"
Feb 17 16:18:05 crc kubenswrapper[4672]: I0217 16:18:05.592396 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qql8n"]
Feb 17 16:18:05 crc kubenswrapper[4672]: I0217 16:18:05.679203 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pglwj\" (UniqueName: \"kubernetes.io/projected/f0fcc3ff-02c1-44db-87f6-64abab496fac-kube-api-access-pglwj\") pod \"certified-operators-qql8n\" (UID: \"f0fcc3ff-02c1-44db-87f6-64abab496fac\") " pod="openshift-marketplace/certified-operators-qql8n"
Feb 17 16:18:05 crc kubenswrapper[4672]: I0217 16:18:05.679268 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0fcc3ff-02c1-44db-87f6-64abab496fac-utilities\") pod \"certified-operators-qql8n\" (UID: \"f0fcc3ff-02c1-44db-87f6-64abab496fac\") " pod="openshift-marketplace/certified-operators-qql8n"
Feb 17 16:18:05 crc kubenswrapper[4672]: I0217 16:18:05.679313 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0fcc3ff-02c1-44db-87f6-64abab496fac-catalog-content\") pod \"certified-operators-qql8n\" (UID: \"f0fcc3ff-02c1-44db-87f6-64abab496fac\") " pod="openshift-marketplace/certified-operators-qql8n"
Feb 17 16:18:05 crc kubenswrapper[4672]: I0217 16:18:05.781158 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pglwj\" (UniqueName: \"kubernetes.io/projected/f0fcc3ff-02c1-44db-87f6-64abab496fac-kube-api-access-pglwj\") pod \"certified-operators-qql8n\" (UID: \"f0fcc3ff-02c1-44db-87f6-64abab496fac\") " pod="openshift-marketplace/certified-operators-qql8n"
Feb 17 16:18:05 crc kubenswrapper[4672]: I0217 16:18:05.781591 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0fcc3ff-02c1-44db-87f6-64abab496fac-utilities\") pod \"certified-operators-qql8n\" (UID: \"f0fcc3ff-02c1-44db-87f6-64abab496fac\") " pod="openshift-marketplace/certified-operators-qql8n"
Feb 17 16:18:05 crc kubenswrapper[4672]: I0217 16:18:05.781637 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0fcc3ff-02c1-44db-87f6-64abab496fac-catalog-content\") pod \"certified-operators-qql8n\" (UID: \"f0fcc3ff-02c1-44db-87f6-64abab496fac\") " pod="openshift-marketplace/certified-operators-qql8n"
Feb 17 16:18:05 crc kubenswrapper[4672]: I0217 16:18:05.782145 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0fcc3ff-02c1-44db-87f6-64abab496fac-utilities\") pod \"certified-operators-qql8n\" (UID: \"f0fcc3ff-02c1-44db-87f6-64abab496fac\") " pod="openshift-marketplace/certified-operators-qql8n"
Feb 17 16:18:05 crc kubenswrapper[4672]: I0217 16:18:05.782161 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0fcc3ff-02c1-44db-87f6-64abab496fac-catalog-content\") pod \"certified-operators-qql8n\" (UID: \"f0fcc3ff-02c1-44db-87f6-64abab496fac\") " pod="openshift-marketplace/certified-operators-qql8n"
Feb 17 16:18:05 crc kubenswrapper[4672]: I0217 16:18:05.809852 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pglwj\" (UniqueName: \"kubernetes.io/projected/f0fcc3ff-02c1-44db-87f6-64abab496fac-kube-api-access-pglwj\") pod \"certified-operators-qql8n\" (UID: \"f0fcc3ff-02c1-44db-87f6-64abab496fac\") " pod="openshift-marketplace/certified-operators-qql8n"
Feb 17 16:18:05 crc kubenswrapper[4672]: I0217 16:18:05.943195 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qql8n"
Feb 17 16:18:06 crc kubenswrapper[4672]: I0217 16:18:06.386631 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qql8n"]
Feb 17 16:18:07 crc kubenswrapper[4672]: I0217 16:18:07.349718 4672 generic.go:334] "Generic (PLEG): container finished" podID="f0fcc3ff-02c1-44db-87f6-64abab496fac" containerID="8943ee41340fcc9b5c0b30bbb144267ec947af8f84f8223593bffd6f2ca5890d" exitCode=0
Feb 17 16:18:07 crc kubenswrapper[4672]: I0217 16:18:07.349846 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qql8n" event={"ID":"f0fcc3ff-02c1-44db-87f6-64abab496fac","Type":"ContainerDied","Data":"8943ee41340fcc9b5c0b30bbb144267ec947af8f84f8223593bffd6f2ca5890d"}
Feb 17 16:18:07 crc kubenswrapper[4672]: I0217 16:18:07.350231 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qql8n" event={"ID":"f0fcc3ff-02c1-44db-87f6-64abab496fac","Type":"ContainerStarted","Data":"18b946ad89c3f34c5ed08d483d4cd5fa9d4019e1a531eb912650a4f1db4307c3"}
Feb 17 16:18:10 crc kubenswrapper[4672]: I0217 16:18:10.835576 4672 generic.go:334] "Generic (PLEG): container finished" podID="f0fcc3ff-02c1-44db-87f6-64abab496fac" containerID="47eb700fe1f953268fb4d68c8d9fb5ca30448474eee15547c386fe05e2325cfd" exitCode=0
Feb 17 16:18:10 crc kubenswrapper[4672]: I0217 16:18:10.835642 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qql8n" event={"ID":"f0fcc3ff-02c1-44db-87f6-64abab496fac","Type":"ContainerDied","Data":"47eb700fe1f953268fb4d68c8d9fb5ca30448474eee15547c386fe05e2325cfd"}
Feb 17 16:18:11 crc kubenswrapper[4672]: I0217 16:18:11.845870 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qql8n" event={"ID":"f0fcc3ff-02c1-44db-87f6-64abab496fac","Type":"ContainerStarted","Data":"8339e1bae78184611088357f6e588c919850716143b5372801156521e08874a5"}
Feb 17 16:18:11 crc kubenswrapper[4672]: I0217 16:18:11.871001 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qql8n" podStartSLOduration=2.97284487 podStartE2EDuration="6.870971676s" podCreationTimestamp="2026-02-17 16:18:05 +0000 UTC" firstStartedPulling="2026-02-17 16:18:07.351662724 +0000 UTC m=+896.105751456" lastFinishedPulling="2026-02-17 16:18:11.24978952 +0000 UTC m=+900.003878262" observedRunningTime="2026-02-17 16:18:11.865366627 +0000 UTC m=+900.619455369" watchObservedRunningTime="2026-02-17 16:18:11.870971676 +0000 UTC m=+900.625060448"
Feb 17 16:18:15 crc kubenswrapper[4672]: I0217 16:18:15.727974 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-65789c999f-rcpdd"
Feb 17 16:18:15 crc kubenswrapper[4672]: I0217 16:18:15.953998 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qql8n"
Feb 17 16:18:15 crc kubenswrapper[4672]: I0217 16:18:15.954068 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qql8n"
Feb 17 16:18:15 crc kubenswrapper[4672]: I0217 16:18:15.988977 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qql8n"
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.406296 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-p85jn"]
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.407343 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-p85jn"
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.409667 4672 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-fr58n"
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.410388 4672 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.410888 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-gcmvn"]
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.413866 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-gcmvn"
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.415990 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.418349 4672 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.425180 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-p85jn"]
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.502932 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-k5j8q"]
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.504244 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-k5j8q"
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.506491 4672 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.506933 4672 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.507036 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.507040 4672 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-k6vgk"
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.508307 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/fd5a2c9d-3e7b-4525-a730-efd640c47fc6-reloader\") pod \"frr-k8s-gcmvn\" (UID: \"fd5a2c9d-3e7b-4525-a730-efd640c47fc6\") " pod="metallb-system/frr-k8s-gcmvn"
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.508362 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66vpn\" (UniqueName: \"kubernetes.io/projected/fd5a2c9d-3e7b-4525-a730-efd640c47fc6-kube-api-access-66vpn\") pod \"frr-k8s-gcmvn\" (UID: \"fd5a2c9d-3e7b-4525-a730-efd640c47fc6\") " pod="metallb-system/frr-k8s-gcmvn"
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.508401 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9d4185ee-4bef-46e2-abf0-088c934361f2-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-p85jn\" (UID: \"9d4185ee-4bef-46e2-abf0-088c934361f2\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-p85jn"
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.508446 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/fd5a2c9d-3e7b-4525-a730-efd640c47fc6-frr-conf\") pod \"frr-k8s-gcmvn\" (UID: \"fd5a2c9d-3e7b-4525-a730-efd640c47fc6\") " pod="metallb-system/frr-k8s-gcmvn"
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.508473 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/fd5a2c9d-3e7b-4525-a730-efd640c47fc6-frr-sockets\") pod \"frr-k8s-gcmvn\" (UID: \"fd5a2c9d-3e7b-4525-a730-efd640c47fc6\") " pod="metallb-system/frr-k8s-gcmvn"
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.508538 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/fd5a2c9d-3e7b-4525-a730-efd640c47fc6-frr-startup\") pod \"frr-k8s-gcmvn\" (UID: \"fd5a2c9d-3e7b-4525-a730-efd640c47fc6\") " pod="metallb-system/frr-k8s-gcmvn"
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.508567 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/fd5a2c9d-3e7b-4525-a730-efd640c47fc6-metrics\") pod \"frr-k8s-gcmvn\" (UID: \"fd5a2c9d-3e7b-4525-a730-efd640c47fc6\") " pod="metallb-system/frr-k8s-gcmvn"
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.508587 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fd5a2c9d-3e7b-4525-a730-efd640c47fc6-metrics-certs\") pod \"frr-k8s-gcmvn\" (UID: \"fd5a2c9d-3e7b-4525-a730-efd640c47fc6\") " pod="metallb-system/frr-k8s-gcmvn"
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.508615 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnh2m\" (UniqueName: \"kubernetes.io/projected/9d4185ee-4bef-46e2-abf0-088c934361f2-kube-api-access-cnh2m\") pod \"frr-k8s-webhook-server-78b44bf5bb-p85jn\" (UID: \"9d4185ee-4bef-46e2-abf0-088c934361f2\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-p85jn"
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.518699 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-l8jng"]
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.519744 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-l8jng"
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.521266 4672 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.528930 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-l8jng"]
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.609803 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/fd5a2c9d-3e7b-4525-a730-efd640c47fc6-reloader\") pod \"frr-k8s-gcmvn\" (UID: \"fd5a2c9d-3e7b-4525-a730-efd640c47fc6\") " pod="metallb-system/frr-k8s-gcmvn"
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.610089 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c4ecaffa-63e8-4b49-9274-3e8f715b7d7b-memberlist\") pod \"speaker-k5j8q\" (UID: \"c4ecaffa-63e8-4b49-9274-3e8f715b7d7b\") " pod="metallb-system/speaker-k5j8q"
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.610274 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c4ecaffa-63e8-4b49-9274-3e8f715b7d7b-metrics-certs\") pod \"speaker-k5j8q\" (UID: \"c4ecaffa-63e8-4b49-9274-3e8f715b7d7b\") " pod="metallb-system/speaker-k5j8q"
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.610346 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66vpn\" (UniqueName: \"kubernetes.io/projected/fd5a2c9d-3e7b-4525-a730-efd640c47fc6-kube-api-access-66vpn\") pod \"frr-k8s-gcmvn\" (UID: \"fd5a2c9d-3e7b-4525-a730-efd640c47fc6\") " pod="metallb-system/frr-k8s-gcmvn"
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.610397 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c4ecaffa-63e8-4b49-9274-3e8f715b7d7b-metallb-excludel2\") pod \"speaker-k5j8q\" (UID: \"c4ecaffa-63e8-4b49-9274-3e8f715b7d7b\") " pod="metallb-system/speaker-k5j8q"
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.610425 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7497b865-7479-4d35-97da-3d333bc26d66-metrics-certs\") pod \"controller-69bbfbf88f-l8jng\" (UID: \"7497b865-7479-4d35-97da-3d333bc26d66\") " pod="metallb-system/controller-69bbfbf88f-l8jng"
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.610445 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9d4185ee-4bef-46e2-abf0-088c934361f2-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-p85jn\" (UID: \"9d4185ee-4bef-46e2-abf0-088c934361f2\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-p85jn"
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.610538 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/fd5a2c9d-3e7b-4525-a730-efd640c47fc6-frr-conf\") pod \"frr-k8s-gcmvn\" (UID: \"fd5a2c9d-3e7b-4525-a730-efd640c47fc6\") " pod="metallb-system/frr-k8s-gcmvn"
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.610564 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm6ns\" (UniqueName: \"kubernetes.io/projected/7497b865-7479-4d35-97da-3d333bc26d66-kube-api-access-vm6ns\") pod \"controller-69bbfbf88f-l8jng\" (UID: \"7497b865-7479-4d35-97da-3d333bc26d66\") " pod="metallb-system/controller-69bbfbf88f-l8jng"
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.610583 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t25hb\" (UniqueName: \"kubernetes.io/projected/c4ecaffa-63e8-4b49-9274-3e8f715b7d7b-kube-api-access-t25hb\") pod \"speaker-k5j8q\" (UID: \"c4ecaffa-63e8-4b49-9274-3e8f715b7d7b\") " pod="metallb-system/speaker-k5j8q"
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.610598 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/fd5a2c9d-3e7b-4525-a730-efd640c47fc6-frr-sockets\") pod \"frr-k8s-gcmvn\" (UID: \"fd5a2c9d-3e7b-4525-a730-efd640c47fc6\") " pod="metallb-system/frr-k8s-gcmvn"
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.610662 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/fd5a2c9d-3e7b-4525-a730-efd640c47fc6-reloader\") pod \"frr-k8s-gcmvn\" (UID: \"fd5a2c9d-3e7b-4525-a730-efd640c47fc6\") " pod="metallb-system/frr-k8s-gcmvn"
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.610672 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/fd5a2c9d-3e7b-4525-a730-efd640c47fc6-frr-startup\") pod \"frr-k8s-gcmvn\" (UID: \"fd5a2c9d-3e7b-4525-a730-efd640c47fc6\") " pod="metallb-system/frr-k8s-gcmvn"
Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.610730 4672 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7497b865-7479-4d35-97da-3d333bc26d66-cert\") pod \"controller-69bbfbf88f-l8jng\" (UID: \"7497b865-7479-4d35-97da-3d333bc26d66\") " pod="metallb-system/controller-69bbfbf88f-l8jng" Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.610753 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/fd5a2c9d-3e7b-4525-a730-efd640c47fc6-metrics\") pod \"frr-k8s-gcmvn\" (UID: \"fd5a2c9d-3e7b-4525-a730-efd640c47fc6\") " pod="metallb-system/frr-k8s-gcmvn" Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.610777 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fd5a2c9d-3e7b-4525-a730-efd640c47fc6-metrics-certs\") pod \"frr-k8s-gcmvn\" (UID: \"fd5a2c9d-3e7b-4525-a730-efd640c47fc6\") " pod="metallb-system/frr-k8s-gcmvn" Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.610804 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnh2m\" (UniqueName: \"kubernetes.io/projected/9d4185ee-4bef-46e2-abf0-088c934361f2-kube-api-access-cnh2m\") pod \"frr-k8s-webhook-server-78b44bf5bb-p85jn\" (UID: \"9d4185ee-4bef-46e2-abf0-088c934361f2\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-p85jn" Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.611065 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/fd5a2c9d-3e7b-4525-a730-efd640c47fc6-frr-conf\") pod \"frr-k8s-gcmvn\" (UID: \"fd5a2c9d-3e7b-4525-a730-efd640c47fc6\") " pod="metallb-system/frr-k8s-gcmvn" Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.611202 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: 
\"kubernetes.io/empty-dir/fd5a2c9d-3e7b-4525-a730-efd640c47fc6-metrics\") pod \"frr-k8s-gcmvn\" (UID: \"fd5a2c9d-3e7b-4525-a730-efd640c47fc6\") " pod="metallb-system/frr-k8s-gcmvn" Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.611427 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/fd5a2c9d-3e7b-4525-a730-efd640c47fc6-frr-sockets\") pod \"frr-k8s-gcmvn\" (UID: \"fd5a2c9d-3e7b-4525-a730-efd640c47fc6\") " pod="metallb-system/frr-k8s-gcmvn" Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.611640 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/fd5a2c9d-3e7b-4525-a730-efd640c47fc6-frr-startup\") pod \"frr-k8s-gcmvn\" (UID: \"fd5a2c9d-3e7b-4525-a730-efd640c47fc6\") " pod="metallb-system/frr-k8s-gcmvn" Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.617182 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fd5a2c9d-3e7b-4525-a730-efd640c47fc6-metrics-certs\") pod \"frr-k8s-gcmvn\" (UID: \"fd5a2c9d-3e7b-4525-a730-efd640c47fc6\") " pod="metallb-system/frr-k8s-gcmvn" Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.624792 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9d4185ee-4bef-46e2-abf0-088c934361f2-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-p85jn\" (UID: \"9d4185ee-4bef-46e2-abf0-088c934361f2\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-p85jn" Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.628049 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66vpn\" (UniqueName: \"kubernetes.io/projected/fd5a2c9d-3e7b-4525-a730-efd640c47fc6-kube-api-access-66vpn\") pod \"frr-k8s-gcmvn\" (UID: \"fd5a2c9d-3e7b-4525-a730-efd640c47fc6\") " pod="metallb-system/frr-k8s-gcmvn" Feb 17 
16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.628714 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnh2m\" (UniqueName: \"kubernetes.io/projected/9d4185ee-4bef-46e2-abf0-088c934361f2-kube-api-access-cnh2m\") pod \"frr-k8s-webhook-server-78b44bf5bb-p85jn\" (UID: \"9d4185ee-4bef-46e2-abf0-088c934361f2\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-p85jn" Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.711683 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t25hb\" (UniqueName: \"kubernetes.io/projected/c4ecaffa-63e8-4b49-9274-3e8f715b7d7b-kube-api-access-t25hb\") pod \"speaker-k5j8q\" (UID: \"c4ecaffa-63e8-4b49-9274-3e8f715b7d7b\") " pod="metallb-system/speaker-k5j8q" Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.711739 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm6ns\" (UniqueName: \"kubernetes.io/projected/7497b865-7479-4d35-97da-3d333bc26d66-kube-api-access-vm6ns\") pod \"controller-69bbfbf88f-l8jng\" (UID: \"7497b865-7479-4d35-97da-3d333bc26d66\") " pod="metallb-system/controller-69bbfbf88f-l8jng" Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.711796 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7497b865-7479-4d35-97da-3d333bc26d66-cert\") pod \"controller-69bbfbf88f-l8jng\" (UID: \"7497b865-7479-4d35-97da-3d333bc26d66\") " pod="metallb-system/controller-69bbfbf88f-l8jng" Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.711839 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c4ecaffa-63e8-4b49-9274-3e8f715b7d7b-memberlist\") pod \"speaker-k5j8q\" (UID: \"c4ecaffa-63e8-4b49-9274-3e8f715b7d7b\") " pod="metallb-system/speaker-k5j8q" Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.711865 4672 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c4ecaffa-63e8-4b49-9274-3e8f715b7d7b-metrics-certs\") pod \"speaker-k5j8q\" (UID: \"c4ecaffa-63e8-4b49-9274-3e8f715b7d7b\") " pod="metallb-system/speaker-k5j8q" Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.711894 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c4ecaffa-63e8-4b49-9274-3e8f715b7d7b-metallb-excludel2\") pod \"speaker-k5j8q\" (UID: \"c4ecaffa-63e8-4b49-9274-3e8f715b7d7b\") " pod="metallb-system/speaker-k5j8q" Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.711919 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7497b865-7479-4d35-97da-3d333bc26d66-metrics-certs\") pod \"controller-69bbfbf88f-l8jng\" (UID: \"7497b865-7479-4d35-97da-3d333bc26d66\") " pod="metallb-system/controller-69bbfbf88f-l8jng" Feb 17 16:18:16 crc kubenswrapper[4672]: E0217 16:18:16.712943 4672 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 17 16:18:16 crc kubenswrapper[4672]: E0217 16:18:16.713030 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4ecaffa-63e8-4b49-9274-3e8f715b7d7b-memberlist podName:c4ecaffa-63e8-4b49-9274-3e8f715b7d7b nodeName:}" failed. No retries permitted until 2026-02-17 16:18:17.213013333 +0000 UTC m=+905.967102065 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c4ecaffa-63e8-4b49-9274-3e8f715b7d7b-memberlist") pod "speaker-k5j8q" (UID: "c4ecaffa-63e8-4b49-9274-3e8f715b7d7b") : secret "metallb-memberlist" not found Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.713341 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c4ecaffa-63e8-4b49-9274-3e8f715b7d7b-metallb-excludel2\") pod \"speaker-k5j8q\" (UID: \"c4ecaffa-63e8-4b49-9274-3e8f715b7d7b\") " pod="metallb-system/speaker-k5j8q" Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.714765 4672 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.716098 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c4ecaffa-63e8-4b49-9274-3e8f715b7d7b-metrics-certs\") pod \"speaker-k5j8q\" (UID: \"c4ecaffa-63e8-4b49-9274-3e8f715b7d7b\") " pod="metallb-system/speaker-k5j8q" Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.716581 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7497b865-7479-4d35-97da-3d333bc26d66-metrics-certs\") pod \"controller-69bbfbf88f-l8jng\" (UID: \"7497b865-7479-4d35-97da-3d333bc26d66\") " pod="metallb-system/controller-69bbfbf88f-l8jng" Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.730555 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-p85jn" Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.732673 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7497b865-7479-4d35-97da-3d333bc26d66-cert\") pod \"controller-69bbfbf88f-l8jng\" (UID: \"7497b865-7479-4d35-97da-3d333bc26d66\") " pod="metallb-system/controller-69bbfbf88f-l8jng" Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.735260 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm6ns\" (UniqueName: \"kubernetes.io/projected/7497b865-7479-4d35-97da-3d333bc26d66-kube-api-access-vm6ns\") pod \"controller-69bbfbf88f-l8jng\" (UID: \"7497b865-7479-4d35-97da-3d333bc26d66\") " pod="metallb-system/controller-69bbfbf88f-l8jng" Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.735388 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t25hb\" (UniqueName: \"kubernetes.io/projected/c4ecaffa-63e8-4b49-9274-3e8f715b7d7b-kube-api-access-t25hb\") pod \"speaker-k5j8q\" (UID: \"c4ecaffa-63e8-4b49-9274-3e8f715b7d7b\") " pod="metallb-system/speaker-k5j8q" Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.739433 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-gcmvn" Feb 17 16:18:16 crc kubenswrapper[4672]: I0217 16:18:16.839963 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-l8jng" Feb 17 16:18:17 crc kubenswrapper[4672]: I0217 16:18:17.031312 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qql8n" Feb 17 16:18:17 crc kubenswrapper[4672]: I0217 16:18:17.081540 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-p85jn"] Feb 17 16:18:17 crc kubenswrapper[4672]: I0217 16:18:17.099620 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qql8n"] Feb 17 16:18:17 crc kubenswrapper[4672]: I0217 16:18:17.223380 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c4ecaffa-63e8-4b49-9274-3e8f715b7d7b-memberlist\") pod \"speaker-k5j8q\" (UID: \"c4ecaffa-63e8-4b49-9274-3e8f715b7d7b\") " pod="metallb-system/speaker-k5j8q" Feb 17 16:18:17 crc kubenswrapper[4672]: E0217 16:18:17.223576 4672 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 17 16:18:17 crc kubenswrapper[4672]: E0217 16:18:17.223684 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4ecaffa-63e8-4b49-9274-3e8f715b7d7b-memberlist podName:c4ecaffa-63e8-4b49-9274-3e8f715b7d7b nodeName:}" failed. No retries permitted until 2026-02-17 16:18:18.223663385 +0000 UTC m=+906.977752127 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c4ecaffa-63e8-4b49-9274-3e8f715b7d7b-memberlist") pod "speaker-k5j8q" (UID: "c4ecaffa-63e8-4b49-9274-3e8f715b7d7b") : secret "metallb-memberlist" not found Feb 17 16:18:17 crc kubenswrapper[4672]: I0217 16:18:17.407055 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-l8jng"] Feb 17 16:18:17 crc kubenswrapper[4672]: W0217 16:18:17.412639 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7497b865_7479_4d35_97da_3d333bc26d66.slice/crio-3eb1cfdb603422f05dedfeb441e745662f224b6d78e0051bdc1ae5b942bb071c WatchSource:0}: Error finding container 3eb1cfdb603422f05dedfeb441e745662f224b6d78e0051bdc1ae5b942bb071c: Status 404 returned error can't find the container with id 3eb1cfdb603422f05dedfeb441e745662f224b6d78e0051bdc1ae5b942bb071c Feb 17 16:18:17 crc kubenswrapper[4672]: I0217 16:18:17.930262 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-p85jn" event={"ID":"9d4185ee-4bef-46e2-abf0-088c934361f2","Type":"ContainerStarted","Data":"23e2bd10635c0aea2d887c55ac8a834dade2e99450cfc51393f7a2171e7a0bb6"} Feb 17 16:18:17 crc kubenswrapper[4672]: I0217 16:18:17.932629 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-l8jng" event={"ID":"7497b865-7479-4d35-97da-3d333bc26d66","Type":"ContainerStarted","Data":"599ff62f32f7f4418044b245274851546242508e7994e1400390c072ddf0ef5e"} Feb 17 16:18:17 crc kubenswrapper[4672]: I0217 16:18:17.932685 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-l8jng" event={"ID":"7497b865-7479-4d35-97da-3d333bc26d66","Type":"ContainerStarted","Data":"d452831a53523c86495922e05277deade9143d9978cc0cb90ac93c4705bcef02"} Feb 17 16:18:17 crc kubenswrapper[4672]: I0217 16:18:17.932703 4672 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-l8jng" event={"ID":"7497b865-7479-4d35-97da-3d333bc26d66","Type":"ContainerStarted","Data":"3eb1cfdb603422f05dedfeb441e745662f224b6d78e0051bdc1ae5b942bb071c"} Feb 17 16:18:17 crc kubenswrapper[4672]: I0217 16:18:17.932948 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-l8jng" Feb 17 16:18:17 crc kubenswrapper[4672]: I0217 16:18:17.933789 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gcmvn" event={"ID":"fd5a2c9d-3e7b-4525-a730-efd640c47fc6","Type":"ContainerStarted","Data":"00b3394a50784bd50d8fc9d487fbe33002045da75ec84f7b0e6ddd202df00850"} Feb 17 16:18:17 crc kubenswrapper[4672]: I0217 16:18:17.964417 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-l8jng" podStartSLOduration=1.964390224 podStartE2EDuration="1.964390224s" podCreationTimestamp="2026-02-17 16:18:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:18:17.955902839 +0000 UTC m=+906.709991611" watchObservedRunningTime="2026-02-17 16:18:17.964390224 +0000 UTC m=+906.718478986" Feb 17 16:18:18 crc kubenswrapper[4672]: I0217 16:18:18.236277 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c4ecaffa-63e8-4b49-9274-3e8f715b7d7b-memberlist\") pod \"speaker-k5j8q\" (UID: \"c4ecaffa-63e8-4b49-9274-3e8f715b7d7b\") " pod="metallb-system/speaker-k5j8q" Feb 17 16:18:18 crc kubenswrapper[4672]: I0217 16:18:18.250941 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c4ecaffa-63e8-4b49-9274-3e8f715b7d7b-memberlist\") pod \"speaker-k5j8q\" (UID: \"c4ecaffa-63e8-4b49-9274-3e8f715b7d7b\") " 
pod="metallb-system/speaker-k5j8q" Feb 17 16:18:18 crc kubenswrapper[4672]: I0217 16:18:18.322453 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-k5j8q" Feb 17 16:18:18 crc kubenswrapper[4672]: W0217 16:18:18.357739 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4ecaffa_63e8_4b49_9274_3e8f715b7d7b.slice/crio-a2233adb172642c049d326967c5894bbc692d1737e9f285cee29cca940f5eaba WatchSource:0}: Error finding container a2233adb172642c049d326967c5894bbc692d1737e9f285cee29cca940f5eaba: Status 404 returned error can't find the container with id a2233adb172642c049d326967c5894bbc692d1737e9f285cee29cca940f5eaba Feb 17 16:18:18 crc kubenswrapper[4672]: I0217 16:18:18.951023 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-k5j8q" event={"ID":"c4ecaffa-63e8-4b49-9274-3e8f715b7d7b","Type":"ContainerStarted","Data":"2e7ea9fc053c7daa838cb0e01503c6c99445a52bf175a4a862cef770225fc75e"} Feb 17 16:18:18 crc kubenswrapper[4672]: I0217 16:18:18.951060 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-k5j8q" event={"ID":"c4ecaffa-63e8-4b49-9274-3e8f715b7d7b","Type":"ContainerStarted","Data":"9fb97d58a0aab6a2cdfa8604db54391cd975053079412f22711b0f4d9fbb38ab"} Feb 17 16:18:18 crc kubenswrapper[4672]: I0217 16:18:18.951070 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-k5j8q" event={"ID":"c4ecaffa-63e8-4b49-9274-3e8f715b7d7b","Type":"ContainerStarted","Data":"a2233adb172642c049d326967c5894bbc692d1737e9f285cee29cca940f5eaba"} Feb 17 16:18:18 crc kubenswrapper[4672]: I0217 16:18:18.951657 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qql8n" podUID="f0fcc3ff-02c1-44db-87f6-64abab496fac" containerName="registry-server" 
containerID="cri-o://8339e1bae78184611088357f6e588c919850716143b5372801156521e08874a5" gracePeriod=2 Feb 17 16:18:18 crc kubenswrapper[4672]: I0217 16:18:18.951827 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-k5j8q" Feb 17 16:18:19 crc kubenswrapper[4672]: I0217 16:18:19.515415 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qql8n" Feb 17 16:18:19 crc kubenswrapper[4672]: I0217 16:18:19.532688 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-k5j8q" podStartSLOduration=3.532672545 podStartE2EDuration="3.532672545s" podCreationTimestamp="2026-02-17 16:18:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:18:18.982583007 +0000 UTC m=+907.736671739" watchObservedRunningTime="2026-02-17 16:18:19.532672545 +0000 UTC m=+908.286761277" Feb 17 16:18:19 crc kubenswrapper[4672]: I0217 16:18:19.657502 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pglwj\" (UniqueName: \"kubernetes.io/projected/f0fcc3ff-02c1-44db-87f6-64abab496fac-kube-api-access-pglwj\") pod \"f0fcc3ff-02c1-44db-87f6-64abab496fac\" (UID: \"f0fcc3ff-02c1-44db-87f6-64abab496fac\") " Feb 17 16:18:19 crc kubenswrapper[4672]: I0217 16:18:19.657677 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0fcc3ff-02c1-44db-87f6-64abab496fac-catalog-content\") pod \"f0fcc3ff-02c1-44db-87f6-64abab496fac\" (UID: \"f0fcc3ff-02c1-44db-87f6-64abab496fac\") " Feb 17 16:18:19 crc kubenswrapper[4672]: I0217 16:18:19.657716 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0fcc3ff-02c1-44db-87f6-64abab496fac-utilities\") 
pod \"f0fcc3ff-02c1-44db-87f6-64abab496fac\" (UID: \"f0fcc3ff-02c1-44db-87f6-64abab496fac\") " Feb 17 16:18:19 crc kubenswrapper[4672]: I0217 16:18:19.658492 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0fcc3ff-02c1-44db-87f6-64abab496fac-utilities" (OuterVolumeSpecName: "utilities") pod "f0fcc3ff-02c1-44db-87f6-64abab496fac" (UID: "f0fcc3ff-02c1-44db-87f6-64abab496fac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:18:19 crc kubenswrapper[4672]: I0217 16:18:19.661931 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0fcc3ff-02c1-44db-87f6-64abab496fac-kube-api-access-pglwj" (OuterVolumeSpecName: "kube-api-access-pglwj") pod "f0fcc3ff-02c1-44db-87f6-64abab496fac" (UID: "f0fcc3ff-02c1-44db-87f6-64abab496fac"). InnerVolumeSpecName "kube-api-access-pglwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:19 crc kubenswrapper[4672]: I0217 16:18:19.719072 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0fcc3ff-02c1-44db-87f6-64abab496fac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f0fcc3ff-02c1-44db-87f6-64abab496fac" (UID: "f0fcc3ff-02c1-44db-87f6-64abab496fac"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:18:19 crc kubenswrapper[4672]: I0217 16:18:19.761187 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pglwj\" (UniqueName: \"kubernetes.io/projected/f0fcc3ff-02c1-44db-87f6-64abab496fac-kube-api-access-pglwj\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:19 crc kubenswrapper[4672]: I0217 16:18:19.761230 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0fcc3ff-02c1-44db-87f6-64abab496fac-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:19 crc kubenswrapper[4672]: I0217 16:18:19.761239 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0fcc3ff-02c1-44db-87f6-64abab496fac-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:19 crc kubenswrapper[4672]: I0217 16:18:19.976308 4672 generic.go:334] "Generic (PLEG): container finished" podID="f0fcc3ff-02c1-44db-87f6-64abab496fac" containerID="8339e1bae78184611088357f6e588c919850716143b5372801156521e08874a5" exitCode=0 Feb 17 16:18:19 crc kubenswrapper[4672]: I0217 16:18:19.976410 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qql8n" event={"ID":"f0fcc3ff-02c1-44db-87f6-64abab496fac","Type":"ContainerDied","Data":"8339e1bae78184611088357f6e588c919850716143b5372801156521e08874a5"} Feb 17 16:18:19 crc kubenswrapper[4672]: I0217 16:18:19.976501 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qql8n" event={"ID":"f0fcc3ff-02c1-44db-87f6-64abab496fac","Type":"ContainerDied","Data":"18b946ad89c3f34c5ed08d483d4cd5fa9d4019e1a531eb912650a4f1db4307c3"} Feb 17 16:18:19 crc kubenswrapper[4672]: I0217 16:18:19.976533 4672 scope.go:117] "RemoveContainer" containerID="8339e1bae78184611088357f6e588c919850716143b5372801156521e08874a5" Feb 17 16:18:19 crc kubenswrapper[4672]: I0217 
16:18:19.977110 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qql8n" Feb 17 16:18:20 crc kubenswrapper[4672]: I0217 16:18:20.005372 4672 scope.go:117] "RemoveContainer" containerID="47eb700fe1f953268fb4d68c8d9fb5ca30448474eee15547c386fe05e2325cfd" Feb 17 16:18:20 crc kubenswrapper[4672]: I0217 16:18:20.008757 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qql8n"] Feb 17 16:18:20 crc kubenswrapper[4672]: I0217 16:18:20.012602 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qql8n"] Feb 17 16:18:20 crc kubenswrapper[4672]: I0217 16:18:20.026067 4672 scope.go:117] "RemoveContainer" containerID="8943ee41340fcc9b5c0b30bbb144267ec947af8f84f8223593bffd6f2ca5890d" Feb 17 16:18:20 crc kubenswrapper[4672]: I0217 16:18:20.083664 4672 scope.go:117] "RemoveContainer" containerID="8339e1bae78184611088357f6e588c919850716143b5372801156521e08874a5" Feb 17 16:18:20 crc kubenswrapper[4672]: E0217 16:18:20.085955 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8339e1bae78184611088357f6e588c919850716143b5372801156521e08874a5\": container with ID starting with 8339e1bae78184611088357f6e588c919850716143b5372801156521e08874a5 not found: ID does not exist" containerID="8339e1bae78184611088357f6e588c919850716143b5372801156521e08874a5" Feb 17 16:18:20 crc kubenswrapper[4672]: I0217 16:18:20.085989 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8339e1bae78184611088357f6e588c919850716143b5372801156521e08874a5"} err="failed to get container status \"8339e1bae78184611088357f6e588c919850716143b5372801156521e08874a5\": rpc error: code = NotFound desc = could not find container \"8339e1bae78184611088357f6e588c919850716143b5372801156521e08874a5\": container with ID starting with 
8339e1bae78184611088357f6e588c919850716143b5372801156521e08874a5 not found: ID does not exist"
Feb 17 16:18:20 crc kubenswrapper[4672]: I0217 16:18:20.086013 4672 scope.go:117] "RemoveContainer" containerID="47eb700fe1f953268fb4d68c8d9fb5ca30448474eee15547c386fe05e2325cfd"
Feb 17 16:18:20 crc kubenswrapper[4672]: E0217 16:18:20.089075 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47eb700fe1f953268fb4d68c8d9fb5ca30448474eee15547c386fe05e2325cfd\": container with ID starting with 47eb700fe1f953268fb4d68c8d9fb5ca30448474eee15547c386fe05e2325cfd not found: ID does not exist" containerID="47eb700fe1f953268fb4d68c8d9fb5ca30448474eee15547c386fe05e2325cfd"
Feb 17 16:18:20 crc kubenswrapper[4672]: I0217 16:18:20.089120 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47eb700fe1f953268fb4d68c8d9fb5ca30448474eee15547c386fe05e2325cfd"} err="failed to get container status \"47eb700fe1f953268fb4d68c8d9fb5ca30448474eee15547c386fe05e2325cfd\": rpc error: code = NotFound desc = could not find container \"47eb700fe1f953268fb4d68c8d9fb5ca30448474eee15547c386fe05e2325cfd\": container with ID starting with 47eb700fe1f953268fb4d68c8d9fb5ca30448474eee15547c386fe05e2325cfd not found: ID does not exist"
Feb 17 16:18:20 crc kubenswrapper[4672]: I0217 16:18:20.089148 4672 scope.go:117] "RemoveContainer" containerID="8943ee41340fcc9b5c0b30bbb144267ec947af8f84f8223593bffd6f2ca5890d"
Feb 17 16:18:20 crc kubenswrapper[4672]: E0217 16:18:20.093784 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8943ee41340fcc9b5c0b30bbb144267ec947af8f84f8223593bffd6f2ca5890d\": container with ID starting with 8943ee41340fcc9b5c0b30bbb144267ec947af8f84f8223593bffd6f2ca5890d not found: ID does not exist" containerID="8943ee41340fcc9b5c0b30bbb144267ec947af8f84f8223593bffd6f2ca5890d"
Feb 17 16:18:20 crc kubenswrapper[4672]: I0217 16:18:20.093834 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8943ee41340fcc9b5c0b30bbb144267ec947af8f84f8223593bffd6f2ca5890d"} err="failed to get container status \"8943ee41340fcc9b5c0b30bbb144267ec947af8f84f8223593bffd6f2ca5890d\": rpc error: code = NotFound desc = could not find container \"8943ee41340fcc9b5c0b30bbb144267ec947af8f84f8223593bffd6f2ca5890d\": container with ID starting with 8943ee41340fcc9b5c0b30bbb144267ec947af8f84f8223593bffd6f2ca5890d not found: ID does not exist"
Feb 17 16:18:21 crc kubenswrapper[4672]: I0217 16:18:21.970659 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0fcc3ff-02c1-44db-87f6-64abab496fac" path="/var/lib/kubelet/pods/f0fcc3ff-02c1-44db-87f6-64abab496fac/volumes"
Feb 17 16:18:25 crc kubenswrapper[4672]: I0217 16:18:25.007125 4672 generic.go:334] "Generic (PLEG): container finished" podID="fd5a2c9d-3e7b-4525-a730-efd640c47fc6" containerID="1ea08d5f2252da5d5d9a0fdc25b355b55fe1855baffd92c754cb163ac3a8594a" exitCode=0
Feb 17 16:18:25 crc kubenswrapper[4672]: I0217 16:18:25.007207 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gcmvn" event={"ID":"fd5a2c9d-3e7b-4525-a730-efd640c47fc6","Type":"ContainerDied","Data":"1ea08d5f2252da5d5d9a0fdc25b355b55fe1855baffd92c754cb163ac3a8594a"}
Feb 17 16:18:25 crc kubenswrapper[4672]: I0217 16:18:25.010350 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-p85jn" event={"ID":"9d4185ee-4bef-46e2-abf0-088c934361f2","Type":"ContainerStarted","Data":"f4caf0e2a182a34551114be5c24f405b127888f871ca894f6bd260a03748e9e0"}
Feb 17 16:18:25 crc kubenswrapper[4672]: I0217 16:18:25.010701 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-p85jn"
Feb 17 16:18:25 crc kubenswrapper[4672]: I0217 16:18:25.061594 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-p85jn" podStartSLOduration=1.4708173549999999 podStartE2EDuration="9.06150581s" podCreationTimestamp="2026-02-17 16:18:16 +0000 UTC" firstStartedPulling="2026-02-17 16:18:17.091736074 +0000 UTC m=+905.845824796" lastFinishedPulling="2026-02-17 16:18:24.682424509 +0000 UTC m=+913.436513251" observedRunningTime="2026-02-17 16:18:25.049777148 +0000 UTC m=+913.803865910" watchObservedRunningTime="2026-02-17 16:18:25.06150581 +0000 UTC m=+913.815594582"
Feb 17 16:18:26 crc kubenswrapper[4672]: I0217 16:18:26.017580 4672 generic.go:334] "Generic (PLEG): container finished" podID="fd5a2c9d-3e7b-4525-a730-efd640c47fc6" containerID="4bab0094af05fb1122a59c73ddc42eda68e0cabc1c6a94b4ea3c1dd8251c23fa" exitCode=0
Feb 17 16:18:26 crc kubenswrapper[4672]: I0217 16:18:26.017655 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gcmvn" event={"ID":"fd5a2c9d-3e7b-4525-a730-efd640c47fc6","Type":"ContainerDied","Data":"4bab0094af05fb1122a59c73ddc42eda68e0cabc1c6a94b4ea3c1dd8251c23fa"}
Feb 17 16:18:27 crc kubenswrapper[4672]: I0217 16:18:27.028671 4672 generic.go:334] "Generic (PLEG): container finished" podID="fd5a2c9d-3e7b-4525-a730-efd640c47fc6" containerID="a0264a7d5ece1d20ad55aeb86a82f239fd419cfa493d569df96b52993527e409" exitCode=0
Feb 17 16:18:27 crc kubenswrapper[4672]: I0217 16:18:27.028711 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gcmvn" event={"ID":"fd5a2c9d-3e7b-4525-a730-efd640c47fc6","Type":"ContainerDied","Data":"a0264a7d5ece1d20ad55aeb86a82f239fd419cfa493d569df96b52993527e409"}
Feb 17 16:18:27 crc kubenswrapper[4672]: I0217 16:18:27.565742 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 16:18:27 crc kubenswrapper[4672]: I0217 16:18:27.566095 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 16:18:28 crc kubenswrapper[4672]: I0217 16:18:28.038888 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gcmvn" event={"ID":"fd5a2c9d-3e7b-4525-a730-efd640c47fc6","Type":"ContainerStarted","Data":"d35138cefe864e62d1e72ad37cc6b88b09d1a9548c5746dc14732275314753b9"}
Feb 17 16:18:28 crc kubenswrapper[4672]: I0217 16:18:28.038948 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gcmvn" event={"ID":"fd5a2c9d-3e7b-4525-a730-efd640c47fc6","Type":"ContainerStarted","Data":"f3bc3fffaef6b0c979f62c82594c774fa17932532a576425639b9f534063ac00"}
Feb 17 16:18:28 crc kubenswrapper[4672]: I0217 16:18:28.038968 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gcmvn" event={"ID":"fd5a2c9d-3e7b-4525-a730-efd640c47fc6","Type":"ContainerStarted","Data":"872abcb802eb956a8cdf1b01866b7ae9b6416a562fea3b0d2619a25a549e7e78"}
Feb 17 16:18:28 crc kubenswrapper[4672]: I0217 16:18:28.038985 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gcmvn" event={"ID":"fd5a2c9d-3e7b-4525-a730-efd640c47fc6","Type":"ContainerStarted","Data":"9f0dce772904f5304a27e8c0649b168c939fbd9d3e5484195b9cb1962ffcfb4a"}
Feb 17 16:18:28 crc kubenswrapper[4672]: I0217 16:18:28.039000 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gcmvn" event={"ID":"fd5a2c9d-3e7b-4525-a730-efd640c47fc6","Type":"ContainerStarted","Data":"7cca46cbd9368b6abe4466a1db87cb3a486b44206d56e05474a175192699840b"}
Feb 17 16:18:28 crc kubenswrapper[4672]: I0217 16:18:28.039015 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gcmvn" event={"ID":"fd5a2c9d-3e7b-4525-a730-efd640c47fc6","Type":"ContainerStarted","Data":"20d1918f4c42c6d665d02c4bb293ecd85f989f849aa55f333da58ad8cd8a5470"}
Feb 17 16:18:28 crc kubenswrapper[4672]: I0217 16:18:28.039053 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-gcmvn"
Feb 17 16:18:28 crc kubenswrapper[4672]: I0217 16:18:28.062304 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-gcmvn" podStartSLOduration=4.384078842 podStartE2EDuration="12.06228537s" podCreationTimestamp="2026-02-17 16:18:16 +0000 UTC" firstStartedPulling="2026-02-17 16:18:16.985685669 +0000 UTC m=+905.739774401" lastFinishedPulling="2026-02-17 16:18:24.663892157 +0000 UTC m=+913.417980929" observedRunningTime="2026-02-17 16:18:28.056998989 +0000 UTC m=+916.811087741" watchObservedRunningTime="2026-02-17 16:18:28.06228537 +0000 UTC m=+916.816374112"
Feb 17 16:18:28 crc kubenswrapper[4672]: I0217 16:18:28.327175 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-k5j8q"
Feb 17 16:18:31 crc kubenswrapper[4672]: I0217 16:18:31.162642 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-2rldp"]
Feb 17 16:18:31 crc kubenswrapper[4672]: E0217 16:18:31.163254 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0fcc3ff-02c1-44db-87f6-64abab496fac" containerName="extract-utilities"
Feb 17 16:18:31 crc kubenswrapper[4672]: I0217 16:18:31.163269 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0fcc3ff-02c1-44db-87f6-64abab496fac" containerName="extract-utilities"
Feb 17 16:18:31 crc kubenswrapper[4672]: E0217 16:18:31.163285 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0fcc3ff-02c1-44db-87f6-64abab496fac" containerName="extract-content"
Feb 17 16:18:31 crc kubenswrapper[4672]: I0217 16:18:31.163293 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0fcc3ff-02c1-44db-87f6-64abab496fac" containerName="extract-content"
Feb 17 16:18:31 crc kubenswrapper[4672]: E0217 16:18:31.163311 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0fcc3ff-02c1-44db-87f6-64abab496fac" containerName="registry-server"
Feb 17 16:18:31 crc kubenswrapper[4672]: I0217 16:18:31.163319 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0fcc3ff-02c1-44db-87f6-64abab496fac" containerName="registry-server"
Feb 17 16:18:31 crc kubenswrapper[4672]: I0217 16:18:31.163475 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0fcc3ff-02c1-44db-87f6-64abab496fac" containerName="registry-server"
Feb 17 16:18:31 crc kubenswrapper[4672]: I0217 16:18:31.163975 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-2rldp"
Feb 17 16:18:31 crc kubenswrapper[4672]: I0217 16:18:31.166949 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-cxpnx"
Feb 17 16:18:31 crc kubenswrapper[4672]: I0217 16:18:31.168449 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Feb 17 16:18:31 crc kubenswrapper[4672]: I0217 16:18:31.173305 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2rldp"]
Feb 17 16:18:31 crc kubenswrapper[4672]: I0217 16:18:31.176363 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Feb 17 16:18:31 crc kubenswrapper[4672]: I0217 16:18:31.324233 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkth6\" (UniqueName: \"kubernetes.io/projected/8fa03bbf-8174-41ab-94ee-88e5d46e0bd4-kube-api-access-jkth6\") pod \"openstack-operator-index-2rldp\" (UID: \"8fa03bbf-8174-41ab-94ee-88e5d46e0bd4\") " pod="openstack-operators/openstack-operator-index-2rldp"
Feb 17 16:18:31 crc kubenswrapper[4672]: I0217 16:18:31.426048 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkth6\" (UniqueName: \"kubernetes.io/projected/8fa03bbf-8174-41ab-94ee-88e5d46e0bd4-kube-api-access-jkth6\") pod \"openstack-operator-index-2rldp\" (UID: \"8fa03bbf-8174-41ab-94ee-88e5d46e0bd4\") " pod="openstack-operators/openstack-operator-index-2rldp"
Feb 17 16:18:31 crc kubenswrapper[4672]: I0217 16:18:31.454308 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkth6\" (UniqueName: \"kubernetes.io/projected/8fa03bbf-8174-41ab-94ee-88e5d46e0bd4-kube-api-access-jkth6\") pod \"openstack-operator-index-2rldp\" (UID: \"8fa03bbf-8174-41ab-94ee-88e5d46e0bd4\") " pod="openstack-operators/openstack-operator-index-2rldp"
Feb 17 16:18:31 crc kubenswrapper[4672]: I0217 16:18:31.484013 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-2rldp"
Feb 17 16:18:31 crc kubenswrapper[4672]: I0217 16:18:31.739694 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-gcmvn"
Feb 17 16:18:31 crc kubenswrapper[4672]: I0217 16:18:31.768857 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2rldp"]
Feb 17 16:18:31 crc kubenswrapper[4672]: I0217 16:18:31.803086 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-gcmvn"
Feb 17 16:18:32 crc kubenswrapper[4672]: I0217 16:18:32.087952 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2rldp" event={"ID":"8fa03bbf-8174-41ab-94ee-88e5d46e0bd4","Type":"ContainerStarted","Data":"4f72e3c03eddd5970b1b3063472b3ff9101b2c9d39e8c1b1180e8366fe9c5231"}
Feb 17 16:18:35 crc kubenswrapper[4672]: I0217 16:18:35.110399 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2rldp" event={"ID":"8fa03bbf-8174-41ab-94ee-88e5d46e0bd4","Type":"ContainerStarted","Data":"24c66a493e2241587790ea358a5c544b267c97de13880367651d992b18a906d8"}
Feb 17 16:18:35 crc kubenswrapper[4672]: I0217 16:18:35.135830 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-2rldp" podStartSLOduration=1.6137940670000002 podStartE2EDuration="4.13580733s" podCreationTimestamp="2026-02-17 16:18:31 +0000 UTC" firstStartedPulling="2026-02-17 16:18:31.779977897 +0000 UTC m=+920.534066639" lastFinishedPulling="2026-02-17 16:18:34.30199116 +0000 UTC m=+923.056079902" observedRunningTime="2026-02-17 16:18:35.129819151 +0000 UTC m=+923.883907903" watchObservedRunningTime="2026-02-17 16:18:35.13580733 +0000 UTC m=+923.889896082"
Feb 17 16:18:35 crc kubenswrapper[4672]: I0217 16:18:35.329676 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-2rldp"]
Feb 17 16:18:35 crc kubenswrapper[4672]: I0217 16:18:35.932459 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-b9pq7"]
Feb 17 16:18:35 crc kubenswrapper[4672]: I0217 16:18:35.933805 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-b9pq7"
Feb 17 16:18:35 crc kubenswrapper[4672]: I0217 16:18:35.966719 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-b9pq7"]
Feb 17 16:18:36 crc kubenswrapper[4672]: I0217 16:18:36.013170 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpdtm\" (UniqueName: \"kubernetes.io/projected/ab4aefbf-1133-48d6-af32-33ffaf8d787b-kube-api-access-rpdtm\") pod \"openstack-operator-index-b9pq7\" (UID: \"ab4aefbf-1133-48d6-af32-33ffaf8d787b\") " pod="openstack-operators/openstack-operator-index-b9pq7"
Feb 17 16:18:36 crc kubenswrapper[4672]: I0217 16:18:36.114273 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpdtm\" (UniqueName: \"kubernetes.io/projected/ab4aefbf-1133-48d6-af32-33ffaf8d787b-kube-api-access-rpdtm\") pod \"openstack-operator-index-b9pq7\" (UID: \"ab4aefbf-1133-48d6-af32-33ffaf8d787b\") " pod="openstack-operators/openstack-operator-index-b9pq7"
Feb 17 16:18:36 crc kubenswrapper[4672]: I0217 16:18:36.135262 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpdtm\" (UniqueName: \"kubernetes.io/projected/ab4aefbf-1133-48d6-af32-33ffaf8d787b-kube-api-access-rpdtm\") pod \"openstack-operator-index-b9pq7\" (UID: \"ab4aefbf-1133-48d6-af32-33ffaf8d787b\") " pod="openstack-operators/openstack-operator-index-b9pq7"
Feb 17 16:18:36 crc kubenswrapper[4672]: I0217 16:18:36.263488 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-b9pq7"
Feb 17 16:18:36 crc kubenswrapper[4672]: I0217 16:18:36.739180 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-p85jn"
Feb 17 16:18:36 crc kubenswrapper[4672]: I0217 16:18:36.819091 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-b9pq7"]
Feb 17 16:18:36 crc kubenswrapper[4672]: I0217 16:18:36.844017 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-l8jng"
Feb 17 16:18:37 crc kubenswrapper[4672]: I0217 16:18:37.125735 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-b9pq7" event={"ID":"ab4aefbf-1133-48d6-af32-33ffaf8d787b","Type":"ContainerStarted","Data":"49cfa7496a1ed8b43d367cf2b19f925467962e8140d0db15cf599ec53f17b6e7"}
Feb 17 16:18:37 crc kubenswrapper[4672]: I0217 16:18:37.125770 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-b9pq7" event={"ID":"ab4aefbf-1133-48d6-af32-33ffaf8d787b","Type":"ContainerStarted","Data":"647111c01b614ceff3f8ecae545ad3a27d373bc5326f2448ac2a98a9d69d18ad"}
Feb 17 16:18:37 crc kubenswrapper[4672]: I0217 16:18:37.125815 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-2rldp" podUID="8fa03bbf-8174-41ab-94ee-88e5d46e0bd4" containerName="registry-server" containerID="cri-o://24c66a493e2241587790ea358a5c544b267c97de13880367651d992b18a906d8" gracePeriod=2
Feb 17 16:18:37 crc kubenswrapper[4672]: I0217 16:18:37.147131 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-b9pq7" podStartSLOduration=2.084766335 podStartE2EDuration="2.147107609s" podCreationTimestamp="2026-02-17 16:18:35 +0000 UTC" firstStartedPulling="2026-02-17 16:18:36.8336186 +0000 UTC m=+925.587707342" lastFinishedPulling="2026-02-17 16:18:36.895959884 +0000 UTC m=+925.650048616" observedRunningTime="2026-02-17 16:18:37.145920957 +0000 UTC m=+925.900009709" watchObservedRunningTime="2026-02-17 16:18:37.147107609 +0000 UTC m=+925.901196351"
Feb 17 16:18:37 crc kubenswrapper[4672]: I0217 16:18:37.501138 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-2rldp"
Feb 17 16:18:37 crc kubenswrapper[4672]: I0217 16:18:37.641084 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkth6\" (UniqueName: \"kubernetes.io/projected/8fa03bbf-8174-41ab-94ee-88e5d46e0bd4-kube-api-access-jkth6\") pod \"8fa03bbf-8174-41ab-94ee-88e5d46e0bd4\" (UID: \"8fa03bbf-8174-41ab-94ee-88e5d46e0bd4\") "
Feb 17 16:18:37 crc kubenswrapper[4672]: I0217 16:18:37.646391 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fa03bbf-8174-41ab-94ee-88e5d46e0bd4-kube-api-access-jkth6" (OuterVolumeSpecName: "kube-api-access-jkth6") pod "8fa03bbf-8174-41ab-94ee-88e5d46e0bd4" (UID: "8fa03bbf-8174-41ab-94ee-88e5d46e0bd4"). InnerVolumeSpecName "kube-api-access-jkth6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:18:37 crc kubenswrapper[4672]: I0217 16:18:37.742460 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkth6\" (UniqueName: \"kubernetes.io/projected/8fa03bbf-8174-41ab-94ee-88e5d46e0bd4-kube-api-access-jkth6\") on node \"crc\" DevicePath \"\""
Feb 17 16:18:38 crc kubenswrapper[4672]: I0217 16:18:38.138828 4672 generic.go:334] "Generic (PLEG): container finished" podID="8fa03bbf-8174-41ab-94ee-88e5d46e0bd4" containerID="24c66a493e2241587790ea358a5c544b267c97de13880367651d992b18a906d8" exitCode=0
Feb 17 16:18:38 crc kubenswrapper[4672]: I0217 16:18:38.138920 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2rldp" event={"ID":"8fa03bbf-8174-41ab-94ee-88e5d46e0bd4","Type":"ContainerDied","Data":"24c66a493e2241587790ea358a5c544b267c97de13880367651d992b18a906d8"}
Feb 17 16:18:38 crc kubenswrapper[4672]: I0217 16:18:38.138961 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-2rldp"
Feb 17 16:18:38 crc kubenswrapper[4672]: I0217 16:18:38.139004 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2rldp" event={"ID":"8fa03bbf-8174-41ab-94ee-88e5d46e0bd4","Type":"ContainerDied","Data":"4f72e3c03eddd5970b1b3063472b3ff9101b2c9d39e8c1b1180e8366fe9c5231"}
Feb 17 16:18:38 crc kubenswrapper[4672]: I0217 16:18:38.139042 4672 scope.go:117] "RemoveContainer" containerID="24c66a493e2241587790ea358a5c544b267c97de13880367651d992b18a906d8"
Feb 17 16:18:38 crc kubenswrapper[4672]: I0217 16:18:38.161684 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-2rldp"]
Feb 17 16:18:38 crc kubenswrapper[4672]: I0217 16:18:38.167673 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-2rldp"]
Feb 17 16:18:38 crc kubenswrapper[4672]: I0217 16:18:38.171590 4672 scope.go:117] "RemoveContainer" containerID="24c66a493e2241587790ea358a5c544b267c97de13880367651d992b18a906d8"
Feb 17 16:18:38 crc kubenswrapper[4672]: E0217 16:18:38.172003 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24c66a493e2241587790ea358a5c544b267c97de13880367651d992b18a906d8\": container with ID starting with 24c66a493e2241587790ea358a5c544b267c97de13880367651d992b18a906d8 not found: ID does not exist" containerID="24c66a493e2241587790ea358a5c544b267c97de13880367651d992b18a906d8"
Feb 17 16:18:38 crc kubenswrapper[4672]: I0217 16:18:38.172049 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24c66a493e2241587790ea358a5c544b267c97de13880367651d992b18a906d8"} err="failed to get container status \"24c66a493e2241587790ea358a5c544b267c97de13880367651d992b18a906d8\": rpc error: code = NotFound desc = could not find container \"24c66a493e2241587790ea358a5c544b267c97de13880367651d992b18a906d8\": container with ID starting with 24c66a493e2241587790ea358a5c544b267c97de13880367651d992b18a906d8 not found: ID does not exist"
Feb 17 16:18:39 crc kubenswrapper[4672]: I0217 16:18:39.952733 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fa03bbf-8174-41ab-94ee-88e5d46e0bd4" path="/var/lib/kubelet/pods/8fa03bbf-8174-41ab-94ee-88e5d46e0bd4/volumes"
Feb 17 16:18:46 crc kubenswrapper[4672]: I0217 16:18:46.264396 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-b9pq7"
Feb 17 16:18:46 crc kubenswrapper[4672]: I0217 16:18:46.265407 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-b9pq7"
Feb 17 16:18:46 crc kubenswrapper[4672]: I0217 16:18:46.311055 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-b9pq7"
Feb 17 16:18:46 crc kubenswrapper[4672]: I0217 16:18:46.746382 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-gcmvn"
Feb 17 16:18:47 crc kubenswrapper[4672]: I0217 16:18:47.260085 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-b9pq7"
Feb 17 16:18:52 crc kubenswrapper[4672]: I0217 16:18:52.397776 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933rddpx"]
Feb 17 16:18:52 crc kubenswrapper[4672]: E0217 16:18:52.399936 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fa03bbf-8174-41ab-94ee-88e5d46e0bd4" containerName="registry-server"
Feb 17 16:18:52 crc kubenswrapper[4672]: I0217 16:18:52.400088 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fa03bbf-8174-41ab-94ee-88e5d46e0bd4" containerName="registry-server"
Feb 17 16:18:52 crc kubenswrapper[4672]: I0217 16:18:52.400439 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fa03bbf-8174-41ab-94ee-88e5d46e0bd4" containerName="registry-server"
Feb 17 16:18:52 crc kubenswrapper[4672]: I0217 16:18:52.402396 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933rddpx"
Feb 17 16:18:52 crc kubenswrapper[4672]: I0217 16:18:52.409130 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-xqv5s"
Feb 17 16:18:52 crc kubenswrapper[4672]: I0217 16:18:52.409825 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933rddpx"]
Feb 17 16:18:52 crc kubenswrapper[4672]: I0217 16:18:52.482356 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhnjq\" (UniqueName: \"kubernetes.io/projected/9e1e9225-bf20-4ce6-ba45-7577a5616754-kube-api-access-jhnjq\") pod \"1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933rddpx\" (UID: \"9e1e9225-bf20-4ce6-ba45-7577a5616754\") " pod="openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933rddpx"
Feb 17 16:18:52 crc kubenswrapper[4672]: I0217 16:18:52.482419 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9e1e9225-bf20-4ce6-ba45-7577a5616754-bundle\") pod \"1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933rddpx\" (UID: \"9e1e9225-bf20-4ce6-ba45-7577a5616754\") " pod="openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933rddpx"
Feb 17 16:18:52 crc kubenswrapper[4672]: I0217 16:18:52.482590 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9e1e9225-bf20-4ce6-ba45-7577a5616754-util\") pod \"1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933rddpx\" (UID: \"9e1e9225-bf20-4ce6-ba45-7577a5616754\") " pod="openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933rddpx"
Feb 17 16:18:52 crc kubenswrapper[4672]: I0217 16:18:52.583895 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhnjq\" (UniqueName: \"kubernetes.io/projected/9e1e9225-bf20-4ce6-ba45-7577a5616754-kube-api-access-jhnjq\") pod \"1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933rddpx\" (UID: \"9e1e9225-bf20-4ce6-ba45-7577a5616754\") " pod="openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933rddpx"
Feb 17 16:18:52 crc kubenswrapper[4672]: I0217 16:18:52.583985 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9e1e9225-bf20-4ce6-ba45-7577a5616754-bundle\") pod \"1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933rddpx\" (UID: \"9e1e9225-bf20-4ce6-ba45-7577a5616754\") " pod="openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933rddpx"
Feb 17 16:18:52 crc kubenswrapper[4672]: I0217 16:18:52.584077 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9e1e9225-bf20-4ce6-ba45-7577a5616754-util\") pod \"1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933rddpx\" (UID: \"9e1e9225-bf20-4ce6-ba45-7577a5616754\") " pod="openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933rddpx"
Feb 17 16:18:52 crc kubenswrapper[4672]: I0217 16:18:52.584869 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9e1e9225-bf20-4ce6-ba45-7577a5616754-bundle\") pod \"1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933rddpx\" (UID: \"9e1e9225-bf20-4ce6-ba45-7577a5616754\") " pod="openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933rddpx"
Feb 17 16:18:52 crc kubenswrapper[4672]: I0217 16:18:52.584903 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9e1e9225-bf20-4ce6-ba45-7577a5616754-util\") pod \"1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933rddpx\" (UID: \"9e1e9225-bf20-4ce6-ba45-7577a5616754\") " pod="openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933rddpx"
Feb 17 16:18:52 crc kubenswrapper[4672]: I0217 16:18:52.611148 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhnjq\" (UniqueName: \"kubernetes.io/projected/9e1e9225-bf20-4ce6-ba45-7577a5616754-kube-api-access-jhnjq\") pod \"1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933rddpx\" (UID: \"9e1e9225-bf20-4ce6-ba45-7577a5616754\") " pod="openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933rddpx"
Feb 17 16:18:52 crc kubenswrapper[4672]: I0217 16:18:52.726643 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933rddpx"
Feb 17 16:18:53 crc kubenswrapper[4672]: I0217 16:18:53.009825 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933rddpx"]
Feb 17 16:18:53 crc kubenswrapper[4672]: I0217 16:18:53.267477 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933rddpx" event={"ID":"9e1e9225-bf20-4ce6-ba45-7577a5616754","Type":"ContainerStarted","Data":"85f40fed956b35ad8c8e28f86e8181c3adb023210ad2b42613687caee0c3caff"}
Feb 17 16:18:53 crc kubenswrapper[4672]: I0217 16:18:53.267788 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933rddpx" event={"ID":"9e1e9225-bf20-4ce6-ba45-7577a5616754","Type":"ContainerStarted","Data":"a8ef27cadd71038700ecf27db410bbc5a8aea7e3749c2b7b13cc881c2de4f7a5"}
Feb 17 16:18:54 crc kubenswrapper[4672]: I0217 16:18:54.281109 4672 generic.go:334] "Generic (PLEG): container finished" podID="9e1e9225-bf20-4ce6-ba45-7577a5616754" containerID="85f40fed956b35ad8c8e28f86e8181c3adb023210ad2b42613687caee0c3caff" exitCode=0
Feb 17 16:18:54 crc kubenswrapper[4672]: I0217 16:18:54.281188 4672 generic.go:334] "Generic (PLEG): container finished" podID="9e1e9225-bf20-4ce6-ba45-7577a5616754" containerID="035f13f863df5138f3edd62f57c1d84d80094fa360aafbcaf8154d0afbf59f48" exitCode=0
Feb 17 16:18:54 crc kubenswrapper[4672]: I0217 16:18:54.281210 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933rddpx" event={"ID":"9e1e9225-bf20-4ce6-ba45-7577a5616754","Type":"ContainerDied","Data":"85f40fed956b35ad8c8e28f86e8181c3adb023210ad2b42613687caee0c3caff"}
Feb 17 16:18:54 crc kubenswrapper[4672]: I0217 16:18:54.281293 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933rddpx" event={"ID":"9e1e9225-bf20-4ce6-ba45-7577a5616754","Type":"ContainerDied","Data":"035f13f863df5138f3edd62f57c1d84d80094fa360aafbcaf8154d0afbf59f48"}
Feb 17 16:18:54 crc kubenswrapper[4672]: E0217 16:18:54.715384 4672 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e1e9225_bf20_4ce6_ba45_7577a5616754.slice/crio-conmon-7f15cd619010de6f9aa98b274b0d4b1e7b8791af2df7c99b25248cf4fb90efe9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e1e9225_bf20_4ce6_ba45_7577a5616754.slice/crio-7f15cd619010de6f9aa98b274b0d4b1e7b8791af2df7c99b25248cf4fb90efe9.scope\": RecentStats: unable to find data in memory cache]"
Feb 17 16:18:55 crc kubenswrapper[4672]: I0217 16:18:55.297803 4672 generic.go:334] "Generic (PLEG): container finished" podID="9e1e9225-bf20-4ce6-ba45-7577a5616754" containerID="7f15cd619010de6f9aa98b274b0d4b1e7b8791af2df7c99b25248cf4fb90efe9" exitCode=0
Feb 17 16:18:55 crc kubenswrapper[4672]: I0217 16:18:55.298870 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933rddpx" event={"ID":"9e1e9225-bf20-4ce6-ba45-7577a5616754","Type":"ContainerDied","Data":"7f15cd619010de6f9aa98b274b0d4b1e7b8791af2df7c99b25248cf4fb90efe9"}
Feb 17 16:18:56 crc kubenswrapper[4672]: I0217 16:18:56.674742 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933rddpx"
Feb 17 16:18:56 crc kubenswrapper[4672]: I0217 16:18:56.763837 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9e1e9225-bf20-4ce6-ba45-7577a5616754-bundle\") pod \"9e1e9225-bf20-4ce6-ba45-7577a5616754\" (UID: \"9e1e9225-bf20-4ce6-ba45-7577a5616754\") "
Feb 17 16:18:56 crc kubenswrapper[4672]: I0217 16:18:56.763884 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhnjq\" (UniqueName: \"kubernetes.io/projected/9e1e9225-bf20-4ce6-ba45-7577a5616754-kube-api-access-jhnjq\") pod \"9e1e9225-bf20-4ce6-ba45-7577a5616754\" (UID: \"9e1e9225-bf20-4ce6-ba45-7577a5616754\") "
Feb 17 16:18:56 crc kubenswrapper[4672]: I0217 16:18:56.763967 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9e1e9225-bf20-4ce6-ba45-7577a5616754-util\") pod \"9e1e9225-bf20-4ce6-ba45-7577a5616754\" (UID: \"9e1e9225-bf20-4ce6-ba45-7577a5616754\") "
Feb 17 16:18:56 crc kubenswrapper[4672]: I0217 16:18:56.764717 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e1e9225-bf20-4ce6-ba45-7577a5616754-bundle" (OuterVolumeSpecName: "bundle") pod "9e1e9225-bf20-4ce6-ba45-7577a5616754" (UID: "9e1e9225-bf20-4ce6-ba45-7577a5616754"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:18:56 crc kubenswrapper[4672]: I0217 16:18:56.785994 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e1e9225-bf20-4ce6-ba45-7577a5616754-kube-api-access-jhnjq" (OuterVolumeSpecName: "kube-api-access-jhnjq") pod "9e1e9225-bf20-4ce6-ba45-7577a5616754" (UID: "9e1e9225-bf20-4ce6-ba45-7577a5616754"). InnerVolumeSpecName "kube-api-access-jhnjq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:18:56 crc kubenswrapper[4672]: I0217 16:18:56.786755 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e1e9225-bf20-4ce6-ba45-7577a5616754-util" (OuterVolumeSpecName: "util") pod "9e1e9225-bf20-4ce6-ba45-7577a5616754" (UID: "9e1e9225-bf20-4ce6-ba45-7577a5616754"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:18:56 crc kubenswrapper[4672]: I0217 16:18:56.865522 4672 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9e1e9225-bf20-4ce6-ba45-7577a5616754-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:18:56 crc kubenswrapper[4672]: I0217 16:18:56.865556 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhnjq\" (UniqueName: \"kubernetes.io/projected/9e1e9225-bf20-4ce6-ba45-7577a5616754-kube-api-access-jhnjq\") on node \"crc\" DevicePath \"\""
Feb 17 16:18:56 crc kubenswrapper[4672]: I0217 16:18:56.865566 4672 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9e1e9225-bf20-4ce6-ba45-7577a5616754-util\") on node \"crc\" DevicePath \"\""
Feb 17 16:18:57 crc kubenswrapper[4672]: I0217 16:18:57.316149 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933rddpx" event={"ID":"9e1e9225-bf20-4ce6-ba45-7577a5616754","Type":"ContainerDied","Data":"a8ef27cadd71038700ecf27db410bbc5a8aea7e3749c2b7b13cc881c2de4f7a5"}
Feb 17 16:18:57 crc kubenswrapper[4672]: I0217 16:18:57.316187 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8ef27cadd71038700ecf27db410bbc5a8aea7e3749c2b7b13cc881c2de4f7a5"
Feb 17 16:18:57 crc kubenswrapper[4672]: I0217 16:18:57.316210 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933rddpx"
Feb 17 16:18:57 crc kubenswrapper[4672]: I0217 16:18:57.566322 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 16:18:57 crc kubenswrapper[4672]: I0217 16:18:57.566389 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 16:19:04 crc kubenswrapper[4672]: I0217 16:19:04.870499 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5b4d8b9dd-hnxb4"]
Feb 17 16:19:04 crc kubenswrapper[4672]: E0217 16:19:04.872490 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e1e9225-bf20-4ce6-ba45-7577a5616754" containerName="extract"
Feb 17 16:19:04 crc kubenswrapper[4672]: I0217 16:19:04.872534 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e1e9225-bf20-4ce6-ba45-7577a5616754" containerName="extract"
Feb 17 16:19:04 crc kubenswrapper[4672]: E0217 16:19:04.872585 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e1e9225-bf20-4ce6-ba45-7577a5616754" containerName="pull"
Feb 17 16:19:04 crc kubenswrapper[4672]: I0217 16:19:04.872601 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e1e9225-bf20-4ce6-ba45-7577a5616754" containerName="pull"
Feb 17 16:19:04 crc kubenswrapper[4672]: E0217 16:19:04.872619 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e1e9225-bf20-4ce6-ba45-7577a5616754"
containerName="util" Feb 17 16:19:04 crc kubenswrapper[4672]: I0217 16:19:04.872630 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e1e9225-bf20-4ce6-ba45-7577a5616754" containerName="util" Feb 17 16:19:04 crc kubenswrapper[4672]: I0217 16:19:04.872886 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e1e9225-bf20-4ce6-ba45-7577a5616754" containerName="extract" Feb 17 16:19:04 crc kubenswrapper[4672]: I0217 16:19:04.873850 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5b4d8b9dd-hnxb4" Feb 17 16:19:04 crc kubenswrapper[4672]: I0217 16:19:04.875981 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-7v6z2" Feb 17 16:19:04 crc kubenswrapper[4672]: I0217 16:19:04.883819 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5b4d8b9dd-hnxb4"] Feb 17 16:19:04 crc kubenswrapper[4672]: I0217 16:19:04.980764 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m84cl\" (UniqueName: \"kubernetes.io/projected/4a8609a9-e813-46c0-ad2b-64d97ee6c368-kube-api-access-m84cl\") pod \"openstack-operator-controller-init-5b4d8b9dd-hnxb4\" (UID: \"4a8609a9-e813-46c0-ad2b-64d97ee6c368\") " pod="openstack-operators/openstack-operator-controller-init-5b4d8b9dd-hnxb4" Feb 17 16:19:05 crc kubenswrapper[4672]: I0217 16:19:05.082246 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m84cl\" (UniqueName: \"kubernetes.io/projected/4a8609a9-e813-46c0-ad2b-64d97ee6c368-kube-api-access-m84cl\") pod \"openstack-operator-controller-init-5b4d8b9dd-hnxb4\" (UID: \"4a8609a9-e813-46c0-ad2b-64d97ee6c368\") " pod="openstack-operators/openstack-operator-controller-init-5b4d8b9dd-hnxb4" Feb 17 16:19:05 crc kubenswrapper[4672]: I0217 
16:19:05.102711 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m84cl\" (UniqueName: \"kubernetes.io/projected/4a8609a9-e813-46c0-ad2b-64d97ee6c368-kube-api-access-m84cl\") pod \"openstack-operator-controller-init-5b4d8b9dd-hnxb4\" (UID: \"4a8609a9-e813-46c0-ad2b-64d97ee6c368\") " pod="openstack-operators/openstack-operator-controller-init-5b4d8b9dd-hnxb4" Feb 17 16:19:05 crc kubenswrapper[4672]: I0217 16:19:05.212045 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5b4d8b9dd-hnxb4" Feb 17 16:19:05 crc kubenswrapper[4672]: I0217 16:19:05.661153 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5b4d8b9dd-hnxb4"] Feb 17 16:19:05 crc kubenswrapper[4672]: I0217 16:19:05.669200 4672 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:19:06 crc kubenswrapper[4672]: I0217 16:19:06.389037 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5b4d8b9dd-hnxb4" event={"ID":"4a8609a9-e813-46c0-ad2b-64d97ee6c368","Type":"ContainerStarted","Data":"607045c322f7f3daa9e03201ddc6429b17fd8a893995ae38ad55d137abfcb4e9"} Feb 17 16:19:10 crc kubenswrapper[4672]: I0217 16:19:10.426719 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5b4d8b9dd-hnxb4" event={"ID":"4a8609a9-e813-46c0-ad2b-64d97ee6c368","Type":"ContainerStarted","Data":"f19f8a0f2ca7c67fbbfc6e4a263af5c6802e61b7b25e92787220f3206f9bfcc9"} Feb 17 16:19:10 crc kubenswrapper[4672]: I0217 16:19:10.427191 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5b4d8b9dd-hnxb4" Feb 17 16:19:10 crc kubenswrapper[4672]: I0217 16:19:10.454372 4672 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack-operators/openstack-operator-controller-init-5b4d8b9dd-hnxb4" podStartSLOduration=2.315990405 podStartE2EDuration="6.454352326s" podCreationTimestamp="2026-02-17 16:19:04 +0000 UTC" firstStartedPulling="2026-02-17 16:19:05.668969313 +0000 UTC m=+954.423058045" lastFinishedPulling="2026-02-17 16:19:09.807331204 +0000 UTC m=+958.561419966" observedRunningTime="2026-02-17 16:19:10.450832513 +0000 UTC m=+959.204921245" watchObservedRunningTime="2026-02-17 16:19:10.454352326 +0000 UTC m=+959.208441058" Feb 17 16:19:15 crc kubenswrapper[4672]: I0217 16:19:15.214432 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5b4d8b9dd-hnxb4" Feb 17 16:19:27 crc kubenswrapper[4672]: I0217 16:19:27.566115 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:19:27 crc kubenswrapper[4672]: I0217 16:19:27.566931 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:19:27 crc kubenswrapper[4672]: I0217 16:19:27.567001 4672 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" Feb 17 16:19:27 crc kubenswrapper[4672]: I0217 16:19:27.568290 4672 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"15aa63f02ee4cd25df0940b558fcaa7bcd640deeb41ec99378884cac7403f757"} 
pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:19:27 crc kubenswrapper[4672]: I0217 16:19:27.568401 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" containerID="cri-o://15aa63f02ee4cd25df0940b558fcaa7bcd640deeb41ec99378884cac7403f757" gracePeriod=600 Feb 17 16:19:28 crc kubenswrapper[4672]: I0217 16:19:28.568006 4672 generic.go:334] "Generic (PLEG): container finished" podID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerID="15aa63f02ee4cd25df0940b558fcaa7bcd640deeb41ec99378884cac7403f757" exitCode=0 Feb 17 16:19:28 crc kubenswrapper[4672]: I0217 16:19:28.568060 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerDied","Data":"15aa63f02ee4cd25df0940b558fcaa7bcd640deeb41ec99378884cac7403f757"} Feb 17 16:19:28 crc kubenswrapper[4672]: I0217 16:19:28.568553 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerStarted","Data":"6e5c44fe403356546654090676cb1aa54373e380600ecb186fac59fca3fb0ed3"} Feb 17 16:19:28 crc kubenswrapper[4672]: I0217 16:19:28.568580 4672 scope.go:117] "RemoveContainer" containerID="a296cbbb1d99319f19a06f749b112d1a27b0616f6d5daa613b86b37f30657f19" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.474587 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-67vt9"] Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.476190 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-67vt9" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.479692 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-6mvns" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.480608 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-4p9xd"] Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.481376 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-4p9xd" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.483357 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-b987c" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.489158 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-67vt9"] Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.498344 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-4p9xd"] Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.512489 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-n8sch"] Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.513277 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-n8sch" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.516646 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-r6fng" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.533056 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-n8sch"] Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.543597 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-h75ck"] Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.544679 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-h75ck" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.547857 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-xbhh7" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.565683 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-h75ck"] Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.570566 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-kdl2h"] Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.571361 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kdl2h" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.576879 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-w84g8" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.587975 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-kdl2h"] Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.593623 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9sv2x"] Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.595455 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9sv2x" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.598703 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-56tqk" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.608817 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9sv2x"] Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.618791 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn4sm\" (UniqueName: \"kubernetes.io/projected/8747c08b-53c8-45dc-98b0-124e58820cdb-kube-api-access-nn4sm\") pod \"designate-operator-controller-manager-6d8bf5c495-n8sch\" (UID: \"8747c08b-53c8-45dc-98b0-124e58820cdb\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-n8sch" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.618844 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76c7d\" 
(UniqueName: \"kubernetes.io/projected/697b2176-1abc-4887-9ba9-32e6e667a8a0-kube-api-access-76c7d\") pod \"cinder-operator-controller-manager-5d946d989d-4p9xd\" (UID: \"697b2176-1abc-4887-9ba9-32e6e667a8a0\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-4p9xd" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.618885 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5n9c\" (UniqueName: \"kubernetes.io/projected/f18754ba-fbb5-4741-a801-03326fd7714d-kube-api-access-m5n9c\") pod \"barbican-operator-controller-manager-868647ff47-67vt9\" (UID: \"f18754ba-fbb5-4741-a801-03326fd7714d\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-67vt9" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.641575 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-bqchl"] Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.642468 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-bqchl" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.652998 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.653095 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-nx85j" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.655181 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-bqchl"] Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.679727 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-4slcx"] Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.680604 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-4slcx" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.682465 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-7tgjn"] Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.683234 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-7tgjn" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.684734 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-d58nn" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.687030 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-5b9nm" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.693698 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-4slcx"] Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.704160 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-bmw4m"] Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.704976 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-bmw4m" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.712888 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-tsqmp" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.716688 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-7tgjn"] Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.720437 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76c7d\" (UniqueName: \"kubernetes.io/projected/697b2176-1abc-4887-9ba9-32e6e667a8a0-kube-api-access-76c7d\") pod \"cinder-operator-controller-manager-5d946d989d-4p9xd\" (UID: \"697b2176-1abc-4887-9ba9-32e6e667a8a0\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-4p9xd" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.720492 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4mls\" (UniqueName: \"kubernetes.io/projected/fa9ca2ad-545b-4125-a472-0aa969f560fd-kube-api-access-l4mls\") pod \"glance-operator-controller-manager-77987464f4-h75ck\" (UID: \"fa9ca2ad-545b-4125-a472-0aa969f560fd\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-h75ck" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.720571 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5n9c\" (UniqueName: \"kubernetes.io/projected/f18754ba-fbb5-4741-a801-03326fd7714d-kube-api-access-m5n9c\") pod \"barbican-operator-controller-manager-868647ff47-67vt9\" (UID: \"f18754ba-fbb5-4741-a801-03326fd7714d\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-67vt9" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.720617 4672 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v952v\" (UniqueName: \"kubernetes.io/projected/246842cd-06e9-4793-96a5-9b0dad79ce08-kube-api-access-v952v\") pod \"heat-operator-controller-manager-69f49c598c-kdl2h\" (UID: \"246842cd-06e9-4793-96a5-9b0dad79ce08\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kdl2h" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.720640 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkf7f\" (UniqueName: \"kubernetes.io/projected/f554f9dc-7778-4116-8d09-205f2c3671fd-kube-api-access-xkf7f\") pod \"horizon-operator-controller-manager-5b9b8895d5-9sv2x\" (UID: \"f554f9dc-7778-4116-8d09-205f2c3671fd\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9sv2x" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.720671 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn4sm\" (UniqueName: \"kubernetes.io/projected/8747c08b-53c8-45dc-98b0-124e58820cdb-kube-api-access-nn4sm\") pod \"designate-operator-controller-manager-6d8bf5c495-n8sch\" (UID: \"8747c08b-53c8-45dc-98b0-124e58820cdb\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-n8sch" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.722801 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-rlvlw"] Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.724398 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rlvlw" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.740815 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-b84zr" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.761366 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn4sm\" (UniqueName: \"kubernetes.io/projected/8747c08b-53c8-45dc-98b0-124e58820cdb-kube-api-access-nn4sm\") pod \"designate-operator-controller-manager-6d8bf5c495-n8sch\" (UID: \"8747c08b-53c8-45dc-98b0-124e58820cdb\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-n8sch" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.764279 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5n9c\" (UniqueName: \"kubernetes.io/projected/f18754ba-fbb5-4741-a801-03326fd7714d-kube-api-access-m5n9c\") pod \"barbican-operator-controller-manager-868647ff47-67vt9\" (UID: \"f18754ba-fbb5-4741-a801-03326fd7714d\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-67vt9" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.769227 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-rlvlw"] Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.787668 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76c7d\" (UniqueName: \"kubernetes.io/projected/697b2176-1abc-4887-9ba9-32e6e667a8a0-kube-api-access-76c7d\") pod \"cinder-operator-controller-manager-5d946d989d-4p9xd\" (UID: \"697b2176-1abc-4887-9ba9-32e6e667a8a0\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-4p9xd" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.808891 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-67vt9" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.809122 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-bmw4m"] Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.824589 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-4p9xd" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.834125 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4mls\" (UniqueName: \"kubernetes.io/projected/fa9ca2ad-545b-4125-a472-0aa969f560fd-kube-api-access-l4mls\") pod \"glance-operator-controller-manager-77987464f4-h75ck\" (UID: \"fa9ca2ad-545b-4125-a472-0aa969f560fd\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-h75ck" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.834223 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spf4f\" (UniqueName: \"kubernetes.io/projected/ebef7502-75af-4d09-98eb-b3fbfb5bf0ad-kube-api-access-spf4f\") pod \"manila-operator-controller-manager-54f6768c69-bmw4m\" (UID: \"ebef7502-75af-4d09-98eb-b3fbfb5bf0ad\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-bmw4m" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.834267 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8396e964-bc62-4fe3-9a1e-b965b0ca30f5-cert\") pod \"infra-operator-controller-manager-79d975b745-bqchl\" (UID: \"8396e964-bc62-4fe3-9a1e-b965b0ca30f5\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bqchl" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.834316 4672 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnrdp\" (UniqueName: \"kubernetes.io/projected/8396e964-bc62-4fe3-9a1e-b965b0ca30f5-kube-api-access-wnrdp\") pod \"infra-operator-controller-manager-79d975b745-bqchl\" (UID: \"8396e964-bc62-4fe3-9a1e-b965b0ca30f5\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bqchl" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.834399 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5k2m\" (UniqueName: \"kubernetes.io/projected/a97a493d-5f21-4965-b7ad-aff4cffcfb37-kube-api-access-q5k2m\") pod \"keystone-operator-controller-manager-b4d948c87-7tgjn\" (UID: \"a97a493d-5f21-4965-b7ad-aff4cffcfb37\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-7tgjn" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.834472 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v952v\" (UniqueName: \"kubernetes.io/projected/246842cd-06e9-4793-96a5-9b0dad79ce08-kube-api-access-v952v\") pod \"heat-operator-controller-manager-69f49c598c-kdl2h\" (UID: \"246842cd-06e9-4793-96a5-9b0dad79ce08\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kdl2h" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.834531 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkf7f\" (UniqueName: \"kubernetes.io/projected/f554f9dc-7778-4116-8d09-205f2c3671fd-kube-api-access-xkf7f\") pod \"horizon-operator-controller-manager-5b9b8895d5-9sv2x\" (UID: \"f554f9dc-7778-4116-8d09-205f2c3671fd\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9sv2x" Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.834661 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnb2d\" (UniqueName: 
\"kubernetes.io/projected/481c13a0-8cdd-4753-9bae-31d536cd4779-kube-api-access-hnb2d\") pod \"ironic-operator-controller-manager-554564d7fc-4slcx\" (UID: \"481c13a0-8cdd-4753-9bae-31d536cd4779\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-4slcx"
Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.854887 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-n8sch"
Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.877709 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-mw56t"]
Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.878737 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v952v\" (UniqueName: \"kubernetes.io/projected/246842cd-06e9-4793-96a5-9b0dad79ce08-kube-api-access-v952v\") pod \"heat-operator-controller-manager-69f49c598c-kdl2h\" (UID: \"246842cd-06e9-4793-96a5-9b0dad79ce08\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kdl2h"
Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.882855 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-mw56t"
Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.890297 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4mls\" (UniqueName: \"kubernetes.io/projected/fa9ca2ad-545b-4125-a472-0aa969f560fd-kube-api-access-l4mls\") pod \"glance-operator-controller-manager-77987464f4-h75ck\" (UID: \"fa9ca2ad-545b-4125-a472-0aa969f560fd\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-h75ck"
Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.891378 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-fmjxt"
Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.902897 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kdl2h"
Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.908816 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-gxfcs"]
Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.909837 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gxfcs"
Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.914762 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-sgjbb"
Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.916742 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkf7f\" (UniqueName: \"kubernetes.io/projected/f554f9dc-7778-4116-8d09-205f2c3671fd-kube-api-access-xkf7f\") pod \"horizon-operator-controller-manager-5b9b8895d5-9sv2x\" (UID: \"f554f9dc-7778-4116-8d09-205f2c3671fd\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9sv2x"
Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.919747 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9sv2x"
Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.939562 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-mw56t"]
Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.953264 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spf4f\" (UniqueName: \"kubernetes.io/projected/ebef7502-75af-4d09-98eb-b3fbfb5bf0ad-kube-api-access-spf4f\") pod \"manila-operator-controller-manager-54f6768c69-bmw4m\" (UID: \"ebef7502-75af-4d09-98eb-b3fbfb5bf0ad\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-bmw4m"
Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.953328 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8396e964-bc62-4fe3-9a1e-b965b0ca30f5-cert\") pod \"infra-operator-controller-manager-79d975b745-bqchl\" (UID: \"8396e964-bc62-4fe3-9a1e-b965b0ca30f5\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bqchl"
Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.953353 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnrdp\" (UniqueName: \"kubernetes.io/projected/8396e964-bc62-4fe3-9a1e-b965b0ca30f5-kube-api-access-wnrdp\") pod \"infra-operator-controller-manager-79d975b745-bqchl\" (UID: \"8396e964-bc62-4fe3-9a1e-b965b0ca30f5\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bqchl"
Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.953396 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5k2m\" (UniqueName: \"kubernetes.io/projected/a97a493d-5f21-4965-b7ad-aff4cffcfb37-kube-api-access-q5k2m\") pod \"keystone-operator-controller-manager-b4d948c87-7tgjn\" (UID: \"a97a493d-5f21-4965-b7ad-aff4cffcfb37\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-7tgjn"
Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.953429 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggzqb\" (UniqueName: \"kubernetes.io/projected/820e1fb1-9bbe-47e8-a2d5-6e45235244b4-kube-api-access-ggzqb\") pod \"mariadb-operator-controller-manager-6994f66f48-rlvlw\" (UID: \"820e1fb1-9bbe-47e8-a2d5-6e45235244b4\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rlvlw"
Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.953526 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnb2d\" (UniqueName: \"kubernetes.io/projected/481c13a0-8cdd-4753-9bae-31d536cd4779-kube-api-access-hnb2d\") pod \"ironic-operator-controller-manager-554564d7fc-4slcx\" (UID: \"481c13a0-8cdd-4753-9bae-31d536cd4779\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-4slcx"
Feb 17 16:19:53 crc kubenswrapper[4672]: E0217 16:19:53.954098 4672 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 17 16:19:53 crc kubenswrapper[4672]: E0217 16:19:53.954149 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8396e964-bc62-4fe3-9a1e-b965b0ca30f5-cert podName:8396e964-bc62-4fe3-9a1e-b965b0ca30f5 nodeName:}" failed. No retries permitted until 2026-02-17 16:19:54.45413114 +0000 UTC m=+1003.208219882 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8396e964-bc62-4fe3-9a1e-b965b0ca30f5-cert") pod "infra-operator-controller-manager-79d975b745-bqchl" (UID: "8396e964-bc62-4fe3-9a1e-b965b0ca30f5") : secret "infra-operator-webhook-server-cert" not found
Feb 17 16:19:53 crc kubenswrapper[4672]: I0217 16:19:53.995351 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnb2d\" (UniqueName: \"kubernetes.io/projected/481c13a0-8cdd-4753-9bae-31d536cd4779-kube-api-access-hnb2d\") pod \"ironic-operator-controller-manager-554564d7fc-4slcx\" (UID: \"481c13a0-8cdd-4753-9bae-31d536cd4779\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-4slcx"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.005818 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnrdp\" (UniqueName: \"kubernetes.io/projected/8396e964-bc62-4fe3-9a1e-b965b0ca30f5-kube-api-access-wnrdp\") pod \"infra-operator-controller-manager-79d975b745-bqchl\" (UID: \"8396e964-bc62-4fe3-9a1e-b965b0ca30f5\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bqchl"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.005113 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-4slcx"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.009204 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5k2m\" (UniqueName: \"kubernetes.io/projected/a97a493d-5f21-4965-b7ad-aff4cffcfb37-kube-api-access-q5k2m\") pod \"keystone-operator-controller-manager-b4d948c87-7tgjn\" (UID: \"a97a493d-5f21-4965-b7ad-aff4cffcfb37\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-7tgjn"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.016173 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spf4f\" (UniqueName: \"kubernetes.io/projected/ebef7502-75af-4d09-98eb-b3fbfb5bf0ad-kube-api-access-spf4f\") pod \"manila-operator-controller-manager-54f6768c69-bmw4m\" (UID: \"ebef7502-75af-4d09-98eb-b3fbfb5bf0ad\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-bmw4m"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.035587 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-gxfcs"]
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.039968 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-7tgjn"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.055205 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbmdq\" (UniqueName: \"kubernetes.io/projected/ac8ba5c6-2841-4a02-8707-54be52de56f1-kube-api-access-hbmdq\") pod \"nova-operator-controller-manager-567668f5cf-gxfcs\" (UID: \"ac8ba5c6-2841-4a02-8707-54be52de56f1\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gxfcs"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.055262 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggzqb\" (UniqueName: \"kubernetes.io/projected/820e1fb1-9bbe-47e8-a2d5-6e45235244b4-kube-api-access-ggzqb\") pod \"mariadb-operator-controller-manager-6994f66f48-rlvlw\" (UID: \"820e1fb1-9bbe-47e8-a2d5-6e45235244b4\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rlvlw"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.055353 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjp4g\" (UniqueName: \"kubernetes.io/projected/a1ac6199-2cd8-48e9-9303-39fba36f1369-kube-api-access-qjp4g\") pod \"neutron-operator-controller-manager-64ddbf8bb-mw56t\" (UID: \"a1ac6199-2cd8-48e9-9303-39fba36f1369\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-mw56t"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.081188 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggzqb\" (UniqueName: \"kubernetes.io/projected/820e1fb1-9bbe-47e8-a2d5-6e45235244b4-kube-api-access-ggzqb\") pod \"mariadb-operator-controller-manager-6994f66f48-rlvlw\" (UID: \"820e1fb1-9bbe-47e8-a2d5-6e45235244b4\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rlvlw"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.081256 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-6xtt5"]
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.082227 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-6xtt5"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.088895 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-zh2p8"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.109203 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-bmw4m"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.111004 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-6xtt5"]
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.132598 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-ht2sv"]
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.133738 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-ht2sv"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.148001 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-9trzv"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.151433 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvzfsn"]
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.152330 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvzfsn"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.154408 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-zlpzd"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.156559 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.158046 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjp4g\" (UniqueName: \"kubernetes.io/projected/a1ac6199-2cd8-48e9-9303-39fba36f1369-kube-api-access-qjp4g\") pod \"neutron-operator-controller-manager-64ddbf8bb-mw56t\" (UID: \"a1ac6199-2cd8-48e9-9303-39fba36f1369\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-mw56t"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.158106 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbmdq\" (UniqueName: \"kubernetes.io/projected/ac8ba5c6-2841-4a02-8707-54be52de56f1-kube-api-access-hbmdq\") pod \"nova-operator-controller-manager-567668f5cf-gxfcs\" (UID: \"ac8ba5c6-2841-4a02-8707-54be52de56f1\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gxfcs"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.177088 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-wck86"]
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.178095 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-wck86"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.178966 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-h75ck"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.187810 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-ht2sv"]
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.202806 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rlvlw"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.211309 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-kq8dp"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.211564 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-rjn5h"]
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.215821 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjp4g\" (UniqueName: \"kubernetes.io/projected/a1ac6199-2cd8-48e9-9303-39fba36f1369-kube-api-access-qjp4g\") pod \"neutron-operator-controller-manager-64ddbf8bb-mw56t\" (UID: \"a1ac6199-2cd8-48e9-9303-39fba36f1369\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-mw56t"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.216306 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-rjn5h"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.215394 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbmdq\" (UniqueName: \"kubernetes.io/projected/ac8ba5c6-2841-4a02-8707-54be52de56f1-kube-api-access-hbmdq\") pod \"nova-operator-controller-manager-567668f5cf-gxfcs\" (UID: \"ac8ba5c6-2841-4a02-8707-54be52de56f1\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gxfcs"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.223108 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-tjn6z"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.248943 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-rjn5h"]
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.261117 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9krds\" (UniqueName: \"kubernetes.io/projected/ec59ef76-a144-4870-b714-4ddaeae5b741-kube-api-access-9krds\") pod \"placement-operator-controller-manager-8497b45c89-wck86\" (UID: \"ec59ef76-a144-4870-b714-4ddaeae5b741\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-wck86"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.261197 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqg66\" (UniqueName: \"kubernetes.io/projected/cd50b560-8522-43e7-bbb9-10c5097ee367-kube-api-access-tqg66\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvzfsn\" (UID: \"cd50b560-8522-43e7-bbb9-10c5097ee367\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvzfsn"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.261245 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd50b560-8522-43e7-bbb9-10c5097ee367-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvzfsn\" (UID: \"cd50b560-8522-43e7-bbb9-10c5097ee367\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvzfsn"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.261294 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gxkg\" (UniqueName: \"kubernetes.io/projected/9ea06f37-5c5b-45f1-b6ba-fb72f5e8f86a-kube-api-access-4gxkg\") pod \"ovn-operator-controller-manager-d44cf6b75-ht2sv\" (UID: \"9ea06f37-5c5b-45f1-b6ba-fb72f5e8f86a\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-ht2sv"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.261333 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mzp8\" (UniqueName: \"kubernetes.io/projected/c0c82835-0153-4ce1-be6a-9b748ced0671-kube-api-access-2mzp8\") pod \"octavia-operator-controller-manager-69f8888797-6xtt5\" (UID: \"c0c82835-0153-4ce1-be6a-9b748ced0671\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-6xtt5"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.286544 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvzfsn"]
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.310168 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-wck86"]
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.334884 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-mw56t"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.338021 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5d7c6cd576-c5g8f"]
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.340446 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5d7c6cd576-c5g8f"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.352027 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-g5gcz"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.363675 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqg66\" (UniqueName: \"kubernetes.io/projected/cd50b560-8522-43e7-bbb9-10c5097ee367-kube-api-access-tqg66\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvzfsn\" (UID: \"cd50b560-8522-43e7-bbb9-10c5097ee367\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvzfsn"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.363745 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd50b560-8522-43e7-bbb9-10c5097ee367-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvzfsn\" (UID: \"cd50b560-8522-43e7-bbb9-10c5097ee367\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvzfsn"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.363788 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gxkg\" (UniqueName: \"kubernetes.io/projected/9ea06f37-5c5b-45f1-b6ba-fb72f5e8f86a-kube-api-access-4gxkg\") pod \"ovn-operator-controller-manager-d44cf6b75-ht2sv\" (UID: \"9ea06f37-5c5b-45f1-b6ba-fb72f5e8f86a\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-ht2sv"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.363838 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mzp8\" (UniqueName: \"kubernetes.io/projected/c0c82835-0153-4ce1-be6a-9b748ced0671-kube-api-access-2mzp8\") pod \"octavia-operator-controller-manager-69f8888797-6xtt5\" (UID: \"c0c82835-0153-4ce1-be6a-9b748ced0671\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-6xtt5"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.363909 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfrj2\" (UniqueName: \"kubernetes.io/projected/4d12b414-59e2-49aa-9463-ae2061b1aa80-kube-api-access-mfrj2\") pod \"swift-operator-controller-manager-68f46476f-rjn5h\" (UID: \"4d12b414-59e2-49aa-9463-ae2061b1aa80\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-rjn5h"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.363933 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9krds\" (UniqueName: \"kubernetes.io/projected/ec59ef76-a144-4870-b714-4ddaeae5b741-kube-api-access-9krds\") pod \"placement-operator-controller-manager-8497b45c89-wck86\" (UID: \"ec59ef76-a144-4870-b714-4ddaeae5b741\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-wck86"
Feb 17 16:19:54 crc kubenswrapper[4672]: E0217 16:19:54.364391 4672 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 17 16:19:54 crc kubenswrapper[4672]: E0217 16:19:54.364444 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd50b560-8522-43e7-bbb9-10c5097ee367-cert podName:cd50b560-8522-43e7-bbb9-10c5097ee367 nodeName:}" failed. No retries permitted until 2026-02-17 16:19:54.864428469 +0000 UTC m=+1003.618517201 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cd50b560-8522-43e7-bbb9-10c5097ee367-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cvzfsn" (UID: "cd50b560-8522-43e7-bbb9-10c5097ee367") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.368315 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5d7c6cd576-c5g8f"]
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.376669 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gxfcs"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.392312 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9krds\" (UniqueName: \"kubernetes.io/projected/ec59ef76-a144-4870-b714-4ddaeae5b741-kube-api-access-9krds\") pod \"placement-operator-controller-manager-8497b45c89-wck86\" (UID: \"ec59ef76-a144-4870-b714-4ddaeae5b741\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-wck86"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.404148 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gxkg\" (UniqueName: \"kubernetes.io/projected/9ea06f37-5c5b-45f1-b6ba-fb72f5e8f86a-kube-api-access-4gxkg\") pod \"ovn-operator-controller-manager-d44cf6b75-ht2sv\" (UID: \"9ea06f37-5c5b-45f1-b6ba-fb72f5e8f86a\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-ht2sv"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.415048 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mzp8\" (UniqueName: \"kubernetes.io/projected/c0c82835-0153-4ce1-be6a-9b748ced0671-kube-api-access-2mzp8\") pod \"octavia-operator-controller-manager-69f8888797-6xtt5\" (UID: \"c0c82835-0153-4ce1-be6a-9b748ced0671\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-6xtt5"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.416866 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqg66\" (UniqueName: \"kubernetes.io/projected/cd50b560-8522-43e7-bbb9-10c5097ee367-kube-api-access-tqg66\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvzfsn\" (UID: \"cd50b560-8522-43e7-bbb9-10c5097ee367\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvzfsn"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.441279 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-m45mw"]
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.442064 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-m45mw"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.444731 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-4kkww"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.460461 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-m45mw"]
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.466798 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8396e964-bc62-4fe3-9a1e-b965b0ca30f5-cert\") pod \"infra-operator-controller-manager-79d975b745-bqchl\" (UID: \"8396e964-bc62-4fe3-9a1e-b965b0ca30f5\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bqchl"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.466913 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfrj2\" (UniqueName: \"kubernetes.io/projected/4d12b414-59e2-49aa-9463-ae2061b1aa80-kube-api-access-mfrj2\") pod \"swift-operator-controller-manager-68f46476f-rjn5h\" (UID: \"4d12b414-59e2-49aa-9463-ae2061b1aa80\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-rjn5h"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.466995 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcmvt\" (UniqueName: \"kubernetes.io/projected/761b3282-6d8a-4613-8191-fe2e37822d19-kube-api-access-tcmvt\") pod \"telemetry-operator-controller-manager-5d7c6cd576-c5g8f\" (UID: \"761b3282-6d8a-4613-8191-fe2e37822d19\") " pod="openstack-operators/telemetry-operator-controller-manager-5d7c6cd576-c5g8f"
Feb 17 16:19:54 crc kubenswrapper[4672]: E0217 16:19:54.467349 4672 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 17 16:19:54 crc kubenswrapper[4672]: E0217 16:19:54.467400 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8396e964-bc62-4fe3-9a1e-b965b0ca30f5-cert podName:8396e964-bc62-4fe3-9a1e-b965b0ca30f5 nodeName:}" failed. No retries permitted until 2026-02-17 16:19:55.467382776 +0000 UTC m=+1004.221471508 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8396e964-bc62-4fe3-9a1e-b965b0ca30f5-cert") pod "infra-operator-controller-manager-79d975b745-bqchl" (UID: "8396e964-bc62-4fe3-9a1e-b965b0ca30f5") : secret "infra-operator-webhook-server-cert" not found
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.476571 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-x7798"]
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.477180 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-6xtt5"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.477424 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-x7798"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.482246 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-jcx7t"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.485192 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-x7798"]
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.494398 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfrj2\" (UniqueName: \"kubernetes.io/projected/4d12b414-59e2-49aa-9463-ae2061b1aa80-kube-api-access-mfrj2\") pod \"swift-operator-controller-manager-68f46476f-rjn5h\" (UID: \"4d12b414-59e2-49aa-9463-ae2061b1aa80\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-rjn5h"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.568960 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw5rs\" (UniqueName: \"kubernetes.io/projected/8b28f180-8f69-4141-827f-2eb95e876b84-kube-api-access-rw5rs\") pod \"watcher-operator-controller-manager-5db88f68c-x7798\" (UID: \"8b28f180-8f69-4141-827f-2eb95e876b84\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-x7798"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.569032 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcmvt\" (UniqueName: \"kubernetes.io/projected/761b3282-6d8a-4613-8191-fe2e37822d19-kube-api-access-tcmvt\") pod \"telemetry-operator-controller-manager-5d7c6cd576-c5g8f\" (UID: \"761b3282-6d8a-4613-8191-fe2e37822d19\") " pod="openstack-operators/telemetry-operator-controller-manager-5d7c6cd576-c5g8f"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.569143 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nhvk\" (UniqueName: \"kubernetes.io/projected/72370045-528c-4239-8c6f-24f435b3736b-kube-api-access-2nhvk\") pod \"test-operator-controller-manager-7866795846-m45mw\" (UID: \"72370045-528c-4239-8c6f-24f435b3736b\") " pod="openstack-operators/test-operator-controller-manager-7866795846-m45mw"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.574971 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-ht2sv"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.575702 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-66554dbdcf-jm2nh"]
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.579039 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-jm2nh"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.581544 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.582393 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.583925 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-wck86"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.599088 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-xzcph"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.603350 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcmvt\" (UniqueName: \"kubernetes.io/projected/761b3282-6d8a-4613-8191-fe2e37822d19-kube-api-access-tcmvt\") pod \"telemetry-operator-controller-manager-5d7c6cd576-c5g8f\" (UID: \"761b3282-6d8a-4613-8191-fe2e37822d19\") " pod="openstack-operators/telemetry-operator-controller-manager-5d7c6cd576-c5g8f"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.606971 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-66554dbdcf-jm2nh"]
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.622720 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-rjn5h"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.625302 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wcmsp"]
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.631646 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wcmsp"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.634817 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wcmsp"]
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.636669 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-jtbqg"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.672559 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rw5rs\" (UniqueName: \"kubernetes.io/projected/8b28f180-8f69-4141-827f-2eb95e876b84-kube-api-access-rw5rs\") pod \"watcher-operator-controller-manager-5db88f68c-x7798\" (UID: \"8b28f180-8f69-4141-827f-2eb95e876b84\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-x7798"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.672627 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd266\" (UniqueName: \"kubernetes.io/projected/6be94508-f499-48af-b1c8-50a773fb53d1-kube-api-access-jd266\") pod \"openstack-operator-controller-manager-66554dbdcf-jm2nh\" (UID: \"6be94508-f499-48af-b1c8-50a773fb53d1\") " pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-jm2nh"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.672728 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6be94508-f499-48af-b1c8-50a773fb53d1-webhook-certs\") pod \"openstack-operator-controller-manager-66554dbdcf-jm2nh\" (UID: \"6be94508-f499-48af-b1c8-50a773fb53d1\") " pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-jm2nh"
Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.672790 4672
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6be94508-f499-48af-b1c8-50a773fb53d1-metrics-certs\") pod \"openstack-operator-controller-manager-66554dbdcf-jm2nh\" (UID: \"6be94508-f499-48af-b1c8-50a773fb53d1\") " pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-jm2nh" Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.672867 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nhvk\" (UniqueName: \"kubernetes.io/projected/72370045-528c-4239-8c6f-24f435b3736b-kube-api-access-2nhvk\") pod \"test-operator-controller-manager-7866795846-m45mw\" (UID: \"72370045-528c-4239-8c6f-24f435b3736b\") " pod="openstack-operators/test-operator-controller-manager-7866795846-m45mw" Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.685641 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5d7c6cd576-c5g8f" Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.705178 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nhvk\" (UniqueName: \"kubernetes.io/projected/72370045-528c-4239-8c6f-24f435b3736b-kube-api-access-2nhvk\") pod \"test-operator-controller-manager-7866795846-m45mw\" (UID: \"72370045-528c-4239-8c6f-24f435b3736b\") " pod="openstack-operators/test-operator-controller-manager-7866795846-m45mw" Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.710216 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw5rs\" (UniqueName: \"kubernetes.io/projected/8b28f180-8f69-4141-827f-2eb95e876b84-kube-api-access-rw5rs\") pod \"watcher-operator-controller-manager-5db88f68c-x7798\" (UID: \"8b28f180-8f69-4141-827f-2eb95e876b84\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-x7798" Feb 17 16:19:54 crc 
kubenswrapper[4672]: I0217 16:19:54.768866 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-m45mw" Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.778330 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8w5s\" (UniqueName: \"kubernetes.io/projected/92490ad7-6905-4c57-9d64-e7b1acbb44eb-kube-api-access-x8w5s\") pod \"rabbitmq-cluster-operator-manager-668c99d594-wcmsp\" (UID: \"92490ad7-6905-4c57-9d64-e7b1acbb44eb\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wcmsp" Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.778395 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jd266\" (UniqueName: \"kubernetes.io/projected/6be94508-f499-48af-b1c8-50a773fb53d1-kube-api-access-jd266\") pod \"openstack-operator-controller-manager-66554dbdcf-jm2nh\" (UID: \"6be94508-f499-48af-b1c8-50a773fb53d1\") " pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-jm2nh" Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.778464 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6be94508-f499-48af-b1c8-50a773fb53d1-webhook-certs\") pod \"openstack-operator-controller-manager-66554dbdcf-jm2nh\" (UID: \"6be94508-f499-48af-b1c8-50a773fb53d1\") " pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-jm2nh" Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.778497 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6be94508-f499-48af-b1c8-50a773fb53d1-metrics-certs\") pod \"openstack-operator-controller-manager-66554dbdcf-jm2nh\" (UID: \"6be94508-f499-48af-b1c8-50a773fb53d1\") " 
pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-jm2nh" Feb 17 16:19:54 crc kubenswrapper[4672]: E0217 16:19:54.778628 4672 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 16:19:54 crc kubenswrapper[4672]: E0217 16:19:54.778685 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6be94508-f499-48af-b1c8-50a773fb53d1-metrics-certs podName:6be94508-f499-48af-b1c8-50a773fb53d1 nodeName:}" failed. No retries permitted until 2026-02-17 16:19:55.278666162 +0000 UTC m=+1004.032754894 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6be94508-f499-48af-b1c8-50a773fb53d1-metrics-certs") pod "openstack-operator-controller-manager-66554dbdcf-jm2nh" (UID: "6be94508-f499-48af-b1c8-50a773fb53d1") : secret "metrics-server-cert" not found Feb 17 16:19:54 crc kubenswrapper[4672]: E0217 16:19:54.778995 4672 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 16:19:54 crc kubenswrapper[4672]: E0217 16:19:54.779027 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6be94508-f499-48af-b1c8-50a773fb53d1-webhook-certs podName:6be94508-f499-48af-b1c8-50a773fb53d1 nodeName:}" failed. No retries permitted until 2026-02-17 16:19:55.279020412 +0000 UTC m=+1004.033109144 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6be94508-f499-48af-b1c8-50a773fb53d1-webhook-certs") pod "openstack-operator-controller-manager-66554dbdcf-jm2nh" (UID: "6be94508-f499-48af-b1c8-50a773fb53d1") : secret "webhook-server-cert" not found Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.810844 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-x7798" Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.825346 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jd266\" (UniqueName: \"kubernetes.io/projected/6be94508-f499-48af-b1c8-50a773fb53d1-kube-api-access-jd266\") pod \"openstack-operator-controller-manager-66554dbdcf-jm2nh\" (UID: \"6be94508-f499-48af-b1c8-50a773fb53d1\") " pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-jm2nh" Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.882113 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8w5s\" (UniqueName: \"kubernetes.io/projected/92490ad7-6905-4c57-9d64-e7b1acbb44eb-kube-api-access-x8w5s\") pod \"rabbitmq-cluster-operator-manager-668c99d594-wcmsp\" (UID: \"92490ad7-6905-4c57-9d64-e7b1acbb44eb\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wcmsp" Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.882189 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd50b560-8522-43e7-bbb9-10c5097ee367-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvzfsn\" (UID: \"cd50b560-8522-43e7-bbb9-10c5097ee367\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvzfsn" Feb 17 16:19:54 crc kubenswrapper[4672]: E0217 16:19:54.882325 4672 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:19:54 crc kubenswrapper[4672]: E0217 16:19:54.882371 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd50b560-8522-43e7-bbb9-10c5097ee367-cert podName:cd50b560-8522-43e7-bbb9-10c5097ee367 nodeName:}" failed. 
No retries permitted until 2026-02-17 16:19:55.882356699 +0000 UTC m=+1004.636445421 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cd50b560-8522-43e7-bbb9-10c5097ee367-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cvzfsn" (UID: "cd50b560-8522-43e7-bbb9-10c5097ee367") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.902592 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8w5s\" (UniqueName: \"kubernetes.io/projected/92490ad7-6905-4c57-9d64-e7b1acbb44eb-kube-api-access-x8w5s\") pod \"rabbitmq-cluster-operator-manager-668c99d594-wcmsp\" (UID: \"92490ad7-6905-4c57-9d64-e7b1acbb44eb\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wcmsp" Feb 17 16:19:54 crc kubenswrapper[4672]: I0217 16:19:54.998033 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wcmsp" Feb 17 16:19:55 crc kubenswrapper[4672]: I0217 16:19:55.297239 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6be94508-f499-48af-b1c8-50a773fb53d1-webhook-certs\") pod \"openstack-operator-controller-manager-66554dbdcf-jm2nh\" (UID: \"6be94508-f499-48af-b1c8-50a773fb53d1\") " pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-jm2nh" Feb 17 16:19:55 crc kubenswrapper[4672]: I0217 16:19:55.297648 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6be94508-f499-48af-b1c8-50a773fb53d1-metrics-certs\") pod \"openstack-operator-controller-manager-66554dbdcf-jm2nh\" (UID: \"6be94508-f499-48af-b1c8-50a773fb53d1\") " pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-jm2nh" Feb 17 16:19:55 
crc kubenswrapper[4672]: E0217 16:19:55.297479 4672 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 16:19:55 crc kubenswrapper[4672]: E0217 16:19:55.297885 4672 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 16:19:55 crc kubenswrapper[4672]: E0217 16:19:55.297972 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6be94508-f499-48af-b1c8-50a773fb53d1-webhook-certs podName:6be94508-f499-48af-b1c8-50a773fb53d1 nodeName:}" failed. No retries permitted until 2026-02-17 16:19:56.297950078 +0000 UTC m=+1005.052038810 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6be94508-f499-48af-b1c8-50a773fb53d1-webhook-certs") pod "openstack-operator-controller-manager-66554dbdcf-jm2nh" (UID: "6be94508-f499-48af-b1c8-50a773fb53d1") : secret "webhook-server-cert" not found Feb 17 16:19:55 crc kubenswrapper[4672]: E0217 16:19:55.298009 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6be94508-f499-48af-b1c8-50a773fb53d1-metrics-certs podName:6be94508-f499-48af-b1c8-50a773fb53d1 nodeName:}" failed. No retries permitted until 2026-02-17 16:19:56.29799449 +0000 UTC m=+1005.052083222 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6be94508-f499-48af-b1c8-50a773fb53d1-metrics-certs") pod "openstack-operator-controller-manager-66554dbdcf-jm2nh" (UID: "6be94508-f499-48af-b1c8-50a773fb53d1") : secret "metrics-server-cert" not found Feb 17 16:19:55 crc kubenswrapper[4672]: I0217 16:19:55.333081 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-4slcx"] Feb 17 16:19:55 crc kubenswrapper[4672]: I0217 16:19:55.343045 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9sv2x"] Feb 17 16:19:55 crc kubenswrapper[4672]: W0217 16:19:55.344417 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod481c13a0_8cdd_4753_9bae_31d536cd4779.slice/crio-e8f7a2cfffefa88215d26583618f4f04c39e03029514246b0ade0b507312f0a0 WatchSource:0}: Error finding container e8f7a2cfffefa88215d26583618f4f04c39e03029514246b0ade0b507312f0a0: Status 404 returned error can't find the container with id e8f7a2cfffefa88215d26583618f4f04c39e03029514246b0ade0b507312f0a0 Feb 17 16:19:55 crc kubenswrapper[4672]: I0217 16:19:55.358286 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-67vt9"] Feb 17 16:19:55 crc kubenswrapper[4672]: I0217 16:19:55.368584 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-7tgjn"] Feb 17 16:19:55 crc kubenswrapper[4672]: I0217 16:19:55.375460 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-kdl2h"] Feb 17 16:19:55 crc kubenswrapper[4672]: I0217 16:19:55.393988 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-n8sch"] Feb 17 16:19:55 crc kubenswrapper[4672]: W0217 16:19:55.402664 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf554f9dc_7778_4116_8d09_205f2c3671fd.slice/crio-e2c85d8f44bd09884e8cf6eb7d53fd047e7fdca439e991aaf0e2fa028eb17342 WatchSource:0}: Error finding container e2c85d8f44bd09884e8cf6eb7d53fd047e7fdca439e991aaf0e2fa028eb17342: Status 404 returned error can't find the container with id e2c85d8f44bd09884e8cf6eb7d53fd047e7fdca439e991aaf0e2fa028eb17342 Feb 17 16:19:55 crc kubenswrapper[4672]: I0217 16:19:55.504653 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8396e964-bc62-4fe3-9a1e-b965b0ca30f5-cert\") pod \"infra-operator-controller-manager-79d975b745-bqchl\" (UID: \"8396e964-bc62-4fe3-9a1e-b965b0ca30f5\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bqchl" Feb 17 16:19:55 crc kubenswrapper[4672]: E0217 16:19:55.505766 4672 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 16:19:55 crc kubenswrapper[4672]: E0217 16:19:55.505889 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8396e964-bc62-4fe3-9a1e-b965b0ca30f5-cert podName:8396e964-bc62-4fe3-9a1e-b965b0ca30f5 nodeName:}" failed. No retries permitted until 2026-02-17 16:19:57.505870986 +0000 UTC m=+1006.259959718 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8396e964-bc62-4fe3-9a1e-b965b0ca30f5-cert") pod "infra-operator-controller-manager-79d975b745-bqchl" (UID: "8396e964-bc62-4fe3-9a1e-b965b0ca30f5") : secret "infra-operator-webhook-server-cert" not found Feb 17 16:19:55 crc kubenswrapper[4672]: I0217 16:19:55.578588 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-4p9xd"] Feb 17 16:19:55 crc kubenswrapper[4672]: I0217 16:19:55.600539 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-rlvlw"] Feb 17 16:19:55 crc kubenswrapper[4672]: I0217 16:19:55.630813 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-h75ck"] Feb 17 16:19:55 crc kubenswrapper[4672]: I0217 16:19:55.655302 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-bmw4m"] Feb 17 16:19:55 crc kubenswrapper[4672]: I0217 16:19:55.711258 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-wck86"] Feb 17 16:19:55 crc kubenswrapper[4672]: W0217 16:19:55.758664 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1ac6199_2cd8_48e9_9303_39fba36f1369.slice/crio-2d81c05d0543f9891c44708184111ca9f1e5bca29ec25192a427973a3f3eb2dd WatchSource:0}: Error finding container 2d81c05d0543f9891c44708184111ca9f1e5bca29ec25192a427973a3f3eb2dd: Status 404 returned error can't find the container with id 2d81c05d0543f9891c44708184111ca9f1e5bca29ec25192a427973a3f3eb2dd Feb 17 16:19:55 crc kubenswrapper[4672]: E0217 16:19:55.761002 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qjp4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-64ddbf8bb-mw56t_openstack-operators(a1ac6199-2cd8-48e9-9303-39fba36f1369): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 16:19:55 crc kubenswrapper[4672]: E0217 16:19:55.762241 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-mw56t" podUID="a1ac6199-2cd8-48e9-9303-39fba36f1369" Feb 17 16:19:55 crc kubenswrapper[4672]: I0217 16:19:55.778711 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-6xtt5"] Feb 17 16:19:55 crc kubenswrapper[4672]: I0217 16:19:55.781010 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-mw56t"] Feb 17 16:19:55 crc kubenswrapper[4672]: I0217 16:19:55.849831 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5d7c6cd576-c5g8f"] Feb 17 16:19:55 crc kubenswrapper[4672]: E0217 16:19:55.857587 4672 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.18:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tcmvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5d7c6cd576-c5g8f_openstack-operators(761b3282-6d8a-4613-8191-fe2e37822d19): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 16:19:55 crc kubenswrapper[4672]: E0217 16:19:55.858983 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-5d7c6cd576-c5g8f" podUID="761b3282-6d8a-4613-8191-fe2e37822d19" Feb 17 16:19:55 crc kubenswrapper[4672]: I0217 16:19:55.889720 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-m45mw"] Feb 17 16:19:55 crc kubenswrapper[4672]: E0217 16:19:55.893916 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4gxkg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-d44cf6b75-ht2sv_openstack-operators(9ea06f37-5c5b-45f1-b6ba-fb72f5e8f86a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 16:19:55 crc kubenswrapper[4672]: E0217 16:19:55.896709 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-ht2sv" podUID="9ea06f37-5c5b-45f1-b6ba-fb72f5e8f86a" Feb 17 16:19:55 crc kubenswrapper[4672]: I0217 16:19:55.899547 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-rjn5h"] Feb 17 16:19:55 crc kubenswrapper[4672]: W0217 16:19:55.901921 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72370045_528c_4239_8c6f_24f435b3736b.slice/crio-0666db4a99488e98db074c6601c7df007550ddee6c21d9a21bab92ed7333b1c1 WatchSource:0}: Error finding container 0666db4a99488e98db074c6601c7df007550ddee6c21d9a21bab92ed7333b1c1: Status 404 returned error can't find the container with id 
0666db4a99488e98db074c6601c7df007550ddee6c21d9a21bab92ed7333b1c1 Feb 17 16:19:55 crc kubenswrapper[4672]: I0217 16:19:55.905860 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-wck86" event={"ID":"ec59ef76-a144-4870-b714-4ddaeae5b741","Type":"ContainerStarted","Data":"e699f41f6559da598701a0757960ca4b4643ed87bfdc23614184347051f7d177"} Feb 17 16:19:55 crc kubenswrapper[4672]: E0217 16:19:55.907029 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2nhvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-m45mw_openstack-operators(72370045-528c-4239-8c6f-24f435b3736b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 16:19:55 crc kubenswrapper[4672]: E0217 16:19:55.910038 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7866795846-m45mw" podUID="72370045-528c-4239-8c6f-24f435b3736b" Feb 17 16:19:55 crc kubenswrapper[4672]: I0217 16:19:55.911390 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-bmw4m" event={"ID":"ebef7502-75af-4d09-98eb-b3fbfb5bf0ad","Type":"ContainerStarted","Data":"5e58804094d819748b93b4ceddbad5eeaad742d75cfc6f7d688082a63857715f"} Feb 17 16:19:55 crc kubenswrapper[4672]: I0217 16:19:55.912364 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd50b560-8522-43e7-bbb9-10c5097ee367-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvzfsn\" (UID: 
\"cd50b560-8522-43e7-bbb9-10c5097ee367\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvzfsn" Feb 17 16:19:55 crc kubenswrapper[4672]: E0217 16:19:55.916099 4672 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:19:55 crc kubenswrapper[4672]: E0217 16:19:55.916143 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd50b560-8522-43e7-bbb9-10c5097ee367-cert podName:cd50b560-8522-43e7-bbb9-10c5097ee367 nodeName:}" failed. No retries permitted until 2026-02-17 16:19:57.916124593 +0000 UTC m=+1006.670213325 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cd50b560-8522-43e7-bbb9-10c5097ee367-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cvzfsn" (UID: "cd50b560-8522-43e7-bbb9-10c5097ee367") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:19:55 crc kubenswrapper[4672]: E0217 16:19:55.932198 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x8w5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-wcmsp_openstack-operators(92490ad7-6905-4c57-9d64-e7b1acbb44eb): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 16:19:55 crc kubenswrapper[4672]: E0217 16:19:55.932316 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rw5rs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-x7798_openstack-operators(8b28f180-8f69-4141-827f-2eb95e876b84): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 16:19:55 crc kubenswrapper[4672]: I0217 16:19:55.932942 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-ht2sv"] Feb 17 16:19:55 crc kubenswrapper[4672]: I0217 16:19:55.933012 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-7tgjn" event={"ID":"a97a493d-5f21-4965-b7ad-aff4cffcfb37","Type":"ContainerStarted","Data":"7f4e3511752f4f4c8000ae5a12bd86d1c627a37c98abfccf46c465c1753b406c"} Feb 17 16:19:55 crc kubenswrapper[4672]: E0217 16:19:55.933348 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wcmsp" podUID="92490ad7-6905-4c57-9d64-e7b1acbb44eb" Feb 17 16:19:55 crc kubenswrapper[4672]: E0217 16:19:55.933387 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-x7798" podUID="8b28f180-8f69-4141-827f-2eb95e876b84" Feb 17 16:19:55 crc kubenswrapper[4672]: E0217 16:19:55.937164 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mfrj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-rjn5h_openstack-operators(4d12b414-59e2-49aa-9463-ae2061b1aa80): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 16:19:55 crc kubenswrapper[4672]: E0217 16:19:55.938279 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-rjn5h" podUID="4d12b414-59e2-49aa-9463-ae2061b1aa80" Feb 17 16:19:55 crc kubenswrapper[4672]: I0217 16:19:55.942019 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-x7798"] Feb 17 16:19:55 crc kubenswrapper[4672]: E0217 16:19:55.948716 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hbmdq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-gxfcs_openstack-operators(ac8ba5c6-2841-4a02-8707-54be52de56f1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 16:19:55 crc kubenswrapper[4672]: E0217 16:19:55.950696 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gxfcs" podUID="ac8ba5c6-2841-4a02-8707-54be52de56f1" Feb 17 16:19:55 crc kubenswrapper[4672]: I0217 16:19:55.963078 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-4slcx" event={"ID":"481c13a0-8cdd-4753-9bae-31d536cd4779","Type":"ContainerStarted","Data":"e8f7a2cfffefa88215d26583618f4f04c39e03029514246b0ade0b507312f0a0"} Feb 17 16:19:55 crc kubenswrapper[4672]: I0217 16:19:55.963125 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-gxfcs"] Feb 17 16:19:55 crc kubenswrapper[4672]: I0217 16:19:55.963164 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wcmsp"] Feb 17 16:19:55 crc kubenswrapper[4672]: I0217 16:19:55.963178 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9sv2x" event={"ID":"f554f9dc-7778-4116-8d09-205f2c3671fd","Type":"ContainerStarted","Data":"e2c85d8f44bd09884e8cf6eb7d53fd047e7fdca439e991aaf0e2fa028eb17342"} Feb 17 16:19:55 crc kubenswrapper[4672]: I0217 16:19:55.963191 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-h75ck" event={"ID":"fa9ca2ad-545b-4125-a472-0aa969f560fd","Type":"ContainerStarted","Data":"ba989aa7841839a5b5ca4e81e8334a8132e296b1e24bdb55d05ddbe37e161b5a"} Feb 17 16:19:55 crc kubenswrapper[4672]: I0217 16:19:55.963201 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kdl2h" event={"ID":"246842cd-06e9-4793-96a5-9b0dad79ce08","Type":"ContainerStarted","Data":"1c2b57839743b1644e714e4877b161005e2ee2559b2676cb32577d958e5cc27a"} Feb 17 16:19:55 crc kubenswrapper[4672]: I0217 16:19:55.964165 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-n8sch" event={"ID":"8747c08b-53c8-45dc-98b0-124e58820cdb","Type":"ContainerStarted","Data":"2b4e7577018ddfb6c6adf757bf2c3c9d90c27aee04f1324e43d63dc990540ecf"} Feb 17 16:19:55 crc kubenswrapper[4672]: I0217 16:19:55.965291 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-mw56t" event={"ID":"a1ac6199-2cd8-48e9-9303-39fba36f1369","Type":"ContainerStarted","Data":"2d81c05d0543f9891c44708184111ca9f1e5bca29ec25192a427973a3f3eb2dd"} Feb 17 16:19:55 crc kubenswrapper[4672]: E0217 16:19:55.966818 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-mw56t" podUID="a1ac6199-2cd8-48e9-9303-39fba36f1369" Feb 17 16:19:55 crc kubenswrapper[4672]: E0217 16:19:55.968868 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.18:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5d7c6cd576-c5g8f" podUID="761b3282-6d8a-4613-8191-fe2e37822d19" Feb 17 16:19:55 crc kubenswrapper[4672]: I0217 16:19:55.969628 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5d7c6cd576-c5g8f" event={"ID":"761b3282-6d8a-4613-8191-fe2e37822d19","Type":"ContainerStarted","Data":"5cc65061f61d8eebc37e0850b184678ebdd704a7c6b1da3b6a1639a272e5a960"} Feb 17 16:19:55 crc kubenswrapper[4672]: I0217 16:19:55.970483 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rlvlw" event={"ID":"820e1fb1-9bbe-47e8-a2d5-6e45235244b4","Type":"ContainerStarted","Data":"0d7b27cb91517526a9e6342b5e95f71740cf6c3696eb8b4a3c82994389fe3f14"} Feb 17 16:19:55 crc kubenswrapper[4672]: I0217 16:19:55.972372 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-4p9xd" event={"ID":"697b2176-1abc-4887-9ba9-32e6e667a8a0","Type":"ContainerStarted","Data":"739e91cbef0142b6c2462d5eb380399dd1ede2613505fd8fc3ca4fe04baa4779"} Feb 17 16:19:55 crc kubenswrapper[4672]: I0217 16:19:55.973529 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/barbican-operator-controller-manager-868647ff47-67vt9" event={"ID":"f18754ba-fbb5-4741-a801-03326fd7714d","Type":"ContainerStarted","Data":"b4c188bcdc6eed37668919d0ed8b14e7beec23214c76b51d7760ee71e4080009"} Feb 17 16:19:55 crc kubenswrapper[4672]: I0217 16:19:55.975293 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-ht2sv" event={"ID":"9ea06f37-5c5b-45f1-b6ba-fb72f5e8f86a","Type":"ContainerStarted","Data":"ea4c7898169182dd629287bf0818d6c0ad94340d2378c572e10138ce192dc893"} Feb 17 16:19:55 crc kubenswrapper[4672]: I0217 16:19:55.976502 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-6xtt5" event={"ID":"c0c82835-0153-4ce1-be6a-9b748ced0671","Type":"ContainerStarted","Data":"5145c34550120d4e283b962f9776753047cc5d36a3c11cfaa72f760ba4e0b5e0"} Feb 17 16:19:55 crc kubenswrapper[4672]: E0217 16:19:55.976655 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-ht2sv" podUID="9ea06f37-5c5b-45f1-b6ba-fb72f5e8f86a" Feb 17 16:19:56 crc kubenswrapper[4672]: I0217 16:19:56.323380 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6be94508-f499-48af-b1c8-50a773fb53d1-webhook-certs\") pod \"openstack-operator-controller-manager-66554dbdcf-jm2nh\" (UID: \"6be94508-f499-48af-b1c8-50a773fb53d1\") " pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-jm2nh" Feb 17 16:19:56 crc kubenswrapper[4672]: I0217 16:19:56.323433 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/6be94508-f499-48af-b1c8-50a773fb53d1-metrics-certs\") pod \"openstack-operator-controller-manager-66554dbdcf-jm2nh\" (UID: \"6be94508-f499-48af-b1c8-50a773fb53d1\") " pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-jm2nh" Feb 17 16:19:56 crc kubenswrapper[4672]: E0217 16:19:56.323542 4672 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 16:19:56 crc kubenswrapper[4672]: E0217 16:19:56.323595 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6be94508-f499-48af-b1c8-50a773fb53d1-metrics-certs podName:6be94508-f499-48af-b1c8-50a773fb53d1 nodeName:}" failed. No retries permitted until 2026-02-17 16:19:58.323578807 +0000 UTC m=+1007.077667539 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6be94508-f499-48af-b1c8-50a773fb53d1-metrics-certs") pod "openstack-operator-controller-manager-66554dbdcf-jm2nh" (UID: "6be94508-f499-48af-b1c8-50a773fb53d1") : secret "metrics-server-cert" not found Feb 17 16:19:56 crc kubenswrapper[4672]: E0217 16:19:56.323594 4672 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 16:19:56 crc kubenswrapper[4672]: E0217 16:19:56.323635 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6be94508-f499-48af-b1c8-50a773fb53d1-webhook-certs podName:6be94508-f499-48af-b1c8-50a773fb53d1 nodeName:}" failed. No retries permitted until 2026-02-17 16:19:58.323623278 +0000 UTC m=+1007.077712010 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6be94508-f499-48af-b1c8-50a773fb53d1-webhook-certs") pod "openstack-operator-controller-manager-66554dbdcf-jm2nh" (UID: "6be94508-f499-48af-b1c8-50a773fb53d1") : secret "webhook-server-cert" not found Feb 17 16:19:57 crc kubenswrapper[4672]: I0217 16:19:57.002632 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-m45mw" event={"ID":"72370045-528c-4239-8c6f-24f435b3736b","Type":"ContainerStarted","Data":"0666db4a99488e98db074c6601c7df007550ddee6c21d9a21bab92ed7333b1c1"} Feb 17 16:19:57 crc kubenswrapper[4672]: E0217 16:19:57.004835 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-m45mw" podUID="72370045-528c-4239-8c6f-24f435b3736b" Feb 17 16:19:57 crc kubenswrapper[4672]: I0217 16:19:57.005547 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-x7798" event={"ID":"8b28f180-8f69-4141-827f-2eb95e876b84","Type":"ContainerStarted","Data":"c8dccdd66459ef5dec2694c09880934a23b626e720101f318611c2f1826b66ba"} Feb 17 16:19:57 crc kubenswrapper[4672]: E0217 16:19:57.006520 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-x7798" podUID="8b28f180-8f69-4141-827f-2eb95e876b84" Feb 17 16:19:57 crc kubenswrapper[4672]: I0217 16:19:57.007285 4672 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wcmsp" event={"ID":"92490ad7-6905-4c57-9d64-e7b1acbb44eb","Type":"ContainerStarted","Data":"f12c83804997f8a2ec92cebbb4976ff0a5abbf287b473b660764899ef5dde8ff"} Feb 17 16:19:57 crc kubenswrapper[4672]: E0217 16:19:57.026181 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wcmsp" podUID="92490ad7-6905-4c57-9d64-e7b1acbb44eb" Feb 17 16:19:57 crc kubenswrapper[4672]: I0217 16:19:57.027536 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-rjn5h" event={"ID":"4d12b414-59e2-49aa-9463-ae2061b1aa80","Type":"ContainerStarted","Data":"806519682f58188c198cf586af085cc0e79c765c698da46ea982e71748aecdd6"} Feb 17 16:19:57 crc kubenswrapper[4672]: E0217 16:19:57.028765 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-rjn5h" podUID="4d12b414-59e2-49aa-9463-ae2061b1aa80" Feb 17 16:19:57 crc kubenswrapper[4672]: I0217 16:19:57.051394 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gxfcs" event={"ID":"ac8ba5c6-2841-4a02-8707-54be52de56f1","Type":"ContainerStarted","Data":"5d62cbbfec921d00f4b9c1b64bef84b1e2379cbef355440ab4e26e719c346f79"} Feb 17 16:19:57 crc kubenswrapper[4672]: E0217 16:19:57.057880 4672 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-ht2sv" podUID="9ea06f37-5c5b-45f1-b6ba-fb72f5e8f86a" Feb 17 16:19:57 crc kubenswrapper[4672]: E0217 16:19:57.057970 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-mw56t" podUID="a1ac6199-2cd8-48e9-9303-39fba36f1369" Feb 17 16:19:57 crc kubenswrapper[4672]: E0217 16:19:57.058011 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gxfcs" podUID="ac8ba5c6-2841-4a02-8707-54be52de56f1" Feb 17 16:19:57 crc kubenswrapper[4672]: E0217 16:19:57.058061 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.18:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5d7c6cd576-c5g8f" podUID="761b3282-6d8a-4613-8191-fe2e37822d19" Feb 17 16:19:57 crc kubenswrapper[4672]: I0217 16:19:57.557156 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/8396e964-bc62-4fe3-9a1e-b965b0ca30f5-cert\") pod \"infra-operator-controller-manager-79d975b745-bqchl\" (UID: \"8396e964-bc62-4fe3-9a1e-b965b0ca30f5\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bqchl" Feb 17 16:19:57 crc kubenswrapper[4672]: E0217 16:19:57.557585 4672 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 16:19:57 crc kubenswrapper[4672]: E0217 16:19:57.557633 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8396e964-bc62-4fe3-9a1e-b965b0ca30f5-cert podName:8396e964-bc62-4fe3-9a1e-b965b0ca30f5 nodeName:}" failed. No retries permitted until 2026-02-17 16:20:01.557619447 +0000 UTC m=+1010.311708179 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8396e964-bc62-4fe3-9a1e-b965b0ca30f5-cert") pod "infra-operator-controller-manager-79d975b745-bqchl" (UID: "8396e964-bc62-4fe3-9a1e-b965b0ca30f5") : secret "infra-operator-webhook-server-cert" not found Feb 17 16:19:57 crc kubenswrapper[4672]: I0217 16:19:57.972106 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd50b560-8522-43e7-bbb9-10c5097ee367-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvzfsn\" (UID: \"cd50b560-8522-43e7-bbb9-10c5097ee367\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvzfsn" Feb 17 16:19:57 crc kubenswrapper[4672]: E0217 16:19:57.972321 4672 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:19:57 crc kubenswrapper[4672]: E0217 16:19:57.972364 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd50b560-8522-43e7-bbb9-10c5097ee367-cert 
podName:cd50b560-8522-43e7-bbb9-10c5097ee367 nodeName:}" failed. No retries permitted until 2026-02-17 16:20:01.972351323 +0000 UTC m=+1010.726440055 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cd50b560-8522-43e7-bbb9-10c5097ee367-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cvzfsn" (UID: "cd50b560-8522-43e7-bbb9-10c5097ee367") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:19:58 crc kubenswrapper[4672]: E0217 16:19:58.060824 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gxfcs" podUID="ac8ba5c6-2841-4a02-8707-54be52de56f1" Feb 17 16:19:58 crc kubenswrapper[4672]: E0217 16:19:58.060907 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-m45mw" podUID="72370045-528c-4239-8c6f-24f435b3736b" Feb 17 16:19:58 crc kubenswrapper[4672]: E0217 16:19:58.061015 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-rjn5h" podUID="4d12b414-59e2-49aa-9463-ae2061b1aa80" Feb 17 16:19:58 crc kubenswrapper[4672]: E0217 16:19:58.061678 4672 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wcmsp" podUID="92490ad7-6905-4c57-9d64-e7b1acbb44eb" Feb 17 16:19:58 crc kubenswrapper[4672]: E0217 16:19:58.061732 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-x7798" podUID="8b28f180-8f69-4141-827f-2eb95e876b84" Feb 17 16:19:58 crc kubenswrapper[4672]: I0217 16:19:58.409062 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6be94508-f499-48af-b1c8-50a773fb53d1-webhook-certs\") pod \"openstack-operator-controller-manager-66554dbdcf-jm2nh\" (UID: \"6be94508-f499-48af-b1c8-50a773fb53d1\") " pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-jm2nh" Feb 17 16:19:58 crc kubenswrapper[4672]: I0217 16:19:58.409128 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6be94508-f499-48af-b1c8-50a773fb53d1-metrics-certs\") pod \"openstack-operator-controller-manager-66554dbdcf-jm2nh\" (UID: \"6be94508-f499-48af-b1c8-50a773fb53d1\") " pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-jm2nh" Feb 17 16:19:58 crc kubenswrapper[4672]: E0217 16:19:58.409251 4672 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 16:19:58 crc kubenswrapper[4672]: E0217 16:19:58.409263 4672 
secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 16:19:58 crc kubenswrapper[4672]: E0217 16:19:58.409311 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6be94508-f499-48af-b1c8-50a773fb53d1-metrics-certs podName:6be94508-f499-48af-b1c8-50a773fb53d1 nodeName:}" failed. No retries permitted until 2026-02-17 16:20:02.409292848 +0000 UTC m=+1011.163381580 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6be94508-f499-48af-b1c8-50a773fb53d1-metrics-certs") pod "openstack-operator-controller-manager-66554dbdcf-jm2nh" (UID: "6be94508-f499-48af-b1c8-50a773fb53d1") : secret "metrics-server-cert" not found Feb 17 16:19:58 crc kubenswrapper[4672]: E0217 16:19:58.409346 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6be94508-f499-48af-b1c8-50a773fb53d1-webhook-certs podName:6be94508-f499-48af-b1c8-50a773fb53d1 nodeName:}" failed. No retries permitted until 2026-02-17 16:20:02.409326049 +0000 UTC m=+1011.163414851 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6be94508-f499-48af-b1c8-50a773fb53d1-webhook-certs") pod "openstack-operator-controller-manager-66554dbdcf-jm2nh" (UID: "6be94508-f499-48af-b1c8-50a773fb53d1") : secret "webhook-server-cert" not found Feb 17 16:20:01 crc kubenswrapper[4672]: I0217 16:20:01.561786 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8396e964-bc62-4fe3-9a1e-b965b0ca30f5-cert\") pod \"infra-operator-controller-manager-79d975b745-bqchl\" (UID: \"8396e964-bc62-4fe3-9a1e-b965b0ca30f5\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bqchl" Feb 17 16:20:01 crc kubenswrapper[4672]: E0217 16:20:01.562697 4672 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 16:20:01 crc kubenswrapper[4672]: E0217 16:20:01.562776 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8396e964-bc62-4fe3-9a1e-b965b0ca30f5-cert podName:8396e964-bc62-4fe3-9a1e-b965b0ca30f5 nodeName:}" failed. No retries permitted until 2026-02-17 16:20:09.562752243 +0000 UTC m=+1018.316841005 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8396e964-bc62-4fe3-9a1e-b965b0ca30f5-cert") pod "infra-operator-controller-manager-79d975b745-bqchl" (UID: "8396e964-bc62-4fe3-9a1e-b965b0ca30f5") : secret "infra-operator-webhook-server-cert" not found Feb 17 16:20:02 crc kubenswrapper[4672]: I0217 16:20:02.072842 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd50b560-8522-43e7-bbb9-10c5097ee367-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvzfsn\" (UID: \"cd50b560-8522-43e7-bbb9-10c5097ee367\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvzfsn" Feb 17 16:20:02 crc kubenswrapper[4672]: E0217 16:20:02.073984 4672 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:20:02 crc kubenswrapper[4672]: E0217 16:20:02.074050 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd50b560-8522-43e7-bbb9-10c5097ee367-cert podName:cd50b560-8522-43e7-bbb9-10c5097ee367 nodeName:}" failed. No retries permitted until 2026-02-17 16:20:10.074033417 +0000 UTC m=+1018.828122159 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cd50b560-8522-43e7-bbb9-10c5097ee367-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cvzfsn" (UID: "cd50b560-8522-43e7-bbb9-10c5097ee367") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:20:02 crc kubenswrapper[4672]: I0217 16:20:02.480044 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6be94508-f499-48af-b1c8-50a773fb53d1-metrics-certs\") pod \"openstack-operator-controller-manager-66554dbdcf-jm2nh\" (UID: \"6be94508-f499-48af-b1c8-50a773fb53d1\") " pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-jm2nh" Feb 17 16:20:02 crc kubenswrapper[4672]: I0217 16:20:02.480194 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6be94508-f499-48af-b1c8-50a773fb53d1-webhook-certs\") pod \"openstack-operator-controller-manager-66554dbdcf-jm2nh\" (UID: \"6be94508-f499-48af-b1c8-50a773fb53d1\") " pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-jm2nh" Feb 17 16:20:02 crc kubenswrapper[4672]: E0217 16:20:02.480282 4672 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 16:20:02 crc kubenswrapper[4672]: E0217 16:20:02.480326 4672 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 16:20:02 crc kubenswrapper[4672]: E0217 16:20:02.480376 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6be94508-f499-48af-b1c8-50a773fb53d1-webhook-certs podName:6be94508-f499-48af-b1c8-50a773fb53d1 nodeName:}" failed. No retries permitted until 2026-02-17 16:20:10.48036124 +0000 UTC m=+1019.234449972 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6be94508-f499-48af-b1c8-50a773fb53d1-webhook-certs") pod "openstack-operator-controller-manager-66554dbdcf-jm2nh" (UID: "6be94508-f499-48af-b1c8-50a773fb53d1") : secret "webhook-server-cert" not found Feb 17 16:20:02 crc kubenswrapper[4672]: E0217 16:20:02.480422 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6be94508-f499-48af-b1c8-50a773fb53d1-metrics-certs podName:6be94508-f499-48af-b1c8-50a773fb53d1 nodeName:}" failed. No retries permitted until 2026-02-17 16:20:10.480385751 +0000 UTC m=+1019.234474523 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6be94508-f499-48af-b1c8-50a773fb53d1-metrics-certs") pod "openstack-operator-controller-manager-66554dbdcf-jm2nh" (UID: "6be94508-f499-48af-b1c8-50a773fb53d1") : secret "metrics-server-cert" not found Feb 17 16:20:07 crc kubenswrapper[4672]: I0217 16:20:07.122832 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-bmw4m" event={"ID":"ebef7502-75af-4d09-98eb-b3fbfb5bf0ad","Type":"ContainerStarted","Data":"76fdc4a3b83d429d82108216583be72a8c2cc43091d5f03669ad4e04fc286005"} Feb 17 16:20:07 crc kubenswrapper[4672]: I0217 16:20:07.123415 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-bmw4m" Feb 17 16:20:07 crc kubenswrapper[4672]: I0217 16:20:07.126407 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-4p9xd" event={"ID":"697b2176-1abc-4887-9ba9-32e6e667a8a0","Type":"ContainerStarted","Data":"f994c2320fcd6c4ac58395e8cf096356b009dc9fefae5bf15d691a4aeed0dca6"} Feb 17 16:20:07 crc kubenswrapper[4672]: I0217 16:20:07.126626 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-4p9xd" Feb 17 16:20:07 crc kubenswrapper[4672]: I0217 16:20:07.128777 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-7tgjn" event={"ID":"a97a493d-5f21-4965-b7ad-aff4cffcfb37","Type":"ContainerStarted","Data":"06e5a27f0aabbfac59654cafa3907412521e1bf9ef92e039c966a88c99095895"} Feb 17 16:20:07 crc kubenswrapper[4672]: I0217 16:20:07.128876 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-7tgjn" Feb 17 16:20:07 crc kubenswrapper[4672]: I0217 16:20:07.130094 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-h75ck" event={"ID":"fa9ca2ad-545b-4125-a472-0aa969f560fd","Type":"ContainerStarted","Data":"f33f751a580982be95304419684807f1a1de36432d4d40fa969dddb54e28ce4f"} Feb 17 16:20:07 crc kubenswrapper[4672]: I0217 16:20:07.130224 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-h75ck" Feb 17 16:20:07 crc kubenswrapper[4672]: I0217 16:20:07.131097 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-67vt9" event={"ID":"f18754ba-fbb5-4741-a801-03326fd7714d","Type":"ContainerStarted","Data":"5c5bfbf154eb501212fbe54dbe903498ec164e8cc32aeb9f4710030d706e70d9"} Feb 17 16:20:07 crc kubenswrapper[4672]: I0217 16:20:07.131195 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-67vt9" Feb 17 16:20:07 crc kubenswrapper[4672]: I0217 16:20:07.133548 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kdl2h" 
event={"ID":"246842cd-06e9-4793-96a5-9b0dad79ce08","Type":"ContainerStarted","Data":"8d67d5ffd6f36c15f26d6dd46f5ba8835676c1c8501f9a74aa816927f2648782"} Feb 17 16:20:07 crc kubenswrapper[4672]: I0217 16:20:07.133601 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kdl2h" Feb 17 16:20:07 crc kubenswrapper[4672]: I0217 16:20:07.135292 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-n8sch" event={"ID":"8747c08b-53c8-45dc-98b0-124e58820cdb","Type":"ContainerStarted","Data":"0d698bb29890a84a6337d82b2efbb1d99a4cecb6dda19edc818c0a71fc3df73e"} Feb 17 16:20:07 crc kubenswrapper[4672]: I0217 16:20:07.135557 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-n8sch" Feb 17 16:20:07 crc kubenswrapper[4672]: I0217 16:20:07.137123 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-wck86" event={"ID":"ec59ef76-a144-4870-b714-4ddaeae5b741","Type":"ContainerStarted","Data":"c5faeaf30071605082e5a2f50a3876ee42d4a8d39c8b21079a6ac88f64485aa5"} Feb 17 16:20:07 crc kubenswrapper[4672]: I0217 16:20:07.137248 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-wck86" Feb 17 16:20:07 crc kubenswrapper[4672]: I0217 16:20:07.138342 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rlvlw" event={"ID":"820e1fb1-9bbe-47e8-a2d5-6e45235244b4","Type":"ContainerStarted","Data":"8ad76ffe50c0cb866b4c491e031901df4e5e081425bcea3835bdf3cd0e07990b"} Feb 17 16:20:07 crc kubenswrapper[4672]: I0217 16:20:07.138406 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rlvlw" Feb 17 16:20:07 crc kubenswrapper[4672]: I0217 16:20:07.142339 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-6xtt5" event={"ID":"c0c82835-0153-4ce1-be6a-9b748ced0671","Type":"ContainerStarted","Data":"c6e61f1a3f41b21159ebc30aff5969389be624dd81ab00346aaa5c52868c54e1"} Feb 17 16:20:07 crc kubenswrapper[4672]: I0217 16:20:07.142445 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-6xtt5" Feb 17 16:20:07 crc kubenswrapper[4672]: I0217 16:20:07.144163 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-4slcx" event={"ID":"481c13a0-8cdd-4753-9bae-31d536cd4779","Type":"ContainerStarted","Data":"79612a4997ea6690f9e6c1a5d3a992143337e7e219eb0ec14ef2bbdf209e82aa"} Feb 17 16:20:07 crc kubenswrapper[4672]: I0217 16:20:07.144250 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-4slcx" Feb 17 16:20:07 crc kubenswrapper[4672]: I0217 16:20:07.146363 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9sv2x" event={"ID":"f554f9dc-7778-4116-8d09-205f2c3671fd","Type":"ContainerStarted","Data":"22e2bcfd87eeca091ad5abd7d11444117fa8207ea829fff5ae669a25245b4ef0"} Feb 17 16:20:07 crc kubenswrapper[4672]: I0217 16:20:07.212878 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-wck86" podStartSLOduration=3.66420704 podStartE2EDuration="14.212860824s" podCreationTimestamp="2026-02-17 16:19:53 +0000 UTC" firstStartedPulling="2026-02-17 16:19:55.721417396 +0000 UTC m=+1004.475506128" lastFinishedPulling="2026-02-17 
16:20:06.27007118 +0000 UTC m=+1015.024159912" observedRunningTime="2026-02-17 16:20:07.212753881 +0000 UTC m=+1015.966842613" watchObservedRunningTime="2026-02-17 16:20:07.212860824 +0000 UTC m=+1015.966949556" Feb 17 16:20:07 crc kubenswrapper[4672]: I0217 16:20:07.218208 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-bmw4m" podStartSLOduration=3.6099807139999998 podStartE2EDuration="14.218193765s" podCreationTimestamp="2026-02-17 16:19:53 +0000 UTC" firstStartedPulling="2026-02-17 16:19:55.661942941 +0000 UTC m=+1004.416031673" lastFinishedPulling="2026-02-17 16:20:06.270155992 +0000 UTC m=+1015.024244724" observedRunningTime="2026-02-17 16:20:07.166792773 +0000 UTC m=+1015.920881505" watchObservedRunningTime="2026-02-17 16:20:07.218193765 +0000 UTC m=+1015.972282497" Feb 17 16:20:07 crc kubenswrapper[4672]: I0217 16:20:07.253162 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-h75ck" podStartSLOduration=3.56680948 podStartE2EDuration="14.253144731s" podCreationTimestamp="2026-02-17 16:19:53 +0000 UTC" firstStartedPulling="2026-02-17 16:19:55.61927225 +0000 UTC m=+1004.373360982" lastFinishedPulling="2026-02-17 16:20:06.305607501 +0000 UTC m=+1015.059696233" observedRunningTime="2026-02-17 16:20:07.251044315 +0000 UTC m=+1016.005133047" watchObservedRunningTime="2026-02-17 16:20:07.253144731 +0000 UTC m=+1016.007233463" Feb 17 16:20:07 crc kubenswrapper[4672]: I0217 16:20:07.307905 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-4p9xd" podStartSLOduration=3.626746638 podStartE2EDuration="14.307888091s" podCreationTimestamp="2026-02-17 16:19:53 +0000 UTC" firstStartedPulling="2026-02-17 16:19:55.616338013 +0000 UTC m=+1004.370426745" lastFinishedPulling="2026-02-17 
16:20:06.297467956 +0000 UTC m=+1015.051568198" observedRunningTime="2026-02-17 16:20:07.295566764 +0000 UTC m=+1016.049655496" watchObservedRunningTime="2026-02-17 16:20:07.307888091 +0000 UTC m=+1016.061976823" Feb 17 16:20:07 crc kubenswrapper[4672]: I0217 16:20:07.329326 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kdl2h" podStartSLOduration=3.5033256870000002 podStartE2EDuration="14.329308738s" podCreationTimestamp="2026-02-17 16:19:53 +0000 UTC" firstStartedPulling="2026-02-17 16:19:55.47960194 +0000 UTC m=+1004.233690672" lastFinishedPulling="2026-02-17 16:20:06.305584991 +0000 UTC m=+1015.059673723" observedRunningTime="2026-02-17 16:20:07.32407119 +0000 UTC m=+1016.078159922" watchObservedRunningTime="2026-02-17 16:20:07.329308738 +0000 UTC m=+1016.083397470" Feb 17 16:20:07 crc kubenswrapper[4672]: I0217 16:20:07.361886 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-67vt9" podStartSLOduration=3.456026716 podStartE2EDuration="14.361863141s" podCreationTimestamp="2026-02-17 16:19:53 +0000 UTC" firstStartedPulling="2026-02-17 16:19:55.457870765 +0000 UTC m=+1004.211959497" lastFinishedPulling="2026-02-17 16:20:06.36370718 +0000 UTC m=+1015.117795922" observedRunningTime="2026-02-17 16:20:07.354705301 +0000 UTC m=+1016.108794033" watchObservedRunningTime="2026-02-17 16:20:07.361863141 +0000 UTC m=+1016.115951873" Feb 17 16:20:07 crc kubenswrapper[4672]: I0217 16:20:07.402468 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-7tgjn" podStartSLOduration=3.467603862 podStartE2EDuration="14.402451046s" podCreationTimestamp="2026-02-17 16:19:53 +0000 UTC" firstStartedPulling="2026-02-17 16:19:55.479933069 +0000 UTC m=+1004.234021801" lastFinishedPulling="2026-02-17 
16:20:06.414780233 +0000 UTC m=+1015.168868985" observedRunningTime="2026-02-17 16:20:07.399925879 +0000 UTC m=+1016.154014611" watchObservedRunningTime="2026-02-17 16:20:07.402451046 +0000 UTC m=+1016.156539778" Feb 17 16:20:07 crc kubenswrapper[4672]: I0217 16:20:07.429171 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rlvlw" podStartSLOduration=3.714099112 podStartE2EDuration="14.429156733s" podCreationTimestamp="2026-02-17 16:19:53 +0000 UTC" firstStartedPulling="2026-02-17 16:19:55.619017664 +0000 UTC m=+1004.373106396" lastFinishedPulling="2026-02-17 16:20:06.334075285 +0000 UTC m=+1015.088164017" observedRunningTime="2026-02-17 16:20:07.427375866 +0000 UTC m=+1016.181464598" watchObservedRunningTime="2026-02-17 16:20:07.429156733 +0000 UTC m=+1016.183245465" Feb 17 16:20:07 crc kubenswrapper[4672]: I0217 16:20:07.454706 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-6xtt5" podStartSLOduration=3.944488755 podStartE2EDuration="14.45468998s" podCreationTimestamp="2026-02-17 16:19:53 +0000 UTC" firstStartedPulling="2026-02-17 16:19:55.78043133 +0000 UTC m=+1004.534520062" lastFinishedPulling="2026-02-17 16:20:06.290632545 +0000 UTC m=+1015.044721287" observedRunningTime="2026-02-17 16:20:07.451901516 +0000 UTC m=+1016.205990248" watchObservedRunningTime="2026-02-17 16:20:07.45468998 +0000 UTC m=+1016.208778712" Feb 17 16:20:07 crc kubenswrapper[4672]: I0217 16:20:07.520058 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-4slcx" podStartSLOduration=3.579343072 podStartE2EDuration="14.520035461s" podCreationTimestamp="2026-02-17 16:19:53 +0000 UTC" firstStartedPulling="2026-02-17 16:19:55.364684176 +0000 UTC m=+1004.118772908" lastFinishedPulling="2026-02-17 16:20:06.305376565 
+0000 UTC m=+1015.059465297" observedRunningTime="2026-02-17 16:20:07.497266408 +0000 UTC m=+1016.251355140" watchObservedRunningTime="2026-02-17 16:20:07.520035461 +0000 UTC m=+1016.274124213" Feb 17 16:20:07 crc kubenswrapper[4672]: I0217 16:20:07.521252 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9sv2x" podStartSLOduration=3.73139719 podStartE2EDuration="14.521246373s" podCreationTimestamp="2026-02-17 16:19:53 +0000 UTC" firstStartedPulling="2026-02-17 16:19:55.48034651 +0000 UTC m=+1004.234435242" lastFinishedPulling="2026-02-17 16:20:06.270195693 +0000 UTC m=+1015.024284425" observedRunningTime="2026-02-17 16:20:07.5203981 +0000 UTC m=+1016.274486832" watchObservedRunningTime="2026-02-17 16:20:07.521246373 +0000 UTC m=+1016.275335135" Feb 17 16:20:07 crc kubenswrapper[4672]: I0217 16:20:07.539252 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-n8sch" podStartSLOduration=3.749303424 podStartE2EDuration="14.539236449s" podCreationTimestamp="2026-02-17 16:19:53 +0000 UTC" firstStartedPulling="2026-02-17 16:19:55.480139375 +0000 UTC m=+1004.234228107" lastFinishedPulling="2026-02-17 16:20:06.2700724 +0000 UTC m=+1015.024161132" observedRunningTime="2026-02-17 16:20:07.534551185 +0000 UTC m=+1016.288639917" watchObservedRunningTime="2026-02-17 16:20:07.539236449 +0000 UTC m=+1016.293325181" Feb 17 16:20:08 crc kubenswrapper[4672]: I0217 16:20:08.153223 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9sv2x" Feb 17 16:20:09 crc kubenswrapper[4672]: I0217 16:20:09.591588 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8396e964-bc62-4fe3-9a1e-b965b0ca30f5-cert\") pod 
\"infra-operator-controller-manager-79d975b745-bqchl\" (UID: \"8396e964-bc62-4fe3-9a1e-b965b0ca30f5\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bqchl" Feb 17 16:20:09 crc kubenswrapper[4672]: E0217 16:20:09.591814 4672 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 16:20:09 crc kubenswrapper[4672]: E0217 16:20:09.592047 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8396e964-bc62-4fe3-9a1e-b965b0ca30f5-cert podName:8396e964-bc62-4fe3-9a1e-b965b0ca30f5 nodeName:}" failed. No retries permitted until 2026-02-17 16:20:25.592029788 +0000 UTC m=+1034.346118540 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8396e964-bc62-4fe3-9a1e-b965b0ca30f5-cert") pod "infra-operator-controller-manager-79d975b745-bqchl" (UID: "8396e964-bc62-4fe3-9a1e-b965b0ca30f5") : secret "infra-operator-webhook-server-cert" not found Feb 17 16:20:10 crc kubenswrapper[4672]: I0217 16:20:10.098408 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd50b560-8522-43e7-bbb9-10c5097ee367-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvzfsn\" (UID: \"cd50b560-8522-43e7-bbb9-10c5097ee367\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvzfsn" Feb 17 16:20:10 crc kubenswrapper[4672]: I0217 16:20:10.105152 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd50b560-8522-43e7-bbb9-10c5097ee367-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvzfsn\" (UID: \"cd50b560-8522-43e7-bbb9-10c5097ee367\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvzfsn" Feb 17 16:20:10 crc kubenswrapper[4672]: I0217 16:20:10.164956 
4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvzfsn" Feb 17 16:20:10 crc kubenswrapper[4672]: I0217 16:20:10.504113 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6be94508-f499-48af-b1c8-50a773fb53d1-webhook-certs\") pod \"openstack-operator-controller-manager-66554dbdcf-jm2nh\" (UID: \"6be94508-f499-48af-b1c8-50a773fb53d1\") " pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-jm2nh" Feb 17 16:20:10 crc kubenswrapper[4672]: I0217 16:20:10.504184 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6be94508-f499-48af-b1c8-50a773fb53d1-metrics-certs\") pod \"openstack-operator-controller-manager-66554dbdcf-jm2nh\" (UID: \"6be94508-f499-48af-b1c8-50a773fb53d1\") " pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-jm2nh" Feb 17 16:20:10 crc kubenswrapper[4672]: E0217 16:20:10.504366 4672 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 16:20:10 crc kubenswrapper[4672]: E0217 16:20:10.504463 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6be94508-f499-48af-b1c8-50a773fb53d1-webhook-certs podName:6be94508-f499-48af-b1c8-50a773fb53d1 nodeName:}" failed. No retries permitted until 2026-02-17 16:20:26.504437298 +0000 UTC m=+1035.258526030 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6be94508-f499-48af-b1c8-50a773fb53d1-webhook-certs") pod "openstack-operator-controller-manager-66554dbdcf-jm2nh" (UID: "6be94508-f499-48af-b1c8-50a773fb53d1") : secret "webhook-server-cert" not found Feb 17 16:20:10 crc kubenswrapper[4672]: I0217 16:20:10.512679 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6be94508-f499-48af-b1c8-50a773fb53d1-metrics-certs\") pod \"openstack-operator-controller-manager-66554dbdcf-jm2nh\" (UID: \"6be94508-f499-48af-b1c8-50a773fb53d1\") " pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-jm2nh" Feb 17 16:20:12 crc kubenswrapper[4672]: I0217 16:20:12.151746 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvzfsn"] Feb 17 16:20:12 crc kubenswrapper[4672]: I0217 16:20:12.190704 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5d7c6cd576-c5g8f" event={"ID":"761b3282-6d8a-4613-8191-fe2e37822d19","Type":"ContainerStarted","Data":"e40f53e66df206c9f2989f8bc8db4f5504af28f2e43e9699b20cda47014ea27a"} Feb 17 16:20:12 crc kubenswrapper[4672]: I0217 16:20:12.190948 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5d7c6cd576-c5g8f" Feb 17 16:20:12 crc kubenswrapper[4672]: I0217 16:20:12.193784 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-ht2sv" event={"ID":"9ea06f37-5c5b-45f1-b6ba-fb72f5e8f86a","Type":"ContainerStarted","Data":"19adc60812336f27710c58da85eeceecafa9a8eaf6824135f79464608d9f3dc9"} Feb 17 16:20:12 crc kubenswrapper[4672]: I0217 16:20:12.194001 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-ht2sv"
Feb 17 16:20:12 crc kubenswrapper[4672]: I0217 16:20:12.195783 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvzfsn" event={"ID":"cd50b560-8522-43e7-bbb9-10c5097ee367","Type":"ContainerStarted","Data":"363d2025207bffefdcf441e3b8f3d79c3762025c1c30f2b246a71001d7fdf620"}
Feb 17 16:20:12 crc kubenswrapper[4672]: I0217 16:20:12.215932 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5d7c6cd576-c5g8f" podStartSLOduration=3.261954495 podStartE2EDuration="19.215917096s" podCreationTimestamp="2026-02-17 16:19:53 +0000 UTC" firstStartedPulling="2026-02-17 16:19:55.857403928 +0000 UTC m=+1004.611492650" lastFinishedPulling="2026-02-17 16:20:11.811366519 +0000 UTC m=+1020.565455251" observedRunningTime="2026-02-17 16:20:12.213935163 +0000 UTC m=+1020.968023885" watchObservedRunningTime="2026-02-17 16:20:12.215917096 +0000 UTC m=+1020.970005828"
Feb 17 16:20:12 crc kubenswrapper[4672]: I0217 16:20:12.232535 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-ht2sv" podStartSLOduration=3.413702783 podStartE2EDuration="19.232522715s" podCreationTimestamp="2026-02-17 16:19:53 +0000 UTC" firstStartedPulling="2026-02-17 16:19:55.893787461 +0000 UTC m=+1004.647876193" lastFinishedPulling="2026-02-17 16:20:11.712607393 +0000 UTC m=+1020.466696125" observedRunningTime="2026-02-17 16:20:12.227578195 +0000 UTC m=+1020.981666927" watchObservedRunningTime="2026-02-17 16:20:12.232522715 +0000 UTC m=+1020.986611447"
Feb 17 16:20:13 crc kubenswrapper[4672]: I0217 16:20:13.815227 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-67vt9"
Feb 17 16:20:13 crc kubenswrapper[4672]: I0217 16:20:13.880255 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-n8sch"
Feb 17 16:20:13 crc kubenswrapper[4672]: I0217 16:20:13.882176 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-4p9xd"
Feb 17 16:20:13 crc kubenswrapper[4672]: I0217 16:20:13.910334 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-kdl2h"
Feb 17 16:20:13 crc kubenswrapper[4672]: I0217 16:20:13.926654 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-9sv2x"
Feb 17 16:20:14 crc kubenswrapper[4672]: I0217 16:20:14.007878 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-4slcx"
Feb 17 16:20:14 crc kubenswrapper[4672]: I0217 16:20:14.043312 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-7tgjn"
Feb 17 16:20:14 crc kubenswrapper[4672]: I0217 16:20:14.112125 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-bmw4m"
Feb 17 16:20:14 crc kubenswrapper[4672]: I0217 16:20:14.182680 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-h75ck"
Feb 17 16:20:14 crc kubenswrapper[4672]: I0217 16:20:14.209554 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rlvlw"
Feb 17 16:20:14 crc kubenswrapper[4672]: I0217 16:20:14.479919 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-6xtt5"
Feb 17 16:20:14 crc kubenswrapper[4672]: I0217 16:20:14.586560 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-wck86"
Feb 17 16:20:23 crc kubenswrapper[4672]: E0217 16:20:23.133755 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6"
Feb 17 16:20:23 crc kubenswrapper[4672]: E0217 16:20:23.134424 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2nhvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-m45mw_openstack-operators(72370045-528c-4239-8c6f-24f435b3736b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 17 16:20:23 crc kubenswrapper[4672]: E0217 16:20:23.135567 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-7866795846-m45mw" podUID="72370045-528c-4239-8c6f-24f435b3736b"
Feb 17 16:20:23 crc kubenswrapper[4672]: E0217 16:20:23.702487 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838"
Feb 17 16:20:23 crc kubenswrapper[4672]: E0217 16:20:23.702703 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hbmdq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-gxfcs_openstack-operators(ac8ba5c6-2841-4a02-8707-54be52de56f1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 17 16:20:23 crc kubenswrapper[4672]: E0217 16:20:23.704057 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gxfcs" podUID="ac8ba5c6-2841-4a02-8707-54be52de56f1"
Feb 17 16:20:24 crc kubenswrapper[4672]: I0217 16:20:24.580825 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-ht2sv"
Feb 17 16:20:24 crc kubenswrapper[4672]: I0217 16:20:24.690151 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5d7c6cd576-c5g8f"
Feb 17 16:20:25 crc kubenswrapper[4672]: I0217 16:20:25.315855 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-mw56t" event={"ID":"a1ac6199-2cd8-48e9-9303-39fba36f1369","Type":"ContainerStarted","Data":"d5b4b4d12cf690a660c4ddd5ec69863ab75e3d9d66f522c0152c6676e5e2fadb"}
Feb 17 16:20:25 crc kubenswrapper[4672]: I0217 16:20:25.316533 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-mw56t"
Feb 17 16:20:25 crc kubenswrapper[4672]: I0217 16:20:25.319232 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-x7798" event={"ID":"8b28f180-8f69-4141-827f-2eb95e876b84","Type":"ContainerStarted","Data":"de90fa0ca88c6af767eb654131d71de38db9b17a9e5a769f0ad28dcc1fbe8434"}
Feb 17 16:20:25 crc kubenswrapper[4672]: I0217 16:20:25.320649 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-x7798"
Feb 17 16:20:25 crc kubenswrapper[4672]: I0217 16:20:25.323374 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wcmsp" event={"ID":"92490ad7-6905-4c57-9d64-e7b1acbb44eb","Type":"ContainerStarted","Data":"c26bcb728879b3438cb7cd08e1c89ae61c8f000fe8ca276bf665ac757f79510f"}
Feb 17 16:20:25 crc kubenswrapper[4672]: I0217 16:20:25.324929 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-rjn5h" event={"ID":"4d12b414-59e2-49aa-9463-ae2061b1aa80","Type":"ContainerStarted","Data":"b41aa85acf98c0ef0b19a9b21ae92687376804377528cf22c2c2fbd5564ceaf3"}
Feb 17 16:20:25 crc kubenswrapper[4672]: I0217 16:20:25.325345 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-rjn5h"
Feb 17 16:20:25 crc kubenswrapper[4672]: I0217 16:20:25.327011 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvzfsn" event={"ID":"cd50b560-8522-43e7-bbb9-10c5097ee367","Type":"ContainerStarted","Data":"f7b37985f06c8844828e4ccc0beaeef6450562728474b82fa531b24a012a1e48"}
Feb 17 16:20:25 crc kubenswrapper[4672]: I0217 16:20:25.327165 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvzfsn"
Feb 17 16:20:25 crc kubenswrapper[4672]: I0217 16:20:25.339990 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-mw56t" podStartSLOduration=3.226883056 podStartE2EDuration="32.339973671s" podCreationTimestamp="2026-02-17 16:19:53 +0000 UTC" firstStartedPulling="2026-02-17 16:19:55.760892652 +0000 UTC m=+1004.514981384" lastFinishedPulling="2026-02-17 16:20:24.873983267 +0000 UTC m=+1033.628071999" observedRunningTime="2026-02-17 16:20:25.333933901 +0000 UTC m=+1034.088022643" watchObservedRunningTime="2026-02-17 16:20:25.339973671 +0000 UTC m=+1034.094062403"
Feb 17 16:20:25 crc kubenswrapper[4672]: I0217 16:20:25.358428 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wcmsp" podStartSLOduration=2.486661556 podStartE2EDuration="31.358405229s" podCreationTimestamp="2026-02-17 16:19:54 +0000 UTC" firstStartedPulling="2026-02-17 16:19:55.932089646 +0000 UTC m=+1004.686178378" lastFinishedPulling="2026-02-17 16:20:24.803833319 +0000 UTC m=+1033.557922051" observedRunningTime="2026-02-17 16:20:25.347822939 +0000 UTC m=+1034.101911701" watchObservedRunningTime="2026-02-17 16:20:25.358405229 +0000 UTC m=+1034.112493981"
Feb 17 16:20:25 crc kubenswrapper[4672]: I0217 16:20:25.363772 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-x7798" podStartSLOduration=2.405599339 podStartE2EDuration="31.363763211s" podCreationTimestamp="2026-02-17 16:19:54 +0000 UTC" firstStartedPulling="2026-02-17 16:19:55.932260231 +0000 UTC m=+1004.686349053" lastFinishedPulling="2026-02-17 16:20:24.890424193 +0000 UTC m=+1033.644512925" observedRunningTime="2026-02-17 16:20:25.363388731 +0000 UTC m=+1034.117477473" watchObservedRunningTime="2026-02-17 16:20:25.363763211 +0000 UTC m=+1034.117851953"
Feb 17 16:20:25 crc kubenswrapper[4672]: I0217 16:20:25.392787 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvzfsn" podStartSLOduration=19.674338659 podStartE2EDuration="32.39277113s" podCreationTimestamp="2026-02-17 16:19:53 +0000 UTC" firstStartedPulling="2026-02-17 16:20:12.170756599 +0000 UTC m=+1020.924845321" lastFinishedPulling="2026-02-17 16:20:24.88918906 +0000 UTC m=+1033.643277792" observedRunningTime="2026-02-17 16:20:25.387010317 +0000 UTC m=+1034.141099049" watchObservedRunningTime="2026-02-17 16:20:25.39277113 +0000 UTC m=+1034.146859862"
Feb 17 16:20:25 crc kubenswrapper[4672]: I0217 16:20:25.408401 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-rjn5h" podStartSLOduration=3.479334222 podStartE2EDuration="32.408386213s" podCreationTimestamp="2026-02-17 16:19:53 +0000 UTC" firstStartedPulling="2026-02-17 16:19:55.937064118 +0000 UTC m=+1004.691152850" lastFinishedPulling="2026-02-17 16:20:24.866116119 +0000 UTC m=+1033.620204841" observedRunningTime="2026-02-17 16:20:25.404129401 +0000 UTC m=+1034.158218133" watchObservedRunningTime="2026-02-17 16:20:25.408386213 +0000 UTC m=+1034.162474945"
Feb 17 16:20:25 crc kubenswrapper[4672]: I0217 16:20:25.612173 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8396e964-bc62-4fe3-9a1e-b965b0ca30f5-cert\") pod \"infra-operator-controller-manager-79d975b745-bqchl\" (UID: \"8396e964-bc62-4fe3-9a1e-b965b0ca30f5\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bqchl"
Feb 17 16:20:25 crc kubenswrapper[4672]: I0217 16:20:25.617661 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8396e964-bc62-4fe3-9a1e-b965b0ca30f5-cert\") pod \"infra-operator-controller-manager-79d975b745-bqchl\" (UID: \"8396e964-bc62-4fe3-9a1e-b965b0ca30f5\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bqchl"
Feb 17 16:20:25 crc kubenswrapper[4672]: I0217 16:20:25.765704 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-nx85j"
Feb 17 16:20:25 crc kubenswrapper[4672]: I0217 16:20:25.774799 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-bqchl"
Feb 17 16:20:26 crc kubenswrapper[4672]: I0217 16:20:26.113155 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-bqchl"]
Feb 17 16:20:26 crc kubenswrapper[4672]: W0217 16:20:26.113943 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8396e964_bc62_4fe3_9a1e_b965b0ca30f5.slice/crio-49af4f10a604f3409e2b030a77325b8f52ec7da7c8e29e22492f6b6262a053e0 WatchSource:0}: Error finding container 49af4f10a604f3409e2b030a77325b8f52ec7da7c8e29e22492f6b6262a053e0: Status 404 returned error can't find the container with id 49af4f10a604f3409e2b030a77325b8f52ec7da7c8e29e22492f6b6262a053e0
Feb 17 16:20:26 crc kubenswrapper[4672]: I0217 16:20:26.334588 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-bqchl" event={"ID":"8396e964-bc62-4fe3-9a1e-b965b0ca30f5","Type":"ContainerStarted","Data":"49af4f10a604f3409e2b030a77325b8f52ec7da7c8e29e22492f6b6262a053e0"}
Feb 17 16:20:26 crc kubenswrapper[4672]: I0217 16:20:26.526498 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6be94508-f499-48af-b1c8-50a773fb53d1-webhook-certs\") pod \"openstack-operator-controller-manager-66554dbdcf-jm2nh\" (UID: \"6be94508-f499-48af-b1c8-50a773fb53d1\") " pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-jm2nh"
Feb 17 16:20:26 crc kubenswrapper[4672]: I0217 16:20:26.534290 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6be94508-f499-48af-b1c8-50a773fb53d1-webhook-certs\") pod \"openstack-operator-controller-manager-66554dbdcf-jm2nh\" (UID: \"6be94508-f499-48af-b1c8-50a773fb53d1\") " pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-jm2nh"
Feb 17 16:20:26 crc kubenswrapper[4672]: I0217 16:20:26.792001 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-xzcph"
Feb 17 16:20:26 crc kubenswrapper[4672]: I0217 16:20:26.800331 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-jm2nh"
Feb 17 16:20:27 crc kubenswrapper[4672]: I0217 16:20:27.288760 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-66554dbdcf-jm2nh"]
Feb 17 16:20:27 crc kubenswrapper[4672]: I0217 16:20:27.343818 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-jm2nh" event={"ID":"6be94508-f499-48af-b1c8-50a773fb53d1","Type":"ContainerStarted","Data":"1492a2dcd78d945b84ecdbbf92049f4036465834f82b7389d879b290bd4f9dee"}
Feb 17 16:20:28 crc kubenswrapper[4672]: I0217 16:20:28.356122 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-bqchl" event={"ID":"8396e964-bc62-4fe3-9a1e-b965b0ca30f5","Type":"ContainerStarted","Data":"4efc3f13aad3a87c8b196fd5b0fa34f5433d5913e4823157ea99792d9d06c503"}
Feb 17 16:20:28 crc kubenswrapper[4672]: I0217 16:20:28.356261 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-bqchl"
Feb 17 16:20:28 crc kubenswrapper[4672]: I0217 16:20:28.359131 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-jm2nh" event={"ID":"6be94508-f499-48af-b1c8-50a773fb53d1","Type":"ContainerStarted","Data":"9520e957c353671e03ecaebac650627c9ab543ac9de8285313da89621b8ac14b"}
Feb 17 16:20:28 crc kubenswrapper[4672]: I0217 16:20:28.359372 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-jm2nh"
Feb 17 16:20:28 crc kubenswrapper[4672]: I0217 16:20:28.385126 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-bqchl" podStartSLOduration=33.41280686 podStartE2EDuration="35.385103156s" podCreationTimestamp="2026-02-17 16:19:53 +0000 UTC" firstStartedPulling="2026-02-17 16:20:26.115733951 +0000 UTC m=+1034.869822683" lastFinishedPulling="2026-02-17 16:20:28.088030247 +0000 UTC m=+1036.842118979" observedRunningTime="2026-02-17 16:20:28.382328093 +0000 UTC m=+1037.136416875" watchObservedRunningTime="2026-02-17 16:20:28.385103156 +0000 UTC m=+1037.139191918"
Feb 17 16:20:28 crc kubenswrapper[4672]: I0217 16:20:28.418884 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-jm2nh" podStartSLOduration=34.418855701 podStartE2EDuration="34.418855701s" podCreationTimestamp="2026-02-17 16:19:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:20:28.414550817 +0000 UTC m=+1037.168639599" watchObservedRunningTime="2026-02-17 16:20:28.418855701 +0000 UTC m=+1037.172944463"
Feb 17 16:20:30 crc kubenswrapper[4672]: I0217 16:20:30.172827 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvzfsn"
Feb 17 16:20:34 crc kubenswrapper[4672]: I0217 16:20:34.339848 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-mw56t"
Feb 17 16:20:34 crc kubenswrapper[4672]: I0217 16:20:34.627444 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-rjn5h"
Feb 17 16:20:34 crc kubenswrapper[4672]: I0217 16:20:34.815599 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-x7798"
Feb 17 16:20:35 crc kubenswrapper[4672]: I0217 16:20:35.783471 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-bqchl"
Feb 17 16:20:36 crc kubenswrapper[4672]: I0217 16:20:36.829857 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-66554dbdcf-jm2nh"
Feb 17 16:20:37 crc kubenswrapper[4672]: E0217 16:20:37.974744 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-m45mw" podUID="72370045-528c-4239-8c6f-24f435b3736b"
Feb 17 16:20:37 crc kubenswrapper[4672]: E0217 16:20:37.974925 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gxfcs" podUID="ac8ba5c6-2841-4a02-8707-54be52de56f1"
Feb 17 16:20:50 crc kubenswrapper[4672]: I0217 16:20:50.548380 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-m45mw" event={"ID":"72370045-528c-4239-8c6f-24f435b3736b","Type":"ContainerStarted","Data":"6149b928c745eef8717b06bf704bfdf271ce15c1f4cd7722b7f7610912716b07"}
Feb 17 16:20:50 crc kubenswrapper[4672]: I0217 16:20:50.551416 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gxfcs" event={"ID":"ac8ba5c6-2841-4a02-8707-54be52de56f1","Type":"ContainerStarted","Data":"2cc4deee63744a28d41dea5738894e362e0dda3ea477c1c92017dbda5c12ed36"}
Feb 17 16:20:50 crc kubenswrapper[4672]: I0217 16:20:50.551903 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gxfcs"
Feb 17 16:20:50 crc kubenswrapper[4672]: I0217 16:20:50.575576 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-m45mw" podStartSLOduration=4.041372351 podStartE2EDuration="57.575557904s" podCreationTimestamp="2026-02-17 16:19:53 +0000 UTC" firstStartedPulling="2026-02-17 16:19:55.906898889 +0000 UTC m=+1004.660987621" lastFinishedPulling="2026-02-17 16:20:49.441084402 +0000 UTC m=+1058.195173174" observedRunningTime="2026-02-17 16:20:50.573632323 +0000 UTC m=+1059.327721055" watchObservedRunningTime="2026-02-17 16:20:50.575557904 +0000 UTC m=+1059.329646636"
Feb 17 16:20:54 crc kubenswrapper[4672]: I0217 16:20:54.380269 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gxfcs"
Feb 17 16:20:54 crc kubenswrapper[4672]: I0217 16:20:54.404855 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gxfcs" podStartSLOduration=7.881895827 podStartE2EDuration="1m1.404832322s" podCreationTimestamp="2026-02-17 16:19:53 +0000 UTC" firstStartedPulling="2026-02-17 16:19:55.948394968 +0000 UTC m=+1004.702483690" lastFinishedPulling="2026-02-17 16:20:49.471331453 +0000 UTC m=+1058.225420185" observedRunningTime="2026-02-17 16:20:50.591305971 +0000 UTC m=+1059.345394713" watchObservedRunningTime="2026-02-17 16:20:54.404832322 +0000 UTC m=+1063.158921064"
Feb 17 16:20:54 crc kubenswrapper[4672]: I0217 16:20:54.771164 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-m45mw"
Feb 17 16:20:54 crc kubenswrapper[4672]: I0217 16:20:54.774645 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-m45mw"
Feb 17 16:21:11 crc kubenswrapper[4672]: I0217 16:21:11.572720 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rjb2z"]
Feb 17 16:21:11 crc kubenswrapper[4672]: I0217 16:21:11.574821 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-rjb2z"
Feb 17 16:21:11 crc kubenswrapper[4672]: I0217 16:21:11.576791 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Feb 17 16:21:11 crc kubenswrapper[4672]: I0217 16:21:11.578228 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Feb 17 16:21:11 crc kubenswrapper[4672]: I0217 16:21:11.580345 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xw85\" (UniqueName: \"kubernetes.io/projected/6991c3ec-74a9-4191-812c-1798521a6411-kube-api-access-9xw85\") pod \"dnsmasq-dns-675f4bcbfc-rjb2z\" (UID: \"6991c3ec-74a9-4191-812c-1798521a6411\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rjb2z"
Feb 17 16:21:11 crc kubenswrapper[4672]: I0217 16:21:11.580422 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6991c3ec-74a9-4191-812c-1798521a6411-config\") pod \"dnsmasq-dns-675f4bcbfc-rjb2z\" (UID: \"6991c3ec-74a9-4191-812c-1798521a6411\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rjb2z"
Feb 17 16:21:11 crc kubenswrapper[4672]: I0217 16:21:11.582393 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rjb2z"]
Feb 17 16:21:11 crc kubenswrapper[4672]: I0217 16:21:11.582458 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Feb 17 16:21:11 crc kubenswrapper[4672]: I0217 16:21:11.582737 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-j27s5"
Feb 17 16:21:11 crc kubenswrapper[4672]: I0217 16:21:11.636773 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pjffq"]
Feb 17 16:21:11 crc kubenswrapper[4672]: I0217 16:21:11.638809 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-pjffq"
Feb 17 16:21:11 crc kubenswrapper[4672]: I0217 16:21:11.641719 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Feb 17 16:21:11 crc kubenswrapper[4672]: I0217 16:21:11.663407 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pjffq"]
Feb 17 16:21:11 crc kubenswrapper[4672]: I0217 16:21:11.681612 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xw85\" (UniqueName: \"kubernetes.io/projected/6991c3ec-74a9-4191-812c-1798521a6411-kube-api-access-9xw85\") pod \"dnsmasq-dns-675f4bcbfc-rjb2z\" (UID: \"6991c3ec-74a9-4191-812c-1798521a6411\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rjb2z"
Feb 17 16:21:11 crc kubenswrapper[4672]: I0217 16:21:11.681674 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6991c3ec-74a9-4191-812c-1798521a6411-config\") pod \"dnsmasq-dns-675f4bcbfc-rjb2z\" (UID: \"6991c3ec-74a9-4191-812c-1798521a6411\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rjb2z"
Feb 17 16:21:11 crc kubenswrapper[4672]: I0217 16:21:11.683150 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6991c3ec-74a9-4191-812c-1798521a6411-config\") pod \"dnsmasq-dns-675f4bcbfc-rjb2z\" (UID: \"6991c3ec-74a9-4191-812c-1798521a6411\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rjb2z"
Feb 17 16:21:11 crc kubenswrapper[4672]: I0217 16:21:11.706635 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xw85\" (UniqueName: \"kubernetes.io/projected/6991c3ec-74a9-4191-812c-1798521a6411-kube-api-access-9xw85\") pod \"dnsmasq-dns-675f4bcbfc-rjb2z\" (UID: \"6991c3ec-74a9-4191-812c-1798521a6411\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rjb2z"
Feb 17 16:21:11 crc kubenswrapper[4672]: I0217 16:21:11.785320 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn2mp\" (UniqueName: \"kubernetes.io/projected/7474d3f0-4950-4a9b-a384-d0bb442fbd84-kube-api-access-sn2mp\") pod \"dnsmasq-dns-78dd6ddcc-pjffq\" (UID: \"7474d3f0-4950-4a9b-a384-d0bb442fbd84\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pjffq"
Feb 17 16:21:11 crc kubenswrapper[4672]: I0217 16:21:11.785432 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7474d3f0-4950-4a9b-a384-d0bb442fbd84-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-pjffq\" (UID: \"7474d3f0-4950-4a9b-a384-d0bb442fbd84\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pjffq"
Feb 17 16:21:11 crc kubenswrapper[4672]: I0217 16:21:11.785464 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7474d3f0-4950-4a9b-a384-d0bb442fbd84-config\") pod \"dnsmasq-dns-78dd6ddcc-pjffq\" (UID: \"7474d3f0-4950-4a9b-a384-d0bb442fbd84\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pjffq"
Feb 17 16:21:11 crc kubenswrapper[4672]: I0217 16:21:11.886360 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn2mp\" (UniqueName: \"kubernetes.io/projected/7474d3f0-4950-4a9b-a384-d0bb442fbd84-kube-api-access-sn2mp\") pod \"dnsmasq-dns-78dd6ddcc-pjffq\" (UID: \"7474d3f0-4950-4a9b-a384-d0bb442fbd84\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pjffq"
Feb 17 16:21:11 crc kubenswrapper[4672]: I0217 16:21:11.886454 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7474d3f0-4950-4a9b-a384-d0bb442fbd84-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-pjffq\" (UID: \"7474d3f0-4950-4a9b-a384-d0bb442fbd84\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pjffq"
Feb 17 16:21:11 crc kubenswrapper[4672]: I0217 16:21:11.886473 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7474d3f0-4950-4a9b-a384-d0bb442fbd84-config\") pod \"dnsmasq-dns-78dd6ddcc-pjffq\" (UID: \"7474d3f0-4950-4a9b-a384-d0bb442fbd84\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pjffq"
Feb 17 16:21:11 crc kubenswrapper[4672]: I0217 16:21:11.887170 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7474d3f0-4950-4a9b-a384-d0bb442fbd84-config\") pod \"dnsmasq-dns-78dd6ddcc-pjffq\" (UID: \"7474d3f0-4950-4a9b-a384-d0bb442fbd84\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pjffq"
Feb 17 16:21:11 crc kubenswrapper[4672]: I0217 16:21:11.889146 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Feb 17 16:21:11 crc kubenswrapper[4672]: I0217 16:21:11.896398 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-j27s5"
Feb 17 16:21:11 crc kubenswrapper[4672]: I0217 16:21:11.897582 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7474d3f0-4950-4a9b-a384-d0bb442fbd84-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-pjffq\" (UID: \"7474d3f0-4950-4a9b-a384-d0bb442fbd84\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pjffq"
Feb 17 16:21:11 crc kubenswrapper[4672]: I0217 16:21:11.904809 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-rjb2z"
Feb 17 16:21:11 crc kubenswrapper[4672]: I0217 16:21:11.966926 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn2mp\" (UniqueName: \"kubernetes.io/projected/7474d3f0-4950-4a9b-a384-d0bb442fbd84-kube-api-access-sn2mp\") pod \"dnsmasq-dns-78dd6ddcc-pjffq\" (UID: \"7474d3f0-4950-4a9b-a384-d0bb442fbd84\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pjffq"
Feb 17 16:21:12 crc kubenswrapper[4672]: I0217 16:21:12.260576 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-pjffq"
Feb 17 16:21:12 crc kubenswrapper[4672]: I0217 16:21:12.416485 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rjb2z"]
Feb 17 16:21:12 crc kubenswrapper[4672]: W0217 16:21:12.433696 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6991c3ec_74a9_4191_812c_1798521a6411.slice/crio-e0ff70b46e7092b82b58ee2b59f980ddcf4e7788b46964794b0009e806e39998 WatchSource:0}: Error finding container e0ff70b46e7092b82b58ee2b59f980ddcf4e7788b46964794b0009e806e39998: Status 404 returned error can't find the container with id e0ff70b46e7092b82b58ee2b59f980ddcf4e7788b46964794b0009e806e39998
Feb 17 16:21:12 crc kubenswrapper[4672]: I0217 16:21:12.708609 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pjffq"]
Feb 17 16:21:12 crc kubenswrapper[4672]: I0217 16:21:12.773183 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-pjffq" event={"ID":"7474d3f0-4950-4a9b-a384-d0bb442fbd84","Type":"ContainerStarted","Data":"6a715c442d71a6db9277b64430eb4ad001abfb33d2aa3dca270c12f80b19d70c"}
Feb 17 16:21:12 crc kubenswrapper[4672]: I0217 16:21:12.774760 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-rjb2z" event={"ID":"6991c3ec-74a9-4191-812c-1798521a6411","Type":"ContainerStarted","Data":"e0ff70b46e7092b82b58ee2b59f980ddcf4e7788b46964794b0009e806e39998"}
Feb 17 16:21:14 crc kubenswrapper[4672]: I0217 16:21:14.152874 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rjb2z"]
Feb 17 16:21:14 crc kubenswrapper[4672]: I0217 16:21:14.190499 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-284jx"]
Feb 17 16:21:14 crc kubenswrapper[4672]: I0217 16:21:14.191659 4672 util.go:30] "No sandbox for pod can be
found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-284jx" Feb 17 16:21:14 crc kubenswrapper[4672]: I0217 16:21:14.221556 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-284jx"] Feb 17 16:21:14 crc kubenswrapper[4672]: I0217 16:21:14.332504 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2b98\" (UniqueName: \"kubernetes.io/projected/d7d4f70d-2ce7-493f-bfe4-53b8157d295c-kube-api-access-s2b98\") pod \"dnsmasq-dns-666b6646f7-284jx\" (UID: \"d7d4f70d-2ce7-493f-bfe4-53b8157d295c\") " pod="openstack/dnsmasq-dns-666b6646f7-284jx" Feb 17 16:21:14 crc kubenswrapper[4672]: I0217 16:21:14.332600 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7d4f70d-2ce7-493f-bfe4-53b8157d295c-config\") pod \"dnsmasq-dns-666b6646f7-284jx\" (UID: \"d7d4f70d-2ce7-493f-bfe4-53b8157d295c\") " pod="openstack/dnsmasq-dns-666b6646f7-284jx" Feb 17 16:21:14 crc kubenswrapper[4672]: I0217 16:21:14.332648 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7d4f70d-2ce7-493f-bfe4-53b8157d295c-dns-svc\") pod \"dnsmasq-dns-666b6646f7-284jx\" (UID: \"d7d4f70d-2ce7-493f-bfe4-53b8157d295c\") " pod="openstack/dnsmasq-dns-666b6646f7-284jx" Feb 17 16:21:14 crc kubenswrapper[4672]: I0217 16:21:14.434005 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7d4f70d-2ce7-493f-bfe4-53b8157d295c-config\") pod \"dnsmasq-dns-666b6646f7-284jx\" (UID: \"d7d4f70d-2ce7-493f-bfe4-53b8157d295c\") " pod="openstack/dnsmasq-dns-666b6646f7-284jx" Feb 17 16:21:14 crc kubenswrapper[4672]: I0217 16:21:14.434086 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/d7d4f70d-2ce7-493f-bfe4-53b8157d295c-dns-svc\") pod \"dnsmasq-dns-666b6646f7-284jx\" (UID: \"d7d4f70d-2ce7-493f-bfe4-53b8157d295c\") " pod="openstack/dnsmasq-dns-666b6646f7-284jx" Feb 17 16:21:14 crc kubenswrapper[4672]: I0217 16:21:14.434129 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2b98\" (UniqueName: \"kubernetes.io/projected/d7d4f70d-2ce7-493f-bfe4-53b8157d295c-kube-api-access-s2b98\") pod \"dnsmasq-dns-666b6646f7-284jx\" (UID: \"d7d4f70d-2ce7-493f-bfe4-53b8157d295c\") " pod="openstack/dnsmasq-dns-666b6646f7-284jx" Feb 17 16:21:14 crc kubenswrapper[4672]: I0217 16:21:14.435497 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7d4f70d-2ce7-493f-bfe4-53b8157d295c-config\") pod \"dnsmasq-dns-666b6646f7-284jx\" (UID: \"d7d4f70d-2ce7-493f-bfe4-53b8157d295c\") " pod="openstack/dnsmasq-dns-666b6646f7-284jx" Feb 17 16:21:14 crc kubenswrapper[4672]: I0217 16:21:14.435684 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7d4f70d-2ce7-493f-bfe4-53b8157d295c-dns-svc\") pod \"dnsmasq-dns-666b6646f7-284jx\" (UID: \"d7d4f70d-2ce7-493f-bfe4-53b8157d295c\") " pod="openstack/dnsmasq-dns-666b6646f7-284jx" Feb 17 16:21:14 crc kubenswrapper[4672]: I0217 16:21:14.460342 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2b98\" (UniqueName: \"kubernetes.io/projected/d7d4f70d-2ce7-493f-bfe4-53b8157d295c-kube-api-access-s2b98\") pod \"dnsmasq-dns-666b6646f7-284jx\" (UID: \"d7d4f70d-2ce7-493f-bfe4-53b8157d295c\") " pod="openstack/dnsmasq-dns-666b6646f7-284jx" Feb 17 16:21:14 crc kubenswrapper[4672]: I0217 16:21:14.505352 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pjffq"] Feb 17 16:21:14 crc kubenswrapper[4672]: I0217 16:21:14.530238 4672 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-284jx" Feb 17 16:21:14 crc kubenswrapper[4672]: I0217 16:21:14.543652 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-htwhx"] Feb 17 16:21:14 crc kubenswrapper[4672]: I0217 16:21:14.545637 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-htwhx" Feb 17 16:21:14 crc kubenswrapper[4672]: I0217 16:21:14.556046 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-htwhx"] Feb 17 16:21:14 crc kubenswrapper[4672]: I0217 16:21:14.737952 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d64c91f-b138-4fa6-bb58-9d31c4c65861-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-htwhx\" (UID: \"7d64c91f-b138-4fa6-bb58-9d31c4c65861\") " pod="openstack/dnsmasq-dns-57d769cc4f-htwhx" Feb 17 16:21:14 crc kubenswrapper[4672]: I0217 16:21:14.738328 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q2mv\" (UniqueName: \"kubernetes.io/projected/7d64c91f-b138-4fa6-bb58-9d31c4c65861-kube-api-access-2q2mv\") pod \"dnsmasq-dns-57d769cc4f-htwhx\" (UID: \"7d64c91f-b138-4fa6-bb58-9d31c4c65861\") " pod="openstack/dnsmasq-dns-57d769cc4f-htwhx" Feb 17 16:21:14 crc kubenswrapper[4672]: I0217 16:21:14.738360 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d64c91f-b138-4fa6-bb58-9d31c4c65861-config\") pod \"dnsmasq-dns-57d769cc4f-htwhx\" (UID: \"7d64c91f-b138-4fa6-bb58-9d31c4c65861\") " pod="openstack/dnsmasq-dns-57d769cc4f-htwhx" Feb 17 16:21:14 crc kubenswrapper[4672]: I0217 16:21:14.839638 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/7d64c91f-b138-4fa6-bb58-9d31c4c65861-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-htwhx\" (UID: \"7d64c91f-b138-4fa6-bb58-9d31c4c65861\") " pod="openstack/dnsmasq-dns-57d769cc4f-htwhx" Feb 17 16:21:14 crc kubenswrapper[4672]: I0217 16:21:14.839719 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q2mv\" (UniqueName: \"kubernetes.io/projected/7d64c91f-b138-4fa6-bb58-9d31c4c65861-kube-api-access-2q2mv\") pod \"dnsmasq-dns-57d769cc4f-htwhx\" (UID: \"7d64c91f-b138-4fa6-bb58-9d31c4c65861\") " pod="openstack/dnsmasq-dns-57d769cc4f-htwhx" Feb 17 16:21:14 crc kubenswrapper[4672]: I0217 16:21:14.839754 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d64c91f-b138-4fa6-bb58-9d31c4c65861-config\") pod \"dnsmasq-dns-57d769cc4f-htwhx\" (UID: \"7d64c91f-b138-4fa6-bb58-9d31c4c65861\") " pod="openstack/dnsmasq-dns-57d769cc4f-htwhx" Feb 17 16:21:14 crc kubenswrapper[4672]: I0217 16:21:14.840999 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d64c91f-b138-4fa6-bb58-9d31c4c65861-config\") pod \"dnsmasq-dns-57d769cc4f-htwhx\" (UID: \"7d64c91f-b138-4fa6-bb58-9d31c4c65861\") " pod="openstack/dnsmasq-dns-57d769cc4f-htwhx" Feb 17 16:21:14 crc kubenswrapper[4672]: I0217 16:21:14.841524 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d64c91f-b138-4fa6-bb58-9d31c4c65861-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-htwhx\" (UID: \"7d64c91f-b138-4fa6-bb58-9d31c4c65861\") " pod="openstack/dnsmasq-dns-57d769cc4f-htwhx" Feb 17 16:21:14 crc kubenswrapper[4672]: I0217 16:21:14.872276 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q2mv\" (UniqueName: \"kubernetes.io/projected/7d64c91f-b138-4fa6-bb58-9d31c4c65861-kube-api-access-2q2mv\") pod 
\"dnsmasq-dns-57d769cc4f-htwhx\" (UID: \"7d64c91f-b138-4fa6-bb58-9d31c4c65861\") " pod="openstack/dnsmasq-dns-57d769cc4f-htwhx" Feb 17 16:21:14 crc kubenswrapper[4672]: I0217 16:21:14.882624 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-htwhx" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.080600 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-284jx"] Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.360160 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.365558 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.372247 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.373115 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.373540 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.373728 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.373879 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.373931 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-kgd9v" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.375221 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 17 16:21:15 crc 
kubenswrapper[4672]: I0217 16:21:15.377712 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.383472 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-htwhx"] Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.557957 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3068e639-1b58-4971-bf3e-c321ff88289b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " pod="openstack/rabbitmq-server-0" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.558014 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3068e639-1b58-4971-bf3e-c321ff88289b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " pod="openstack/rabbitmq-server-0" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.558041 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2ba6f626-16b3-4af7-837d-88c617ee5155\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ba6f626-16b3-4af7-837d-88c617ee5155\") pod \"rabbitmq-server-0\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " pod="openstack/rabbitmq-server-0" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.558070 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3068e639-1b58-4971-bf3e-c321ff88289b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " pod="openstack/rabbitmq-server-0" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.558096 4672 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3068e639-1b58-4971-bf3e-c321ff88289b-config-data\") pod \"rabbitmq-server-0\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " pod="openstack/rabbitmq-server-0" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.558267 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3068e639-1b58-4971-bf3e-c321ff88289b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " pod="openstack/rabbitmq-server-0" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.558325 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkx6l\" (UniqueName: \"kubernetes.io/projected/3068e639-1b58-4971-bf3e-c321ff88289b-kube-api-access-mkx6l\") pod \"rabbitmq-server-0\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " pod="openstack/rabbitmq-server-0" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.558349 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3068e639-1b58-4971-bf3e-c321ff88289b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " pod="openstack/rabbitmq-server-0" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.558386 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3068e639-1b58-4971-bf3e-c321ff88289b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " pod="openstack/rabbitmq-server-0" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.558432 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3068e639-1b58-4971-bf3e-c321ff88289b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " pod="openstack/rabbitmq-server-0" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.558737 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3068e639-1b58-4971-bf3e-c321ff88289b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " pod="openstack/rabbitmq-server-0" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.660223 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3068e639-1b58-4971-bf3e-c321ff88289b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " pod="openstack/rabbitmq-server-0" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.661168 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3068e639-1b58-4971-bf3e-c321ff88289b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " pod="openstack/rabbitmq-server-0" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.661260 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3068e639-1b58-4971-bf3e-c321ff88289b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " pod="openstack/rabbitmq-server-0" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.661281 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3068e639-1b58-4971-bf3e-c321ff88289b-erlang-cookie-secret\") pod 
\"rabbitmq-server-0\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " pod="openstack/rabbitmq-server-0" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.661313 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2ba6f626-16b3-4af7-837d-88c617ee5155\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ba6f626-16b3-4af7-837d-88c617ee5155\") pod \"rabbitmq-server-0\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " pod="openstack/rabbitmq-server-0" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.661348 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3068e639-1b58-4971-bf3e-c321ff88289b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " pod="openstack/rabbitmq-server-0" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.661381 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3068e639-1b58-4971-bf3e-c321ff88289b-config-data\") pod \"rabbitmq-server-0\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " pod="openstack/rabbitmq-server-0" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.661404 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3068e639-1b58-4971-bf3e-c321ff88289b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " pod="openstack/rabbitmq-server-0" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.661426 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkx6l\" (UniqueName: \"kubernetes.io/projected/3068e639-1b58-4971-bf3e-c321ff88289b-kube-api-access-mkx6l\") pod \"rabbitmq-server-0\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " 
pod="openstack/rabbitmq-server-0" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.661447 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3068e639-1b58-4971-bf3e-c321ff88289b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " pod="openstack/rabbitmq-server-0" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.661465 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3068e639-1b58-4971-bf3e-c321ff88289b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " pod="openstack/rabbitmq-server-0" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.665312 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3068e639-1b58-4971-bf3e-c321ff88289b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " pod="openstack/rabbitmq-server-0" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.666103 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3068e639-1b58-4971-bf3e-c321ff88289b-config-data\") pod \"rabbitmq-server-0\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " pod="openstack/rabbitmq-server-0" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.666355 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3068e639-1b58-4971-bf3e-c321ff88289b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " pod="openstack/rabbitmq-server-0" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.667791 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" 
(UniqueName: \"kubernetes.io/empty-dir/3068e639-1b58-4971-bf3e-c321ff88289b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " pod="openstack/rabbitmq-server-0" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.668852 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3068e639-1b58-4971-bf3e-c321ff88289b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " pod="openstack/rabbitmq-server-0" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.669785 4672 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.669824 4672 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2ba6f626-16b3-4af7-837d-88c617ee5155\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ba6f626-16b3-4af7-837d-88c617ee5155\") pod \"rabbitmq-server-0\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/346e7e88f6122aec89ca532feb4a65d3c17e46d11b652f2eb4c6a257471d0b1f/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.676650 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3068e639-1b58-4971-bf3e-c321ff88289b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " pod="openstack/rabbitmq-server-0" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.688696 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3068e639-1b58-4971-bf3e-c321ff88289b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: 
\"3068e639-1b58-4971-bf3e-c321ff88289b\") " pod="openstack/rabbitmq-server-0" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.693185 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkx6l\" (UniqueName: \"kubernetes.io/projected/3068e639-1b58-4971-bf3e-c321ff88289b-kube-api-access-mkx6l\") pod \"rabbitmq-server-0\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " pod="openstack/rabbitmq-server-0" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.693842 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.696972 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3068e639-1b58-4971-bf3e-c321ff88289b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " pod="openstack/rabbitmq-server-0" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.697393 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.700313 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3068e639-1b58-4971-bf3e-c321ff88289b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " pod="openstack/rabbitmq-server-0" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.702481 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.702710 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.702875 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.705803 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.705845 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.706081 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.714929 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-bp98c" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.729873 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.774856 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2ba6f626-16b3-4af7-837d-88c617ee5155\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ba6f626-16b3-4af7-837d-88c617ee5155\") pod \"rabbitmq-server-0\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " pod="openstack/rabbitmq-server-0" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.849983 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-284jx" event={"ID":"d7d4f70d-2ce7-493f-bfe4-53b8157d295c","Type":"ContainerStarted","Data":"b0f8b8d0706ba0a5a03b4e0c0450f10db634483f4c442ec99b0c2c3131c52fa5"} Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.856540 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-htwhx" event={"ID":"7d64c91f-b138-4fa6-bb58-9d31c4c65861","Type":"ContainerStarted","Data":"bfe77f867a4d404ec12bc0bb2d8289c90e8b5f6b26b5322d1403ced0db32e3f8"} Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.866674 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.866795 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.866896 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.866966 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.866991 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.867024 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8943599e-c5dc-4d0c-945c-12fd7b89042b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8943599e-c5dc-4d0c-945c-12fd7b89042b\") pod \"rabbitmq-cell1-server-0\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.867085 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.867111 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.867152 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.867569 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsm65\" (UniqueName: \"kubernetes.io/projected/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-kube-api-access-lsm65\") pod \"rabbitmq-cell1-server-0\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.867710 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.969175 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8943599e-c5dc-4d0c-945c-12fd7b89042b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8943599e-c5dc-4d0c-945c-12fd7b89042b\") pod \"rabbitmq-cell1-server-0\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.969228 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.969252 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.969306 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.969362 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsm65\" (UniqueName: \"kubernetes.io/projected/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-kube-api-access-lsm65\") pod \"rabbitmq-cell1-server-0\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.969385 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.969416 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.969465 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.969497 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.969550 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.969572 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.970538 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.970912 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.970924 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.971052 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.971097 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.975098 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.977063 4672 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.977104 4672 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8943599e-c5dc-4d0c-945c-12fd7b89042b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8943599e-c5dc-4d0c-945c-12fd7b89042b\") pod \"rabbitmq-cell1-server-0\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/70972079a6ad7ad29c2dd1359cd5a4462575bbc49aff6da85b5dda1e965af91e/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.986790 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsm65\" (UniqueName: \"kubernetes.io/projected/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-kube-api-access-lsm65\") pod \"rabbitmq-cell1-server-0\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.987310 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.989306 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.991557 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:21:15 crc kubenswrapper[4672]: I0217 16:21:15.994197 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 17 16:21:16 crc kubenswrapper[4672]: I0217 16:21:16.020097 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8943599e-c5dc-4d0c-945c-12fd7b89042b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8943599e-c5dc-4d0c-945c-12fd7b89042b\") pod \"rabbitmq-cell1-server-0\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:21:16 crc kubenswrapper[4672]: I0217 16:21:16.078879 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:21:16 crc kubenswrapper[4672]: I0217 16:21:16.819093 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Feb 17 16:21:16 crc kubenswrapper[4672]: I0217 16:21:16.820218 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 17 16:21:16 crc kubenswrapper[4672]: I0217 16:21:16.823188 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-t28nj"
Feb 17 16:21:16 crc kubenswrapper[4672]: I0217 16:21:16.827371 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Feb 17 16:21:16 crc kubenswrapper[4672]: I0217 16:21:16.827475 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Feb 17 16:21:16 crc kubenswrapper[4672]: I0217 16:21:16.827666 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Feb 17 16:21:16 crc kubenswrapper[4672]: I0217 16:21:16.830040 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Feb 17 16:21:16 crc kubenswrapper[4672]: I0217 16:21:16.833991 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 17 16:21:16 crc kubenswrapper[4672]: I0217 16:21:16.985975 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/322bd505-c790-49c2-8ffa-0cb97cf40d7c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"322bd505-c790-49c2-8ffa-0cb97cf40d7c\") " pod="openstack/openstack-galera-0"
Feb 17 16:21:16 crc kubenswrapper[4672]: I0217 16:21:16.986351 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/322bd505-c790-49c2-8ffa-0cb97cf40d7c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"322bd505-c790-49c2-8ffa-0cb97cf40d7c\") " pod="openstack/openstack-galera-0"
Feb 17 16:21:16 crc kubenswrapper[4672]: I0217 16:21:16.986405 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/322bd505-c790-49c2-8ffa-0cb97cf40d7c-kolla-config\") pod \"openstack-galera-0\" (UID: \"322bd505-c790-49c2-8ffa-0cb97cf40d7c\") " pod="openstack/openstack-galera-0"
Feb 17 16:21:16 crc kubenswrapper[4672]: I0217 16:21:16.986531 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz2jl\" (UniqueName: \"kubernetes.io/projected/322bd505-c790-49c2-8ffa-0cb97cf40d7c-kube-api-access-sz2jl\") pod \"openstack-galera-0\" (UID: \"322bd505-c790-49c2-8ffa-0cb97cf40d7c\") " pod="openstack/openstack-galera-0"
Feb 17 16:21:16 crc kubenswrapper[4672]: I0217 16:21:16.986625 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-913dd34f-fbec-4d44-8f4b-07b100523444\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-913dd34f-fbec-4d44-8f4b-07b100523444\") pod \"openstack-galera-0\" (UID: \"322bd505-c790-49c2-8ffa-0cb97cf40d7c\") " pod="openstack/openstack-galera-0"
Feb 17 16:21:16 crc kubenswrapper[4672]: I0217 16:21:16.986713 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/322bd505-c790-49c2-8ffa-0cb97cf40d7c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"322bd505-c790-49c2-8ffa-0cb97cf40d7c\") " pod="openstack/openstack-galera-0"
Feb 17 16:21:16 crc kubenswrapper[4672]: I0217 16:21:16.986735 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/322bd505-c790-49c2-8ffa-0cb97cf40d7c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"322bd505-c790-49c2-8ffa-0cb97cf40d7c\") " pod="openstack/openstack-galera-0"
Feb 17 16:21:16 crc kubenswrapper[4672]: I0217 16:21:16.986760 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/322bd505-c790-49c2-8ffa-0cb97cf40d7c-config-data-default\") pod \"openstack-galera-0\" (UID: \"322bd505-c790-49c2-8ffa-0cb97cf40d7c\") " pod="openstack/openstack-galera-0"
Feb 17 16:21:17 crc kubenswrapper[4672]: I0217 16:21:17.087992 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/322bd505-c790-49c2-8ffa-0cb97cf40d7c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"322bd505-c790-49c2-8ffa-0cb97cf40d7c\") " pod="openstack/openstack-galera-0"
Feb 17 16:21:17 crc kubenswrapper[4672]: I0217 16:21:17.088052 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/322bd505-c790-49c2-8ffa-0cb97cf40d7c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"322bd505-c790-49c2-8ffa-0cb97cf40d7c\") " pod="openstack/openstack-galera-0"
Feb 17 16:21:17 crc kubenswrapper[4672]: I0217 16:21:17.088076 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/322bd505-c790-49c2-8ffa-0cb97cf40d7c-kolla-config\") pod \"openstack-galera-0\" (UID: \"322bd505-c790-49c2-8ffa-0cb97cf40d7c\") " pod="openstack/openstack-galera-0"
Feb 17 16:21:17 crc kubenswrapper[4672]: I0217 16:21:17.088108 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz2jl\" (UniqueName: \"kubernetes.io/projected/322bd505-c790-49c2-8ffa-0cb97cf40d7c-kube-api-access-sz2jl\") pod \"openstack-galera-0\" (UID: \"322bd505-c790-49c2-8ffa-0cb97cf40d7c\") " pod="openstack/openstack-galera-0"
Feb 17 16:21:17 crc kubenswrapper[4672]: I0217 16:21:17.088140 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-913dd34f-fbec-4d44-8f4b-07b100523444\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-913dd34f-fbec-4d44-8f4b-07b100523444\") pod \"openstack-galera-0\" (UID: \"322bd505-c790-49c2-8ffa-0cb97cf40d7c\") " pod="openstack/openstack-galera-0"
Feb 17 16:21:17 crc kubenswrapper[4672]: I0217 16:21:17.088175 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/322bd505-c790-49c2-8ffa-0cb97cf40d7c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"322bd505-c790-49c2-8ffa-0cb97cf40d7c\") " pod="openstack/openstack-galera-0"
Feb 17 16:21:17 crc kubenswrapper[4672]: I0217 16:21:17.088194 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/322bd505-c790-49c2-8ffa-0cb97cf40d7c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"322bd505-c790-49c2-8ffa-0cb97cf40d7c\") " pod="openstack/openstack-galera-0"
Feb 17 16:21:17 crc kubenswrapper[4672]: I0217 16:21:17.088215 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/322bd505-c790-49c2-8ffa-0cb97cf40d7c-config-data-default\") pod \"openstack-galera-0\" (UID: \"322bd505-c790-49c2-8ffa-0cb97cf40d7c\") " pod="openstack/openstack-galera-0"
Feb 17 16:21:17 crc kubenswrapper[4672]: I0217 16:21:17.089129 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/322bd505-c790-49c2-8ffa-0cb97cf40d7c-kolla-config\") pod \"openstack-galera-0\" (UID: \"322bd505-c790-49c2-8ffa-0cb97cf40d7c\") " pod="openstack/openstack-galera-0"
Feb 17 16:21:17 crc kubenswrapper[4672]: I0217 16:21:17.089361 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/322bd505-c790-49c2-8ffa-0cb97cf40d7c-config-data-default\") pod \"openstack-galera-0\" (UID: \"322bd505-c790-49c2-8ffa-0cb97cf40d7c\") " pod="openstack/openstack-galera-0"
Feb 17 16:21:17 crc kubenswrapper[4672]: I0217 16:21:17.089746 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/322bd505-c790-49c2-8ffa-0cb97cf40d7c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"322bd505-c790-49c2-8ffa-0cb97cf40d7c\") " pod="openstack/openstack-galera-0"
Feb 17 16:21:17 crc kubenswrapper[4672]: I0217 16:21:17.089924 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/322bd505-c790-49c2-8ffa-0cb97cf40d7c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"322bd505-c790-49c2-8ffa-0cb97cf40d7c\") " pod="openstack/openstack-galera-0"
Feb 17 16:21:17 crc kubenswrapper[4672]: I0217 16:21:17.093341 4672 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 17 16:21:17 crc kubenswrapper[4672]: I0217 16:21:17.093363 4672 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-913dd34f-fbec-4d44-8f4b-07b100523444\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-913dd34f-fbec-4d44-8f4b-07b100523444\") pod \"openstack-galera-0\" (UID: \"322bd505-c790-49c2-8ffa-0cb97cf40d7c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7bb9586afb47ad6effff632da355aabcd1ab569d7d745ffa34895d6f4daa8516/globalmount\"" pod="openstack/openstack-galera-0"
Feb 17 16:21:17 crc kubenswrapper[4672]: I0217 16:21:17.100317 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/322bd505-c790-49c2-8ffa-0cb97cf40d7c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"322bd505-c790-49c2-8ffa-0cb97cf40d7c\") " pod="openstack/openstack-galera-0"
Feb 17 16:21:17 crc kubenswrapper[4672]: I0217 16:21:17.101752 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/322bd505-c790-49c2-8ffa-0cb97cf40d7c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"322bd505-c790-49c2-8ffa-0cb97cf40d7c\") " pod="openstack/openstack-galera-0"
Feb 17 16:21:17 crc kubenswrapper[4672]: I0217 16:21:17.107380 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz2jl\" (UniqueName: \"kubernetes.io/projected/322bd505-c790-49c2-8ffa-0cb97cf40d7c-kube-api-access-sz2jl\") pod \"openstack-galera-0\" (UID: \"322bd505-c790-49c2-8ffa-0cb97cf40d7c\") " pod="openstack/openstack-galera-0"
Feb 17 16:21:17 crc kubenswrapper[4672]: I0217 16:21:17.125397 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-913dd34f-fbec-4d44-8f4b-07b100523444\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-913dd34f-fbec-4d44-8f4b-07b100523444\") pod \"openstack-galera-0\" (UID: \"322bd505-c790-49c2-8ffa-0cb97cf40d7c\") " pod="openstack/openstack-galera-0"
Feb 17 16:21:17 crc kubenswrapper[4672]: I0217 16:21:17.198708 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.307063 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.308917 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.314050 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.314591 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.314835 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-fbkd6"
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.314981 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.339072 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.419335 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/164bb24e-646b-4404-92f5-912254ac1421-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"164bb24e-646b-4404-92f5-912254ac1421\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.419432 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/164bb24e-646b-4404-92f5-912254ac1421-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"164bb24e-646b-4404-92f5-912254ac1421\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.419482 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/164bb24e-646b-4404-92f5-912254ac1421-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"164bb24e-646b-4404-92f5-912254ac1421\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.419562 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/164bb24e-646b-4404-92f5-912254ac1421-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"164bb24e-646b-4404-92f5-912254ac1421\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.419584 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6e0bab83-9b3d-44fc-b56e-85d1665822f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6e0bab83-9b3d-44fc-b56e-85d1665822f3\") pod \"openstack-cell1-galera-0\" (UID: \"164bb24e-646b-4404-92f5-912254ac1421\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.419609 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glk4w\" (UniqueName: \"kubernetes.io/projected/164bb24e-646b-4404-92f5-912254ac1421-kube-api-access-glk4w\") pod \"openstack-cell1-galera-0\" (UID: \"164bb24e-646b-4404-92f5-912254ac1421\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.419638 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/164bb24e-646b-4404-92f5-912254ac1421-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"164bb24e-646b-4404-92f5-912254ac1421\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.419691 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/164bb24e-646b-4404-92f5-912254ac1421-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"164bb24e-646b-4404-92f5-912254ac1421\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.521180 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/164bb24e-646b-4404-92f5-912254ac1421-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"164bb24e-646b-4404-92f5-912254ac1421\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.521237 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6e0bab83-9b3d-44fc-b56e-85d1665822f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6e0bab83-9b3d-44fc-b56e-85d1665822f3\") pod \"openstack-cell1-galera-0\" (UID: \"164bb24e-646b-4404-92f5-912254ac1421\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.521279 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glk4w\" (UniqueName: \"kubernetes.io/projected/164bb24e-646b-4404-92f5-912254ac1421-kube-api-access-glk4w\") pod \"openstack-cell1-galera-0\" (UID: \"164bb24e-646b-4404-92f5-912254ac1421\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.521324 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/164bb24e-646b-4404-92f5-912254ac1421-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"164bb24e-646b-4404-92f5-912254ac1421\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.521348 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/164bb24e-646b-4404-92f5-912254ac1421-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"164bb24e-646b-4404-92f5-912254ac1421\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.521422 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/164bb24e-646b-4404-92f5-912254ac1421-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"164bb24e-646b-4404-92f5-912254ac1421\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.521458 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/164bb24e-646b-4404-92f5-912254ac1421-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"164bb24e-646b-4404-92f5-912254ac1421\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.521528 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/164bb24e-646b-4404-92f5-912254ac1421-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"164bb24e-646b-4404-92f5-912254ac1421\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.522161 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/164bb24e-646b-4404-92f5-912254ac1421-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"164bb24e-646b-4404-92f5-912254ac1421\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.522794 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/164bb24e-646b-4404-92f5-912254ac1421-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"164bb24e-646b-4404-92f5-912254ac1421\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.522843 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/164bb24e-646b-4404-92f5-912254ac1421-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"164bb24e-646b-4404-92f5-912254ac1421\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.523883 4672 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.523914 4672 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6e0bab83-9b3d-44fc-b56e-85d1665822f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6e0bab83-9b3d-44fc-b56e-85d1665822f3\") pod \"openstack-cell1-galera-0\" (UID: \"164bb24e-646b-4404-92f5-912254ac1421\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/890b3bc90054704ce642d1e700dd4c01fcddcffe9e182f77b6e48f843c9e9c8b/globalmount\"" pod="openstack/openstack-cell1-galera-0"
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.525661 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/164bb24e-646b-4404-92f5-912254ac1421-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"164bb24e-646b-4404-92f5-912254ac1421\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.529214 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/164bb24e-646b-4404-92f5-912254ac1421-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"164bb24e-646b-4404-92f5-912254ac1421\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.538877 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/164bb24e-646b-4404-92f5-912254ac1421-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"164bb24e-646b-4404-92f5-912254ac1421\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.544329 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glk4w\" (UniqueName: \"kubernetes.io/projected/164bb24e-646b-4404-92f5-912254ac1421-kube-api-access-glk4w\") pod \"openstack-cell1-galera-0\" (UID: \"164bb24e-646b-4404-92f5-912254ac1421\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.573364 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.574581 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.577209 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.577480 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.577706 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-j5gm7"
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.600133 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6e0bab83-9b3d-44fc-b56e-85d1665822f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6e0bab83-9b3d-44fc-b56e-85d1665822f3\") pod \"openstack-cell1-galera-0\" (UID: \"164bb24e-646b-4404-92f5-912254ac1421\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.604266 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.643939 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.725707 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/abbf2ccd-ce83-432b-9e9d-7f39d2483aee-memcached-tls-certs\") pod \"memcached-0\" (UID: \"abbf2ccd-ce83-432b-9e9d-7f39d2483aee\") " pod="openstack/memcached-0"
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.725742 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abbf2ccd-ce83-432b-9e9d-7f39d2483aee-combined-ca-bundle\") pod \"memcached-0\" (UID: \"abbf2ccd-ce83-432b-9e9d-7f39d2483aee\") " pod="openstack/memcached-0"
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.725813 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/abbf2ccd-ce83-432b-9e9d-7f39d2483aee-config-data\") pod \"memcached-0\" (UID: \"abbf2ccd-ce83-432b-9e9d-7f39d2483aee\") " pod="openstack/memcached-0"
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.725836 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khtst\" (UniqueName: \"kubernetes.io/projected/abbf2ccd-ce83-432b-9e9d-7f39d2483aee-kube-api-access-khtst\") pod \"memcached-0\" (UID: \"abbf2ccd-ce83-432b-9e9d-7f39d2483aee\") " pod="openstack/memcached-0"
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.725885 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/abbf2ccd-ce83-432b-9e9d-7f39d2483aee-kolla-config\") pod \"memcached-0\" (UID: \"abbf2ccd-ce83-432b-9e9d-7f39d2483aee\") " pod="openstack/memcached-0"
Feb 17 16:21:18 crc kubenswrapper[4672]: I0217
16:21:18.827590 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/abbf2ccd-ce83-432b-9e9d-7f39d2483aee-kolla-config\") pod \"memcached-0\" (UID: \"abbf2ccd-ce83-432b-9e9d-7f39d2483aee\") " pod="openstack/memcached-0" Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.827666 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/abbf2ccd-ce83-432b-9e9d-7f39d2483aee-memcached-tls-certs\") pod \"memcached-0\" (UID: \"abbf2ccd-ce83-432b-9e9d-7f39d2483aee\") " pod="openstack/memcached-0" Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.827683 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abbf2ccd-ce83-432b-9e9d-7f39d2483aee-combined-ca-bundle\") pod \"memcached-0\" (UID: \"abbf2ccd-ce83-432b-9e9d-7f39d2483aee\") " pod="openstack/memcached-0" Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.827735 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/abbf2ccd-ce83-432b-9e9d-7f39d2483aee-config-data\") pod \"memcached-0\" (UID: \"abbf2ccd-ce83-432b-9e9d-7f39d2483aee\") " pod="openstack/memcached-0" Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.827754 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khtst\" (UniqueName: \"kubernetes.io/projected/abbf2ccd-ce83-432b-9e9d-7f39d2483aee-kube-api-access-khtst\") pod \"memcached-0\" (UID: \"abbf2ccd-ce83-432b-9e9d-7f39d2483aee\") " pod="openstack/memcached-0" Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.828609 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/abbf2ccd-ce83-432b-9e9d-7f39d2483aee-kolla-config\") pod 
\"memcached-0\" (UID: \"abbf2ccd-ce83-432b-9e9d-7f39d2483aee\") " pod="openstack/memcached-0" Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.828673 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/abbf2ccd-ce83-432b-9e9d-7f39d2483aee-config-data\") pod \"memcached-0\" (UID: \"abbf2ccd-ce83-432b-9e9d-7f39d2483aee\") " pod="openstack/memcached-0" Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.840527 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/abbf2ccd-ce83-432b-9e9d-7f39d2483aee-memcached-tls-certs\") pod \"memcached-0\" (UID: \"abbf2ccd-ce83-432b-9e9d-7f39d2483aee\") " pod="openstack/memcached-0" Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.844290 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abbf2ccd-ce83-432b-9e9d-7f39d2483aee-combined-ca-bundle\") pod \"memcached-0\" (UID: \"abbf2ccd-ce83-432b-9e9d-7f39d2483aee\") " pod="openstack/memcached-0" Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.849374 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khtst\" (UniqueName: \"kubernetes.io/projected/abbf2ccd-ce83-432b-9e9d-7f39d2483aee-kube-api-access-khtst\") pod \"memcached-0\" (UID: \"abbf2ccd-ce83-432b-9e9d-7f39d2483aee\") " pod="openstack/memcached-0" Feb 17 16:21:18 crc kubenswrapper[4672]: I0217 16:21:18.921840 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 17 16:21:21 crc kubenswrapper[4672]: I0217 16:21:21.133148 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 16:21:21 crc kubenswrapper[4672]: I0217 16:21:21.135464 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 16:21:21 crc kubenswrapper[4672]: I0217 16:21:21.137850 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-4wsz2" Feb 17 16:21:21 crc kubenswrapper[4672]: I0217 16:21:21.146170 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 16:21:21 crc kubenswrapper[4672]: I0217 16:21:21.267037 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrvq5\" (UniqueName: \"kubernetes.io/projected/0494473e-5e65-47bf-b3a3-6d8c7b27243f-kube-api-access-wrvq5\") pod \"kube-state-metrics-0\" (UID: \"0494473e-5e65-47bf-b3a3-6d8c7b27243f\") " pod="openstack/kube-state-metrics-0" Feb 17 16:21:21 crc kubenswrapper[4672]: I0217 16:21:21.368412 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrvq5\" (UniqueName: \"kubernetes.io/projected/0494473e-5e65-47bf-b3a3-6d8c7b27243f-kube-api-access-wrvq5\") pod \"kube-state-metrics-0\" (UID: \"0494473e-5e65-47bf-b3a3-6d8c7b27243f\") " pod="openstack/kube-state-metrics-0" Feb 17 16:21:21 crc kubenswrapper[4672]: I0217 16:21:21.390038 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrvq5\" (UniqueName: \"kubernetes.io/projected/0494473e-5e65-47bf-b3a3-6d8c7b27243f-kube-api-access-wrvq5\") pod \"kube-state-metrics-0\" (UID: \"0494473e-5e65-47bf-b3a3-6d8c7b27243f\") " pod="openstack/kube-state-metrics-0" Feb 17 16:21:21 crc kubenswrapper[4672]: I0217 16:21:21.454728 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 16:21:21 crc kubenswrapper[4672]: I0217 16:21:21.810227 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 17 16:21:21 crc kubenswrapper[4672]: I0217 16:21:21.812248 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Feb 17 16:21:21 crc kubenswrapper[4672]: I0217 16:21:21.815402 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Feb 17 16:21:21 crc kubenswrapper[4672]: I0217 16:21:21.815500 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Feb 17 16:21:21 crc kubenswrapper[4672]: I0217 16:21:21.815677 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-xf9fz" Feb 17 16:21:21 crc kubenswrapper[4672]: I0217 16:21:21.816003 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Feb 17 16:21:21 crc kubenswrapper[4672]: I0217 16:21:21.816851 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Feb 17 16:21:21 crc kubenswrapper[4672]: I0217 16:21:21.832499 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 17 16:21:21 crc kubenswrapper[4672]: I0217 16:21:21.876586 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/91c936b2-eda8-4075-bcec-4c56d31cda1d-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"91c936b2-eda8-4075-bcec-4c56d31cda1d\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:21:21 crc kubenswrapper[4672]: I0217 16:21:21.876636 4672 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/91c936b2-eda8-4075-bcec-4c56d31cda1d-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"91c936b2-eda8-4075-bcec-4c56d31cda1d\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:21:21 crc kubenswrapper[4672]: I0217 16:21:21.876658 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/91c936b2-eda8-4075-bcec-4c56d31cda1d-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"91c936b2-eda8-4075-bcec-4c56d31cda1d\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:21:21 crc kubenswrapper[4672]: I0217 16:21:21.876690 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/91c936b2-eda8-4075-bcec-4c56d31cda1d-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"91c936b2-eda8-4075-bcec-4c56d31cda1d\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:21:21 crc kubenswrapper[4672]: I0217 16:21:21.876748 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/91c936b2-eda8-4075-bcec-4c56d31cda1d-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"91c936b2-eda8-4075-bcec-4c56d31cda1d\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:21:21 crc kubenswrapper[4672]: I0217 16:21:21.876768 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/91c936b2-eda8-4075-bcec-4c56d31cda1d-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"91c936b2-eda8-4075-bcec-4c56d31cda1d\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:21:21 crc 
kubenswrapper[4672]: I0217 16:21:21.876807 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xbjw\" (UniqueName: \"kubernetes.io/projected/91c936b2-eda8-4075-bcec-4c56d31cda1d-kube-api-access-5xbjw\") pod \"alertmanager-metric-storage-0\" (UID: \"91c936b2-eda8-4075-bcec-4c56d31cda1d\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:21:21 crc kubenswrapper[4672]: I0217 16:21:21.978552 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/91c936b2-eda8-4075-bcec-4c56d31cda1d-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"91c936b2-eda8-4075-bcec-4c56d31cda1d\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:21:21 crc kubenswrapper[4672]: I0217 16:21:21.978745 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/91c936b2-eda8-4075-bcec-4c56d31cda1d-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"91c936b2-eda8-4075-bcec-4c56d31cda1d\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:21:21 crc kubenswrapper[4672]: I0217 16:21:21.978880 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xbjw\" (UniqueName: \"kubernetes.io/projected/91c936b2-eda8-4075-bcec-4c56d31cda1d-kube-api-access-5xbjw\") pod \"alertmanager-metric-storage-0\" (UID: \"91c936b2-eda8-4075-bcec-4c56d31cda1d\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:21:21 crc kubenswrapper[4672]: I0217 16:21:21.979019 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/91c936b2-eda8-4075-bcec-4c56d31cda1d-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"91c936b2-eda8-4075-bcec-4c56d31cda1d\") " pod="openstack/alertmanager-metric-storage-0" Feb 
17 16:21:21 crc kubenswrapper[4672]: I0217 16:21:21.979124 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/91c936b2-eda8-4075-bcec-4c56d31cda1d-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"91c936b2-eda8-4075-bcec-4c56d31cda1d\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:21:21 crc kubenswrapper[4672]: I0217 16:21:21.979220 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/91c936b2-eda8-4075-bcec-4c56d31cda1d-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"91c936b2-eda8-4075-bcec-4c56d31cda1d\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:21:21 crc kubenswrapper[4672]: I0217 16:21:21.979336 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/91c936b2-eda8-4075-bcec-4c56d31cda1d-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"91c936b2-eda8-4075-bcec-4c56d31cda1d\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:21:21 crc kubenswrapper[4672]: I0217 16:21:21.981372 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/91c936b2-eda8-4075-bcec-4c56d31cda1d-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"91c936b2-eda8-4075-bcec-4c56d31cda1d\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:21:21 crc kubenswrapper[4672]: I0217 16:21:21.985300 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/91c936b2-eda8-4075-bcec-4c56d31cda1d-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"91c936b2-eda8-4075-bcec-4c56d31cda1d\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:21:21 crc kubenswrapper[4672]: 
I0217 16:21:21.987991 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/91c936b2-eda8-4075-bcec-4c56d31cda1d-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"91c936b2-eda8-4075-bcec-4c56d31cda1d\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:21:21 crc kubenswrapper[4672]: I0217 16:21:21.988839 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/91c936b2-eda8-4075-bcec-4c56d31cda1d-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"91c936b2-eda8-4075-bcec-4c56d31cda1d\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:21:21 crc kubenswrapper[4672]: I0217 16:21:21.991934 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/91c936b2-eda8-4075-bcec-4c56d31cda1d-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"91c936b2-eda8-4075-bcec-4c56d31cda1d\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:21:21 crc kubenswrapper[4672]: I0217 16:21:21.992551 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/91c936b2-eda8-4075-bcec-4c56d31cda1d-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"91c936b2-eda8-4075-bcec-4c56d31cda1d\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:21:21 crc kubenswrapper[4672]: I0217 16:21:21.998191 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xbjw\" (UniqueName: \"kubernetes.io/projected/91c936b2-eda8-4075-bcec-4c56d31cda1d-kube-api-access-5xbjw\") pod \"alertmanager-metric-storage-0\" (UID: \"91c936b2-eda8-4075-bcec-4c56d31cda1d\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.130541 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.453432 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.455909 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.460721 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.460743 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.460912 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.461003 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.461004 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.461107 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.461185 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.468109 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-49mpb" Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.478990 4672 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.590291 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/878cc257-0a03-44ea-ae70-356195dc5427-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"878cc257-0a03-44ea-ae70-356195dc5427\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.590612 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/878cc257-0a03-44ea-ae70-356195dc5427-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"878cc257-0a03-44ea-ae70-356195dc5427\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.590663 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2d40fcb6-031f-4b02-8b6e-a6b32aaabc38\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d40fcb6-031f-4b02-8b6e-a6b32aaabc38\") pod \"prometheus-metric-storage-0\" (UID: \"878cc257-0a03-44ea-ae70-356195dc5427\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.590687 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/878cc257-0a03-44ea-ae70-356195dc5427-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"878cc257-0a03-44ea-ae70-356195dc5427\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.590733 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/878cc257-0a03-44ea-ae70-356195dc5427-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"878cc257-0a03-44ea-ae70-356195dc5427\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.590759 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgzb2\" (UniqueName: \"kubernetes.io/projected/878cc257-0a03-44ea-ae70-356195dc5427-kube-api-access-kgzb2\") pod \"prometheus-metric-storage-0\" (UID: \"878cc257-0a03-44ea-ae70-356195dc5427\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.590798 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/878cc257-0a03-44ea-ae70-356195dc5427-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"878cc257-0a03-44ea-ae70-356195dc5427\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.590843 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/878cc257-0a03-44ea-ae70-356195dc5427-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"878cc257-0a03-44ea-ae70-356195dc5427\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.590872 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/878cc257-0a03-44ea-ae70-356195dc5427-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"878cc257-0a03-44ea-ae70-356195dc5427\") " 
pod="openstack/prometheus-metric-storage-0" Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.590906 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/878cc257-0a03-44ea-ae70-356195dc5427-config\") pod \"prometheus-metric-storage-0\" (UID: \"878cc257-0a03-44ea-ae70-356195dc5427\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.692262 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/878cc257-0a03-44ea-ae70-356195dc5427-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"878cc257-0a03-44ea-ae70-356195dc5427\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.692305 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/878cc257-0a03-44ea-ae70-356195dc5427-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"878cc257-0a03-44ea-ae70-356195dc5427\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.692344 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2d40fcb6-031f-4b02-8b6e-a6b32aaabc38\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d40fcb6-031f-4b02-8b6e-a6b32aaabc38\") pod \"prometheus-metric-storage-0\" (UID: \"878cc257-0a03-44ea-ae70-356195dc5427\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.692362 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/878cc257-0a03-44ea-ae70-356195dc5427-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"878cc257-0a03-44ea-ae70-356195dc5427\") " pod="openstack/prometheus-metric-storage-0" Feb 
17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.692398 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/878cc257-0a03-44ea-ae70-356195dc5427-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"878cc257-0a03-44ea-ae70-356195dc5427\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.692416 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgzb2\" (UniqueName: \"kubernetes.io/projected/878cc257-0a03-44ea-ae70-356195dc5427-kube-api-access-kgzb2\") pod \"prometheus-metric-storage-0\" (UID: \"878cc257-0a03-44ea-ae70-356195dc5427\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.692448 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/878cc257-0a03-44ea-ae70-356195dc5427-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"878cc257-0a03-44ea-ae70-356195dc5427\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.692480 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/878cc257-0a03-44ea-ae70-356195dc5427-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"878cc257-0a03-44ea-ae70-356195dc5427\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.692498 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/878cc257-0a03-44ea-ae70-356195dc5427-prometheus-metric-storage-rulefiles-0\") pod 
\"prometheus-metric-storage-0\" (UID: \"878cc257-0a03-44ea-ae70-356195dc5427\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.692542 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/878cc257-0a03-44ea-ae70-356195dc5427-config\") pod \"prometheus-metric-storage-0\" (UID: \"878cc257-0a03-44ea-ae70-356195dc5427\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.694159 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/878cc257-0a03-44ea-ae70-356195dc5427-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"878cc257-0a03-44ea-ae70-356195dc5427\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.696905 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/878cc257-0a03-44ea-ae70-356195dc5427-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"878cc257-0a03-44ea-ae70-356195dc5427\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.697666 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/878cc257-0a03-44ea-ae70-356195dc5427-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"878cc257-0a03-44ea-ae70-356195dc5427\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.698003 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/878cc257-0a03-44ea-ae70-356195dc5427-config\") 
pod \"prometheus-metric-storage-0\" (UID: \"878cc257-0a03-44ea-ae70-356195dc5427\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.698659 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/878cc257-0a03-44ea-ae70-356195dc5427-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"878cc257-0a03-44ea-ae70-356195dc5427\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.699258 4672 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.699294 4672 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2d40fcb6-031f-4b02-8b6e-a6b32aaabc38\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d40fcb6-031f-4b02-8b6e-a6b32aaabc38\") pod \"prometheus-metric-storage-0\" (UID: \"878cc257-0a03-44ea-ae70-356195dc5427\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/703a962647886df8f581706a29afc229b08eaf30613cfc7e75745da710408f03/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.707237 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/878cc257-0a03-44ea-ae70-356195dc5427-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"878cc257-0a03-44ea-ae70-356195dc5427\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.707765 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/878cc257-0a03-44ea-ae70-356195dc5427-web-config\") pod \"prometheus-metric-storage-0\" (UID: 
\"878cc257-0a03-44ea-ae70-356195dc5427\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.711156 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/878cc257-0a03-44ea-ae70-356195dc5427-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"878cc257-0a03-44ea-ae70-356195dc5427\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.713661 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgzb2\" (UniqueName: \"kubernetes.io/projected/878cc257-0a03-44ea-ae70-356195dc5427-kube-api-access-kgzb2\") pod \"prometheus-metric-storage-0\" (UID: \"878cc257-0a03-44ea-ae70-356195dc5427\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.756916 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2d40fcb6-031f-4b02-8b6e-a6b32aaabc38\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d40fcb6-031f-4b02-8b6e-a6b32aaabc38\") pod \"prometheus-metric-storage-0\" (UID: \"878cc257-0a03-44ea-ae70-356195dc5427\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:21:22 crc kubenswrapper[4672]: I0217 16:21:22.779481 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.327048 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-q9cd6"] Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.328053 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-q9cd6" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.332487 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.338763 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-26fxw" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.338962 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.339738 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-q9cd6"] Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.347497 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-8nlcn"] Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.349479 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-8nlcn" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.383300 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-8nlcn"] Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.425182 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a3267a9e-18a1-49f9-bda5-8dcb1467446a-var-run\") pod \"ovn-controller-ovs-8nlcn\" (UID: \"a3267a9e-18a1-49f9-bda5-8dcb1467446a\") " pod="openstack/ovn-controller-ovs-8nlcn" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.425242 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a3267a9e-18a1-49f9-bda5-8dcb1467446a-scripts\") pod \"ovn-controller-ovs-8nlcn\" (UID: \"a3267a9e-18a1-49f9-bda5-8dcb1467446a\") " pod="openstack/ovn-controller-ovs-8nlcn" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.425316 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/12b377dd-1f13-4af0-81d6-635d39cc528c-ovn-controller-tls-certs\") pod \"ovn-controller-q9cd6\" (UID: \"12b377dd-1f13-4af0-81d6-635d39cc528c\") " pod="openstack/ovn-controller-q9cd6" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.425337 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a3267a9e-18a1-49f9-bda5-8dcb1467446a-etc-ovs\") pod \"ovn-controller-ovs-8nlcn\" (UID: \"a3267a9e-18a1-49f9-bda5-8dcb1467446a\") " pod="openstack/ovn-controller-ovs-8nlcn" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.425375 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/12b377dd-1f13-4af0-81d6-635d39cc528c-scripts\") pod \"ovn-controller-q9cd6\" (UID: \"12b377dd-1f13-4af0-81d6-635d39cc528c\") " pod="openstack/ovn-controller-q9cd6" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.425398 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/12b377dd-1f13-4af0-81d6-635d39cc528c-var-run\") pod \"ovn-controller-q9cd6\" (UID: \"12b377dd-1f13-4af0-81d6-635d39cc528c\") " pod="openstack/ovn-controller-q9cd6" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.425415 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74kvx\" (UniqueName: \"kubernetes.io/projected/12b377dd-1f13-4af0-81d6-635d39cc528c-kube-api-access-74kvx\") pod \"ovn-controller-q9cd6\" (UID: \"12b377dd-1f13-4af0-81d6-635d39cc528c\") " pod="openstack/ovn-controller-q9cd6" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.425433 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a3267a9e-18a1-49f9-bda5-8dcb1467446a-var-lib\") pod \"ovn-controller-ovs-8nlcn\" (UID: \"a3267a9e-18a1-49f9-bda5-8dcb1467446a\") " pod="openstack/ovn-controller-ovs-8nlcn" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.425477 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvc2d\" (UniqueName: \"kubernetes.io/projected/a3267a9e-18a1-49f9-bda5-8dcb1467446a-kube-api-access-pvc2d\") pod \"ovn-controller-ovs-8nlcn\" (UID: \"a3267a9e-18a1-49f9-bda5-8dcb1467446a\") " pod="openstack/ovn-controller-ovs-8nlcn" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.425533 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/a3267a9e-18a1-49f9-bda5-8dcb1467446a-var-log\") pod \"ovn-controller-ovs-8nlcn\" (UID: \"a3267a9e-18a1-49f9-bda5-8dcb1467446a\") " pod="openstack/ovn-controller-ovs-8nlcn" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.425549 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/12b377dd-1f13-4af0-81d6-635d39cc528c-var-run-ovn\") pod \"ovn-controller-q9cd6\" (UID: \"12b377dd-1f13-4af0-81d6-635d39cc528c\") " pod="openstack/ovn-controller-q9cd6" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.425709 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/12b377dd-1f13-4af0-81d6-635d39cc528c-var-log-ovn\") pod \"ovn-controller-q9cd6\" (UID: \"12b377dd-1f13-4af0-81d6-635d39cc528c\") " pod="openstack/ovn-controller-q9cd6" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.425750 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12b377dd-1f13-4af0-81d6-635d39cc528c-combined-ca-bundle\") pod \"ovn-controller-q9cd6\" (UID: \"12b377dd-1f13-4af0-81d6-635d39cc528c\") " pod="openstack/ovn-controller-q9cd6" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.527484 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a3267a9e-18a1-49f9-bda5-8dcb1467446a-var-log\") pod \"ovn-controller-ovs-8nlcn\" (UID: \"a3267a9e-18a1-49f9-bda5-8dcb1467446a\") " pod="openstack/ovn-controller-ovs-8nlcn" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.527572 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/12b377dd-1f13-4af0-81d6-635d39cc528c-var-run-ovn\") pod 
\"ovn-controller-q9cd6\" (UID: \"12b377dd-1f13-4af0-81d6-635d39cc528c\") " pod="openstack/ovn-controller-q9cd6" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.527608 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/12b377dd-1f13-4af0-81d6-635d39cc528c-var-log-ovn\") pod \"ovn-controller-q9cd6\" (UID: \"12b377dd-1f13-4af0-81d6-635d39cc528c\") " pod="openstack/ovn-controller-q9cd6" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.528195 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12b377dd-1f13-4af0-81d6-635d39cc528c-combined-ca-bundle\") pod \"ovn-controller-q9cd6\" (UID: \"12b377dd-1f13-4af0-81d6-635d39cc528c\") " pod="openstack/ovn-controller-q9cd6" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.528150 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/12b377dd-1f13-4af0-81d6-635d39cc528c-var-log-ovn\") pod \"ovn-controller-q9cd6\" (UID: \"12b377dd-1f13-4af0-81d6-635d39cc528c\") " pod="openstack/ovn-controller-q9cd6" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.528275 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a3267a9e-18a1-49f9-bda5-8dcb1467446a-var-run\") pod \"ovn-controller-ovs-8nlcn\" (UID: \"a3267a9e-18a1-49f9-bda5-8dcb1467446a\") " pod="openstack/ovn-controller-ovs-8nlcn" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.528296 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/12b377dd-1f13-4af0-81d6-635d39cc528c-var-run-ovn\") pod \"ovn-controller-q9cd6\" (UID: \"12b377dd-1f13-4af0-81d6-635d39cc528c\") " pod="openstack/ovn-controller-q9cd6" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 
16:21:24.528401 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a3267a9e-18a1-49f9-bda5-8dcb1467446a-var-run\") pod \"ovn-controller-ovs-8nlcn\" (UID: \"a3267a9e-18a1-49f9-bda5-8dcb1467446a\") " pod="openstack/ovn-controller-ovs-8nlcn" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.528449 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a3267a9e-18a1-49f9-bda5-8dcb1467446a-scripts\") pod \"ovn-controller-ovs-8nlcn\" (UID: \"a3267a9e-18a1-49f9-bda5-8dcb1467446a\") " pod="openstack/ovn-controller-ovs-8nlcn" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.528490 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a3267a9e-18a1-49f9-bda5-8dcb1467446a-var-log\") pod \"ovn-controller-ovs-8nlcn\" (UID: \"a3267a9e-18a1-49f9-bda5-8dcb1467446a\") " pod="openstack/ovn-controller-ovs-8nlcn" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.530336 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a3267a9e-18a1-49f9-bda5-8dcb1467446a-scripts\") pod \"ovn-controller-ovs-8nlcn\" (UID: \"a3267a9e-18a1-49f9-bda5-8dcb1467446a\") " pod="openstack/ovn-controller-ovs-8nlcn" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.530396 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/12b377dd-1f13-4af0-81d6-635d39cc528c-ovn-controller-tls-certs\") pod \"ovn-controller-q9cd6\" (UID: \"12b377dd-1f13-4af0-81d6-635d39cc528c\") " pod="openstack/ovn-controller-q9cd6" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.530431 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/a3267a9e-18a1-49f9-bda5-8dcb1467446a-etc-ovs\") pod \"ovn-controller-ovs-8nlcn\" (UID: \"a3267a9e-18a1-49f9-bda5-8dcb1467446a\") " pod="openstack/ovn-controller-ovs-8nlcn" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.530452 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/12b377dd-1f13-4af0-81d6-635d39cc528c-scripts\") pod \"ovn-controller-q9cd6\" (UID: \"12b377dd-1f13-4af0-81d6-635d39cc528c\") " pod="openstack/ovn-controller-q9cd6" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.530473 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/12b377dd-1f13-4af0-81d6-635d39cc528c-var-run\") pod \"ovn-controller-q9cd6\" (UID: \"12b377dd-1f13-4af0-81d6-635d39cc528c\") " pod="openstack/ovn-controller-q9cd6" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.530491 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74kvx\" (UniqueName: \"kubernetes.io/projected/12b377dd-1f13-4af0-81d6-635d39cc528c-kube-api-access-74kvx\") pod \"ovn-controller-q9cd6\" (UID: \"12b377dd-1f13-4af0-81d6-635d39cc528c\") " pod="openstack/ovn-controller-q9cd6" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.530527 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a3267a9e-18a1-49f9-bda5-8dcb1467446a-var-lib\") pod \"ovn-controller-ovs-8nlcn\" (UID: \"a3267a9e-18a1-49f9-bda5-8dcb1467446a\") " pod="openstack/ovn-controller-ovs-8nlcn" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.530659 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a3267a9e-18a1-49f9-bda5-8dcb1467446a-etc-ovs\") pod \"ovn-controller-ovs-8nlcn\" (UID: \"a3267a9e-18a1-49f9-bda5-8dcb1467446a\") " 
pod="openstack/ovn-controller-ovs-8nlcn" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.530732 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvc2d\" (UniqueName: \"kubernetes.io/projected/a3267a9e-18a1-49f9-bda5-8dcb1467446a-kube-api-access-pvc2d\") pod \"ovn-controller-ovs-8nlcn\" (UID: \"a3267a9e-18a1-49f9-bda5-8dcb1467446a\") " pod="openstack/ovn-controller-ovs-8nlcn" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.530777 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/12b377dd-1f13-4af0-81d6-635d39cc528c-var-run\") pod \"ovn-controller-q9cd6\" (UID: \"12b377dd-1f13-4af0-81d6-635d39cc528c\") " pod="openstack/ovn-controller-q9cd6" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.530877 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a3267a9e-18a1-49f9-bda5-8dcb1467446a-var-lib\") pod \"ovn-controller-ovs-8nlcn\" (UID: \"a3267a9e-18a1-49f9-bda5-8dcb1467446a\") " pod="openstack/ovn-controller-ovs-8nlcn" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.533854 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12b377dd-1f13-4af0-81d6-635d39cc528c-combined-ca-bundle\") pod \"ovn-controller-q9cd6\" (UID: \"12b377dd-1f13-4af0-81d6-635d39cc528c\") " pod="openstack/ovn-controller-q9cd6" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.534077 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/12b377dd-1f13-4af0-81d6-635d39cc528c-scripts\") pod \"ovn-controller-q9cd6\" (UID: \"12b377dd-1f13-4af0-81d6-635d39cc528c\") " pod="openstack/ovn-controller-q9cd6" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.539709 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/12b377dd-1f13-4af0-81d6-635d39cc528c-ovn-controller-tls-certs\") pod \"ovn-controller-q9cd6\" (UID: \"12b377dd-1f13-4af0-81d6-635d39cc528c\") " pod="openstack/ovn-controller-q9cd6" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.549409 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvc2d\" (UniqueName: \"kubernetes.io/projected/a3267a9e-18a1-49f9-bda5-8dcb1467446a-kube-api-access-pvc2d\") pod \"ovn-controller-ovs-8nlcn\" (UID: \"a3267a9e-18a1-49f9-bda5-8dcb1467446a\") " pod="openstack/ovn-controller-ovs-8nlcn" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.557172 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74kvx\" (UniqueName: \"kubernetes.io/projected/12b377dd-1f13-4af0-81d6-635d39cc528c-kube-api-access-74kvx\") pod \"ovn-controller-q9cd6\" (UID: \"12b377dd-1f13-4af0-81d6-635d39cc528c\") " pod="openstack/ovn-controller-q9cd6" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.655637 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-q9cd6" Feb 17 16:21:24 crc kubenswrapper[4672]: I0217 16:21:24.674482 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-8nlcn" Feb 17 16:21:27 crc kubenswrapper[4672]: I0217 16:21:27.385920 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 17 16:21:27 crc kubenswrapper[4672]: I0217 16:21:27.387546 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 17 16:21:27 crc kubenswrapper[4672]: I0217 16:21:27.389566 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 17 16:21:27 crc kubenswrapper[4672]: I0217 16:21:27.389602 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 17 16:21:27 crc kubenswrapper[4672]: I0217 16:21:27.389621 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-fvtnm" Feb 17 16:21:27 crc kubenswrapper[4672]: I0217 16:21:27.389687 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 17 16:21:27 crc kubenswrapper[4672]: I0217 16:21:27.393322 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 17 16:21:27 crc kubenswrapper[4672]: I0217 16:21:27.414277 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 17 16:21:27 crc kubenswrapper[4672]: I0217 16:21:27.480033 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h28n\" (UniqueName: \"kubernetes.io/projected/44577c92-aff9-433c-aece-3021a8e85377-kube-api-access-6h28n\") pod \"ovsdbserver-sb-0\" (UID: \"44577c92-aff9-433c-aece-3021a8e85377\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:21:27 crc kubenswrapper[4672]: I0217 16:21:27.480118 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44577c92-aff9-433c-aece-3021a8e85377-config\") pod \"ovsdbserver-sb-0\" (UID: \"44577c92-aff9-433c-aece-3021a8e85377\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:21:27 crc kubenswrapper[4672]: I0217 16:21:27.480158 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/44577c92-aff9-433c-aece-3021a8e85377-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"44577c92-aff9-433c-aece-3021a8e85377\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:21:27 crc kubenswrapper[4672]: I0217 16:21:27.480183 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-008651c4-f8c5-49bc-a563-8734496262c4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-008651c4-f8c5-49bc-a563-8734496262c4\") pod \"ovsdbserver-sb-0\" (UID: \"44577c92-aff9-433c-aece-3021a8e85377\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:21:27 crc kubenswrapper[4672]: I0217 16:21:27.480202 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/44577c92-aff9-433c-aece-3021a8e85377-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"44577c92-aff9-433c-aece-3021a8e85377\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:21:27 crc kubenswrapper[4672]: I0217 16:21:27.480241 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44577c92-aff9-433c-aece-3021a8e85377-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"44577c92-aff9-433c-aece-3021a8e85377\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:21:27 crc kubenswrapper[4672]: I0217 16:21:27.480259 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/44577c92-aff9-433c-aece-3021a8e85377-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"44577c92-aff9-433c-aece-3021a8e85377\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:21:27 crc kubenswrapper[4672]: I0217 16:21:27.480283 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/44577c92-aff9-433c-aece-3021a8e85377-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"44577c92-aff9-433c-aece-3021a8e85377\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:21:27 crc kubenswrapper[4672]: I0217 16:21:27.566054 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:21:27 crc kubenswrapper[4672]: I0217 16:21:27.566119 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:21:27 crc kubenswrapper[4672]: I0217 16:21:27.581345 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-008651c4-f8c5-49bc-a563-8734496262c4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-008651c4-f8c5-49bc-a563-8734496262c4\") pod \"ovsdbserver-sb-0\" (UID: \"44577c92-aff9-433c-aece-3021a8e85377\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:21:27 crc kubenswrapper[4672]: I0217 16:21:27.581394 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/44577c92-aff9-433c-aece-3021a8e85377-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"44577c92-aff9-433c-aece-3021a8e85377\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:21:27 crc kubenswrapper[4672]: I0217 16:21:27.581453 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44577c92-aff9-433c-aece-3021a8e85377-combined-ca-bundle\") pod 
\"ovsdbserver-sb-0\" (UID: \"44577c92-aff9-433c-aece-3021a8e85377\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:21:27 crc kubenswrapper[4672]: I0217 16:21:27.581476 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/44577c92-aff9-433c-aece-3021a8e85377-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"44577c92-aff9-433c-aece-3021a8e85377\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:21:27 crc kubenswrapper[4672]: I0217 16:21:27.581526 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/44577c92-aff9-433c-aece-3021a8e85377-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"44577c92-aff9-433c-aece-3021a8e85377\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:21:27 crc kubenswrapper[4672]: I0217 16:21:27.581568 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h28n\" (UniqueName: \"kubernetes.io/projected/44577c92-aff9-433c-aece-3021a8e85377-kube-api-access-6h28n\") pod \"ovsdbserver-sb-0\" (UID: \"44577c92-aff9-433c-aece-3021a8e85377\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:21:27 crc kubenswrapper[4672]: I0217 16:21:27.581611 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44577c92-aff9-433c-aece-3021a8e85377-config\") pod \"ovsdbserver-sb-0\" (UID: \"44577c92-aff9-433c-aece-3021a8e85377\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:21:27 crc kubenswrapper[4672]: I0217 16:21:27.581646 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/44577c92-aff9-433c-aece-3021a8e85377-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"44577c92-aff9-433c-aece-3021a8e85377\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:21:27 crc kubenswrapper[4672]: I0217 
16:21:27.588710 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/44577c92-aff9-433c-aece-3021a8e85377-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"44577c92-aff9-433c-aece-3021a8e85377\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:21:27 crc kubenswrapper[4672]: I0217 16:21:27.589198 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/44577c92-aff9-433c-aece-3021a8e85377-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"44577c92-aff9-433c-aece-3021a8e85377\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:21:27 crc kubenswrapper[4672]: I0217 16:21:27.592997 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44577c92-aff9-433c-aece-3021a8e85377-config\") pod \"ovsdbserver-sb-0\" (UID: \"44577c92-aff9-433c-aece-3021a8e85377\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:21:27 crc kubenswrapper[4672]: I0217 16:21:27.593028 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/44577c92-aff9-433c-aece-3021a8e85377-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"44577c92-aff9-433c-aece-3021a8e85377\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:21:27 crc kubenswrapper[4672]: I0217 16:21:27.601246 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/44577c92-aff9-433c-aece-3021a8e85377-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"44577c92-aff9-433c-aece-3021a8e85377\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:21:27 crc kubenswrapper[4672]: I0217 16:21:27.606173 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44577c92-aff9-433c-aece-3021a8e85377-combined-ca-bundle\") pod 
\"ovsdbserver-sb-0\" (UID: \"44577c92-aff9-433c-aece-3021a8e85377\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:21:27 crc kubenswrapper[4672]: I0217 16:21:27.608942 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h28n\" (UniqueName: \"kubernetes.io/projected/44577c92-aff9-433c-aece-3021a8e85377-kube-api-access-6h28n\") pod \"ovsdbserver-sb-0\" (UID: \"44577c92-aff9-433c-aece-3021a8e85377\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:21:27 crc kubenswrapper[4672]: I0217 16:21:27.609038 4672 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:21:27 crc kubenswrapper[4672]: I0217 16:21:27.609076 4672 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-008651c4-f8c5-49bc-a563-8734496262c4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-008651c4-f8c5-49bc-a563-8734496262c4\") pod \"ovsdbserver-sb-0\" (UID: \"44577c92-aff9-433c-aece-3021a8e85377\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bc16a61a5ba9c39f424de72eb4fba20afd4059eb7129131f6d79ae43fa9bd1f5/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 17 16:21:27 crc kubenswrapper[4672]: I0217 16:21:27.667665 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-008651c4-f8c5-49bc-a563-8734496262c4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-008651c4-f8c5-49bc-a563-8734496262c4\") pod \"ovsdbserver-sb-0\" (UID: \"44577c92-aff9-433c-aece-3021a8e85377\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:21:27 crc kubenswrapper[4672]: I0217 16:21:27.713272 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Feb 17 16:21:28 crc kubenswrapper[4672]: I0217 16:21:28.930047 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 17 16:21:28 crc kubenswrapper[4672]: I0217 16:21:28.931270 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Feb 17 16:21:28 crc kubenswrapper[4672]: I0217 16:21:28.937295 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Feb 17 16:21:28 crc kubenswrapper[4672]: I0217 16:21:28.937639 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Feb 17 16:21:28 crc kubenswrapper[4672]: I0217 16:21:28.937806 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-npg9g"
Feb 17 16:21:28 crc kubenswrapper[4672]: I0217 16:21:28.937955 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Feb 17 16:21:28 crc kubenswrapper[4672]: I0217 16:21:28.942047 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 17 16:21:29 crc kubenswrapper[4672]: I0217 16:21:29.003315 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/68117379-9c1b-497f-8d3a-39bddb5a76dc-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"68117379-9c1b-497f-8d3a-39bddb5a76dc\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 16:21:29 crc kubenswrapper[4672]: I0217 16:21:29.003385 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68117379-9c1b-497f-8d3a-39bddb5a76dc-config\") pod \"ovsdbserver-nb-0\" (UID: \"68117379-9c1b-497f-8d3a-39bddb5a76dc\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 16:21:29 crc kubenswrapper[4672]: I0217 16:21:29.003467 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68117379-9c1b-497f-8d3a-39bddb5a76dc-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"68117379-9c1b-497f-8d3a-39bddb5a76dc\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 16:21:29 crc kubenswrapper[4672]: I0217 16:21:29.003552 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-296227d2-95e2-41c4-9623-37f771491f8b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-296227d2-95e2-41c4-9623-37f771491f8b\") pod \"ovsdbserver-nb-0\" (UID: \"68117379-9c1b-497f-8d3a-39bddb5a76dc\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 16:21:29 crc kubenswrapper[4672]: I0217 16:21:29.003578 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68117379-9c1b-497f-8d3a-39bddb5a76dc-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"68117379-9c1b-497f-8d3a-39bddb5a76dc\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 16:21:29 crc kubenswrapper[4672]: I0217 16:21:29.003641 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/68117379-9c1b-497f-8d3a-39bddb5a76dc-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"68117379-9c1b-497f-8d3a-39bddb5a76dc\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 16:21:29 crc kubenswrapper[4672]: I0217 16:21:29.003721 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mprg\" (UniqueName: \"kubernetes.io/projected/68117379-9c1b-497f-8d3a-39bddb5a76dc-kube-api-access-4mprg\") pod \"ovsdbserver-nb-0\" (UID: \"68117379-9c1b-497f-8d3a-39bddb5a76dc\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 16:21:29 crc kubenswrapper[4672]: I0217 16:21:29.003765 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/68117379-9c1b-497f-8d3a-39bddb5a76dc-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"68117379-9c1b-497f-8d3a-39bddb5a76dc\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 16:21:29 crc kubenswrapper[4672]: I0217 16:21:29.108664 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68117379-9c1b-497f-8d3a-39bddb5a76dc-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"68117379-9c1b-497f-8d3a-39bddb5a76dc\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 16:21:29 crc kubenswrapper[4672]: I0217 16:21:29.108770 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-296227d2-95e2-41c4-9623-37f771491f8b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-296227d2-95e2-41c4-9623-37f771491f8b\") pod \"ovsdbserver-nb-0\" (UID: \"68117379-9c1b-497f-8d3a-39bddb5a76dc\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 16:21:29 crc kubenswrapper[4672]: I0217 16:21:29.108798 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68117379-9c1b-497f-8d3a-39bddb5a76dc-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"68117379-9c1b-497f-8d3a-39bddb5a76dc\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 16:21:29 crc kubenswrapper[4672]: I0217 16:21:29.108822 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/68117379-9c1b-497f-8d3a-39bddb5a76dc-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"68117379-9c1b-497f-8d3a-39bddb5a76dc\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 16:21:29 crc kubenswrapper[4672]: I0217 16:21:29.108845 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mprg\" (UniqueName: \"kubernetes.io/projected/68117379-9c1b-497f-8d3a-39bddb5a76dc-kube-api-access-4mprg\") pod \"ovsdbserver-nb-0\" (UID: \"68117379-9c1b-497f-8d3a-39bddb5a76dc\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 16:21:29 crc kubenswrapper[4672]: I0217 16:21:29.108864 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/68117379-9c1b-497f-8d3a-39bddb5a76dc-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"68117379-9c1b-497f-8d3a-39bddb5a76dc\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 16:21:29 crc kubenswrapper[4672]: I0217 16:21:29.108905 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/68117379-9c1b-497f-8d3a-39bddb5a76dc-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"68117379-9c1b-497f-8d3a-39bddb5a76dc\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 16:21:29 crc kubenswrapper[4672]: I0217 16:21:29.108936 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68117379-9c1b-497f-8d3a-39bddb5a76dc-config\") pod \"ovsdbserver-nb-0\" (UID: \"68117379-9c1b-497f-8d3a-39bddb5a76dc\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 16:21:29 crc kubenswrapper[4672]: I0217 16:21:29.109768 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68117379-9c1b-497f-8d3a-39bddb5a76dc-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"68117379-9c1b-497f-8d3a-39bddb5a76dc\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 16:21:29 crc kubenswrapper[4672]: I0217 16:21:29.109864 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68117379-9c1b-497f-8d3a-39bddb5a76dc-config\") pod \"ovsdbserver-nb-0\" (UID: \"68117379-9c1b-497f-8d3a-39bddb5a76dc\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 16:21:29 crc kubenswrapper[4672]: I0217 16:21:29.110607 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/68117379-9c1b-497f-8d3a-39bddb5a76dc-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"68117379-9c1b-497f-8d3a-39bddb5a76dc\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 16:21:29 crc kubenswrapper[4672]: I0217 16:21:29.114313 4672 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 17 16:21:29 crc kubenswrapper[4672]: I0217 16:21:29.114348 4672 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-296227d2-95e2-41c4-9623-37f771491f8b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-296227d2-95e2-41c4-9623-37f771491f8b\") pod \"ovsdbserver-nb-0\" (UID: \"68117379-9c1b-497f-8d3a-39bddb5a76dc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3434fddcd9c564204f8eeda5cbfdd4cab14495d6f3423b35f5b31839799eb125/globalmount\"" pod="openstack/ovsdbserver-nb-0"
Feb 17 16:21:29 crc kubenswrapper[4672]: I0217 16:21:29.116495 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/68117379-9c1b-497f-8d3a-39bddb5a76dc-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"68117379-9c1b-497f-8d3a-39bddb5a76dc\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 16:21:29 crc kubenswrapper[4672]: I0217 16:21:29.118321 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/68117379-9c1b-497f-8d3a-39bddb5a76dc-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"68117379-9c1b-497f-8d3a-39bddb5a76dc\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 16:21:29 crc kubenswrapper[4672]: I0217 16:21:29.122079 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68117379-9c1b-497f-8d3a-39bddb5a76dc-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"68117379-9c1b-497f-8d3a-39bddb5a76dc\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 16:21:29 crc kubenswrapper[4672]: I0217 16:21:29.124658 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mprg\" (UniqueName: \"kubernetes.io/projected/68117379-9c1b-497f-8d3a-39bddb5a76dc-kube-api-access-4mprg\") pod \"ovsdbserver-nb-0\" (UID: \"68117379-9c1b-497f-8d3a-39bddb5a76dc\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 16:21:29 crc kubenswrapper[4672]: I0217 16:21:29.149616 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-296227d2-95e2-41c4-9623-37f771491f8b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-296227d2-95e2-41c4-9623-37f771491f8b\") pod \"ovsdbserver-nb-0\" (UID: \"68117379-9c1b-497f-8d3a-39bddb5a76dc\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 16:21:29 crc kubenswrapper[4672]: I0217 16:21:29.278503 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.465663 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-gwjj7"]
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.467259 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-gwjj7"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.472225 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-http"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.472484 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-dockercfg-4qp2x"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.472578 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca-bundle"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.472800 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-config"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.472920 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-grpc"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.494009 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-gwjj7"]
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.557662 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/2e52d03d-9616-4c46-b7c9-d090f4a43a93-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-gwjj7\" (UID: \"2e52d03d-9616-4c46-b7c9-d090f4a43a93\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-gwjj7"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.557725 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/2e52d03d-9616-4c46-b7c9-d090f4a43a93-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-gwjj7\" (UID: \"2e52d03d-9616-4c46-b7c9-d090f4a43a93\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-gwjj7"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.558233 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e52d03d-9616-4c46-b7c9-d090f4a43a93-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-gwjj7\" (UID: \"2e52d03d-9616-4c46-b7c9-d090f4a43a93\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-gwjj7"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.558286 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e52d03d-9616-4c46-b7c9-d090f4a43a93-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-gwjj7\" (UID: \"2e52d03d-9616-4c46-b7c9-d090f4a43a93\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-gwjj7"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.558344 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlvwn\" (UniqueName: \"kubernetes.io/projected/2e52d03d-9616-4c46-b7c9-d090f4a43a93-kube-api-access-hlvwn\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-gwjj7\" (UID: \"2e52d03d-9616-4c46-b7c9-d090f4a43a93\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-gwjj7"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.625492 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-r97r4"]
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.627274 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-r97r4"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.633051 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-http"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.633073 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-grpc"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.633162 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-loki-s3"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.639800 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-r97r4"]
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.661955 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e52d03d-9616-4c46-b7c9-d090f4a43a93-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-gwjj7\" (UID: \"2e52d03d-9616-4c46-b7c9-d090f4a43a93\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-gwjj7"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.660144 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e52d03d-9616-4c46-b7c9-d090f4a43a93-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-gwjj7\" (UID: \"2e52d03d-9616-4c46-b7c9-d090f4a43a93\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-gwjj7"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.663004 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e52d03d-9616-4c46-b7c9-d090f4a43a93-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-gwjj7\" (UID: \"2e52d03d-9616-4c46-b7c9-d090f4a43a93\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-gwjj7"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.663064 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlvwn\" (UniqueName: \"kubernetes.io/projected/2e52d03d-9616-4c46-b7c9-d090f4a43a93-kube-api-access-hlvwn\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-gwjj7\" (UID: \"2e52d03d-9616-4c46-b7c9-d090f4a43a93\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-gwjj7"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.663115 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/2e52d03d-9616-4c46-b7c9-d090f4a43a93-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-gwjj7\" (UID: \"2e52d03d-9616-4c46-b7c9-d090f4a43a93\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-gwjj7"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.663137 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/2e52d03d-9616-4c46-b7c9-d090f4a43a93-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-gwjj7\" (UID: \"2e52d03d-9616-4c46-b7c9-d090f4a43a93\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-gwjj7"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.665705 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e52d03d-9616-4c46-b7c9-d090f4a43a93-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-gwjj7\" (UID: \"2e52d03d-9616-4c46-b7c9-d090f4a43a93\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-gwjj7"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.679928 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/2e52d03d-9616-4c46-b7c9-d090f4a43a93-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-gwjj7\" (UID: \"2e52d03d-9616-4c46-b7c9-d090f4a43a93\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-gwjj7"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.681094 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/2e52d03d-9616-4c46-b7c9-d090f4a43a93-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-gwjj7\" (UID: \"2e52d03d-9616-4c46-b7c9-d090f4a43a93\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-gwjj7"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.718468 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlvwn\" (UniqueName: \"kubernetes.io/projected/2e52d03d-9616-4c46-b7c9-d090f4a43a93-kube-api-access-hlvwn\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-gwjj7\" (UID: \"2e52d03d-9616-4c46-b7c9-d090f4a43a93\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-gwjj7"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.720567 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr"]
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.722612 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.725325 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-http"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.725599 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-grpc"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.750835 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr"]
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.764885 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/7ce7f56b-68cd-42a8-bbfe-588269b90802-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-r97r4\" (UID: \"7ce7f56b-68cd-42a8-bbfe-588269b90802\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-r97r4"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.764966 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ce7f56b-68cd-42a8-bbfe-588269b90802-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-r97r4\" (UID: \"7ce7f56b-68cd-42a8-bbfe-588269b90802\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-r97r4"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.765034 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/7ce7f56b-68cd-42a8-bbfe-588269b90802-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-r97r4\" (UID: \"7ce7f56b-68cd-42a8-bbfe-588269b90802\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-r97r4"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.765064 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/7ce7f56b-68cd-42a8-bbfe-588269b90802-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-r97r4\" (UID: \"7ce7f56b-68cd-42a8-bbfe-588269b90802\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-r97r4"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.765086 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ce7f56b-68cd-42a8-bbfe-588269b90802-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-r97r4\" (UID: \"7ce7f56b-68cd-42a8-bbfe-588269b90802\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-r97r4"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.765105 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzqrz\" (UniqueName: \"kubernetes.io/projected/7ce7f56b-68cd-42a8-bbfe-588269b90802-kube-api-access-lzqrz\") pod \"cloudkitty-lokistack-querier-58c84b5844-r97r4\" (UID: \"7ce7f56b-68cd-42a8-bbfe-588269b90802\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-r97r4"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.790925 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-gwjj7"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.820780 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-zpb9n"]
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.821782 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-zpb9n"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.825889 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway-ca-bundle"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.826084 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.826277 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-client-http"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.826390 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-http"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.826840 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.832700 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.837625 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-zpb9n"]
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.847858 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-hszn6"]
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.849343 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hszn6"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.852023 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-dockercfg-bw4ll"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.863351 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-hszn6"]
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.867479 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ce7f56b-68cd-42a8-bbfe-588269b90802-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-r97r4\" (UID: \"7ce7f56b-68cd-42a8-bbfe-588269b90802\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-r97r4"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.867629 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/2dcda2dc-3e7d-45a5-b95e-cd4b5242b1cf-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr\" (UID: \"2dcda2dc-3e7d-45a5-b95e-cd4b5242b1cf\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.867660 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/7ce7f56b-68cd-42a8-bbfe-588269b90802-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-r97r4\" (UID: \"7ce7f56b-68cd-42a8-bbfe-588269b90802\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-r97r4"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.867692 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/2dcda2dc-3e7d-45a5-b95e-cd4b5242b1cf-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr\" (UID: \"2dcda2dc-3e7d-45a5-b95e-cd4b5242b1cf\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.867713 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/7ce7f56b-68cd-42a8-bbfe-588269b90802-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-r97r4\" (UID: \"7ce7f56b-68cd-42a8-bbfe-588269b90802\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-r97r4"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.868365 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ce7f56b-68cd-42a8-bbfe-588269b90802-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-r97r4\" (UID: \"7ce7f56b-68cd-42a8-bbfe-588269b90802\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-r97r4"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.868429 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzqrz\" (UniqueName: \"kubernetes.io/projected/7ce7f56b-68cd-42a8-bbfe-588269b90802-kube-api-access-lzqrz\") pod \"cloudkitty-lokistack-querier-58c84b5844-r97r4\" (UID: \"7ce7f56b-68cd-42a8-bbfe-588269b90802\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-r97r4"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.868451 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/7ce7f56b-68cd-42a8-bbfe-588269b90802-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-r97r4\" (UID: \"7ce7f56b-68cd-42a8-bbfe-588269b90802\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-r97r4"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.868502 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxm55\" (UniqueName: \"kubernetes.io/projected/2dcda2dc-3e7d-45a5-b95e-cd4b5242b1cf-kube-api-access-zxm55\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr\" (UID: \"2dcda2dc-3e7d-45a5-b95e-cd4b5242b1cf\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.868563 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dcda2dc-3e7d-45a5-b95e-cd4b5242b1cf-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr\" (UID: \"2dcda2dc-3e7d-45a5-b95e-cd4b5242b1cf\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.868585 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2dcda2dc-3e7d-45a5-b95e-cd4b5242b1cf-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr\" (UID: \"2dcda2dc-3e7d-45a5-b95e-cd4b5242b1cf\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.869731 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ce7f56b-68cd-42a8-bbfe-588269b90802-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-r97r4\" (UID: \"7ce7f56b-68cd-42a8-bbfe-588269b90802\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-r97r4"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.869716 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ce7f56b-68cd-42a8-bbfe-588269b90802-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-r97r4\" (UID: \"7ce7f56b-68cd-42a8-bbfe-588269b90802\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-r97r4"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.872485 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/7ce7f56b-68cd-42a8-bbfe-588269b90802-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-r97r4\" (UID: \"7ce7f56b-68cd-42a8-bbfe-588269b90802\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-r97r4"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.887720 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/7ce7f56b-68cd-42a8-bbfe-588269b90802-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-r97r4\" (UID: \"7ce7f56b-68cd-42a8-bbfe-588269b90802\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-r97r4"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.894169 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzqrz\" (UniqueName: \"kubernetes.io/projected/7ce7f56b-68cd-42a8-bbfe-588269b90802-kube-api-access-lzqrz\") pod \"cloudkitty-lokistack-querier-58c84b5844-r97r4\" (UID: \"7ce7f56b-68cd-42a8-bbfe-588269b90802\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-r97r4"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.895156 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/7ce7f56b-68cd-42a8-bbfe-588269b90802-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-r97r4\" (UID: \"7ce7f56b-68cd-42a8-bbfe-588269b90802\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-r97r4"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.966220 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-r97r4"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.970095 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b01fa86f-90fb-4b04-9bea-681cb6385a05-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hszn6\" (UID: \"b01fa86f-90fb-4b04-9bea-681cb6385a05\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hszn6"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.970142 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dcda2dc-3e7d-45a5-b95e-cd4b5242b1cf-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr\" (UID: \"2dcda2dc-3e7d-45a5-b95e-cd4b5242b1cf\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.970169 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2dcda2dc-3e7d-45a5-b95e-cd4b5242b1cf-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr\" (UID: \"2dcda2dc-3e7d-45a5-b95e-cd4b5242b1cf\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.970211 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/b01fa86f-90fb-4b04-9bea-681cb6385a05-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hszn6\" (UID: \"b01fa86f-90fb-4b04-9bea-681cb6385a05\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hszn6"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.970246 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/b01fa86f-90fb-4b04-9bea-681cb6385a05-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hszn6\" (UID: \"b01fa86f-90fb-4b04-9bea-681cb6385a05\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hszn6"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.970270 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41bcd30f-d987-4e6c-ab80-4bff10853442-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-zpb9n\" (UID: \"41bcd30f-d987-4e6c-ab80-4bff10853442\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-zpb9n"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.970290 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/41bcd30f-d987-4e6c-ab80-4bff10853442-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-zpb9n\" (UID: \"41bcd30f-d987-4e6c-ab80-4bff10853442\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-zpb9n"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.970312 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b01fa86f-90fb-4b04-9bea-681cb6385a05-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hszn6\" (UID: \"b01fa86f-90fb-4b04-9bea-681cb6385a05\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hszn6"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.970331 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/b01fa86f-90fb-4b04-9bea-681cb6385a05-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hszn6\" (UID: \"b01fa86f-90fb-4b04-9bea-681cb6385a05\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hszn6"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.970358 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/2dcda2dc-3e7d-45a5-b95e-cd4b5242b1cf-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr\" (UID: \"2dcda2dc-3e7d-45a5-b95e-cd4b5242b1cf\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.970379 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/b01fa86f-90fb-4b04-9bea-681cb6385a05-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hszn6\" (UID: \"b01fa86f-90fb-4b04-9bea-681cb6385a05\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hszn6"
Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.970408 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/2dcda2dc-3e7d-45a5-b95e-cd4b5242b1cf-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr\" (UID: \"2dcda2dc-3e7d-45a5-b95e-cd4b5242b1cf\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr"
Feb 17 16:21:31 crc
kubenswrapper[4672]: I0217 16:21:31.970429 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/41bcd30f-d987-4e6c-ab80-4bff10853442-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-zpb9n\" (UID: \"41bcd30f-d987-4e6c-ab80-4bff10853442\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-zpb9n" Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.970448 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/41bcd30f-d987-4e6c-ab80-4bff10853442-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-zpb9n\" (UID: \"41bcd30f-d987-4e6c-ab80-4bff10853442\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-zpb9n" Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.970467 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfmwm\" (UniqueName: \"kubernetes.io/projected/41bcd30f-d987-4e6c-ab80-4bff10853442-kube-api-access-cfmwm\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-zpb9n\" (UID: \"41bcd30f-d987-4e6c-ab80-4bff10853442\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-zpb9n" Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.970486 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqqgg\" (UniqueName: \"kubernetes.io/projected/b01fa86f-90fb-4b04-9bea-681cb6385a05-kube-api-access-mqqgg\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hszn6\" (UID: \"b01fa86f-90fb-4b04-9bea-681cb6385a05\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hszn6" Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.970522 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: 
\"kubernetes.io/configmap/b01fa86f-90fb-4b04-9bea-681cb6385a05-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hszn6\" (UID: \"b01fa86f-90fb-4b04-9bea-681cb6385a05\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hszn6" Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.970550 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxm55\" (UniqueName: \"kubernetes.io/projected/2dcda2dc-3e7d-45a5-b95e-cd4b5242b1cf-kube-api-access-zxm55\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr\" (UID: \"2dcda2dc-3e7d-45a5-b95e-cd4b5242b1cf\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr" Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.970567 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41bcd30f-d987-4e6c-ab80-4bff10853442-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-zpb9n\" (UID: \"41bcd30f-d987-4e6c-ab80-4bff10853442\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-zpb9n" Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.970586 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/41bcd30f-d987-4e6c-ab80-4bff10853442-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-zpb9n\" (UID: \"41bcd30f-d987-4e6c-ab80-4bff10853442\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-zpb9n" Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.970604 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/41bcd30f-d987-4e6c-ab80-4bff10853442-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-zpb9n\" (UID: \"41bcd30f-d987-4e6c-ab80-4bff10853442\") " 
pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-zpb9n" Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.970622 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b01fa86f-90fb-4b04-9bea-681cb6385a05-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hszn6\" (UID: \"b01fa86f-90fb-4b04-9bea-681cb6385a05\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hszn6" Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.970638 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41bcd30f-d987-4e6c-ab80-4bff10853442-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-zpb9n\" (UID: \"41bcd30f-d987-4e6c-ab80-4bff10853442\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-zpb9n" Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.971566 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dcda2dc-3e7d-45a5-b95e-cd4b5242b1cf-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr\" (UID: \"2dcda2dc-3e7d-45a5-b95e-cd4b5242b1cf\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr" Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.972272 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2dcda2dc-3e7d-45a5-b95e-cd4b5242b1cf-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr\" (UID: \"2dcda2dc-3e7d-45a5-b95e-cd4b5242b1cf\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr" Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.974384 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/2dcda2dc-3e7d-45a5-b95e-cd4b5242b1cf-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr\" (UID: \"2dcda2dc-3e7d-45a5-b95e-cd4b5242b1cf\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr" Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.975109 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/2dcda2dc-3e7d-45a5-b95e-cd4b5242b1cf-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr\" (UID: \"2dcda2dc-3e7d-45a5-b95e-cd4b5242b1cf\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr" Feb 17 16:21:31 crc kubenswrapper[4672]: I0217 16:21:31.992697 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxm55\" (UniqueName: \"kubernetes.io/projected/2dcda2dc-3e7d-45a5-b95e-cd4b5242b1cf-kube-api-access-zxm55\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr\" (UID: \"2dcda2dc-3e7d-45a5-b95e-cd4b5242b1cf\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr" Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.071614 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/b01fa86f-90fb-4b04-9bea-681cb6385a05-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hszn6\" (UID: \"b01fa86f-90fb-4b04-9bea-681cb6385a05\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hszn6" Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.071675 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/b01fa86f-90fb-4b04-9bea-681cb6385a05-cloudkitty-lokistack-gateway-client-http\") 
pod \"cloudkitty-lokistack-gateway-7f8685b49f-hszn6\" (UID: \"b01fa86f-90fb-4b04-9bea-681cb6385a05\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hszn6" Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.071724 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/41bcd30f-d987-4e6c-ab80-4bff10853442-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-zpb9n\" (UID: \"41bcd30f-d987-4e6c-ab80-4bff10853442\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-zpb9n" Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.072116 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/41bcd30f-d987-4e6c-ab80-4bff10853442-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-zpb9n\" (UID: \"41bcd30f-d987-4e6c-ab80-4bff10853442\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-zpb9n" Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.072142 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfmwm\" (UniqueName: \"kubernetes.io/projected/41bcd30f-d987-4e6c-ab80-4bff10853442-kube-api-access-cfmwm\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-zpb9n\" (UID: \"41bcd30f-d987-4e6c-ab80-4bff10853442\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-zpb9n" Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.072491 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqqgg\" (UniqueName: \"kubernetes.io/projected/b01fa86f-90fb-4b04-9bea-681cb6385a05-kube-api-access-mqqgg\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hszn6\" (UID: \"b01fa86f-90fb-4b04-9bea-681cb6385a05\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hszn6" Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.072533 4672 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/b01fa86f-90fb-4b04-9bea-681cb6385a05-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hszn6\" (UID: \"b01fa86f-90fb-4b04-9bea-681cb6385a05\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hszn6" Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.072562 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41bcd30f-d987-4e6c-ab80-4bff10853442-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-zpb9n\" (UID: \"41bcd30f-d987-4e6c-ab80-4bff10853442\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-zpb9n" Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.072578 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/41bcd30f-d987-4e6c-ab80-4bff10853442-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-zpb9n\" (UID: \"41bcd30f-d987-4e6c-ab80-4bff10853442\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-zpb9n" Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.072629 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/41bcd30f-d987-4e6c-ab80-4bff10853442-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-zpb9n\" (UID: \"41bcd30f-d987-4e6c-ab80-4bff10853442\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-zpb9n" Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.072645 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b01fa86f-90fb-4b04-9bea-681cb6385a05-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hszn6\" (UID: 
\"b01fa86f-90fb-4b04-9bea-681cb6385a05\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hszn6" Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.072679 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41bcd30f-d987-4e6c-ab80-4bff10853442-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-zpb9n\" (UID: \"41bcd30f-d987-4e6c-ab80-4bff10853442\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-zpb9n" Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.072701 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b01fa86f-90fb-4b04-9bea-681cb6385a05-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hszn6\" (UID: \"b01fa86f-90fb-4b04-9bea-681cb6385a05\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hszn6" Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.072768 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/b01fa86f-90fb-4b04-9bea-681cb6385a05-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hszn6\" (UID: \"b01fa86f-90fb-4b04-9bea-681cb6385a05\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hszn6" Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.072806 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/b01fa86f-90fb-4b04-9bea-681cb6385a05-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hszn6\" (UID: \"b01fa86f-90fb-4b04-9bea-681cb6385a05\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hszn6" Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.072832 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" 
(UniqueName: \"kubernetes.io/configmap/b01fa86f-90fb-4b04-9bea-681cb6385a05-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hszn6\" (UID: \"b01fa86f-90fb-4b04-9bea-681cb6385a05\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hszn6" Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.072859 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/41bcd30f-d987-4e6c-ab80-4bff10853442-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-zpb9n\" (UID: \"41bcd30f-d987-4e6c-ab80-4bff10853442\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-zpb9n" Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.072878 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41bcd30f-d987-4e6c-ab80-4bff10853442-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-zpb9n\" (UID: \"41bcd30f-d987-4e6c-ab80-4bff10853442\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-zpb9n" Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.072901 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b01fa86f-90fb-4b04-9bea-681cb6385a05-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hszn6\" (UID: \"b01fa86f-90fb-4b04-9bea-681cb6385a05\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hszn6" Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.073438 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/b01fa86f-90fb-4b04-9bea-681cb6385a05-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hszn6\" (UID: \"b01fa86f-90fb-4b04-9bea-681cb6385a05\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hszn6" Feb 17 16:21:32 crc 
kubenswrapper[4672]: I0217 16:21:32.073683 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41bcd30f-d987-4e6c-ab80-4bff10853442-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-zpb9n\" (UID: \"41bcd30f-d987-4e6c-ab80-4bff10853442\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-zpb9n" Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.073779 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b01fa86f-90fb-4b04-9bea-681cb6385a05-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hszn6\" (UID: \"b01fa86f-90fb-4b04-9bea-681cb6385a05\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hszn6" Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.074244 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/41bcd30f-d987-4e6c-ab80-4bff10853442-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-zpb9n\" (UID: \"41bcd30f-d987-4e6c-ab80-4bff10853442\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-zpb9n" Feb 17 16:21:32 crc kubenswrapper[4672]: E0217 16:21:32.074323 4672 secret.go:188] Couldn't get secret openstack/cloudkitty-lokistack-gateway-http: secret "cloudkitty-lokistack-gateway-http" not found Feb 17 16:21:32 crc kubenswrapper[4672]: E0217 16:21:32.074364 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41bcd30f-d987-4e6c-ab80-4bff10853442-tls-secret podName:41bcd30f-d987-4e6c-ab80-4bff10853442 nodeName:}" failed. No retries permitted until 2026-02-17 16:21:32.57435052 +0000 UTC m=+1101.328439372 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/41bcd30f-d987-4e6c-ab80-4bff10853442-tls-secret") pod "cloudkitty-lokistack-gateway-7f8685b49f-zpb9n" (UID: "41bcd30f-d987-4e6c-ab80-4bff10853442") : secret "cloudkitty-lokistack-gateway-http" not found Feb 17 16:21:32 crc kubenswrapper[4672]: E0217 16:21:32.074583 4672 secret.go:188] Couldn't get secret openstack/cloudkitty-lokistack-gateway-http: secret "cloudkitty-lokistack-gateway-http" not found Feb 17 16:21:32 crc kubenswrapper[4672]: E0217 16:21:32.074614 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b01fa86f-90fb-4b04-9bea-681cb6385a05-tls-secret podName:b01fa86f-90fb-4b04-9bea-681cb6385a05 nodeName:}" failed. No retries permitted until 2026-02-17 16:21:32.574607377 +0000 UTC m=+1101.328696109 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/b01fa86f-90fb-4b04-9bea-681cb6385a05-tls-secret") pod "cloudkitty-lokistack-gateway-7f8685b49f-hszn6" (UID: "b01fa86f-90fb-4b04-9bea-681cb6385a05") : secret "cloudkitty-lokistack-gateway-http" not found Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.074655 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/41bcd30f-d987-4e6c-ab80-4bff10853442-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-zpb9n\" (UID: \"41bcd30f-d987-4e6c-ab80-4bff10853442\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-zpb9n" Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.074966 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41bcd30f-d987-4e6c-ab80-4bff10853442-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-zpb9n\" (UID: \"41bcd30f-d987-4e6c-ab80-4bff10853442\") " 
pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-zpb9n" Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.075489 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b01fa86f-90fb-4b04-9bea-681cb6385a05-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hszn6\" (UID: \"b01fa86f-90fb-4b04-9bea-681cb6385a05\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hszn6" Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.075743 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b01fa86f-90fb-4b04-9bea-681cb6385a05-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hszn6\" (UID: \"b01fa86f-90fb-4b04-9bea-681cb6385a05\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hszn6" Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.075917 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/b01fa86f-90fb-4b04-9bea-681cb6385a05-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hszn6\" (UID: \"b01fa86f-90fb-4b04-9bea-681cb6385a05\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hszn6" Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.076063 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/41bcd30f-d987-4e6c-ab80-4bff10853442-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-zpb9n\" (UID: \"41bcd30f-d987-4e6c-ab80-4bff10853442\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-zpb9n" Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.076248 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41bcd30f-d987-4e6c-ab80-4bff10853442-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-zpb9n\" (UID: \"41bcd30f-d987-4e6c-ab80-4bff10853442\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-zpb9n" Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.076886 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/41bcd30f-d987-4e6c-ab80-4bff10853442-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-zpb9n\" (UID: \"41bcd30f-d987-4e6c-ab80-4bff10853442\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-zpb9n" Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.077108 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr" Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.077820 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/b01fa86f-90fb-4b04-9bea-681cb6385a05-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hszn6\" (UID: \"b01fa86f-90fb-4b04-9bea-681cb6385a05\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hszn6" Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.087578 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfmwm\" (UniqueName: \"kubernetes.io/projected/41bcd30f-d987-4e6c-ab80-4bff10853442-kube-api-access-cfmwm\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-zpb9n\" (UID: \"41bcd30f-d987-4e6c-ab80-4bff10853442\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-zpb9n" Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.089358 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqqgg\" (UniqueName: 
\"kubernetes.io/projected/b01fa86f-90fb-4b04-9bea-681cb6385a05-kube-api-access-mqqgg\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hszn6\" (UID: \"b01fa86f-90fb-4b04-9bea-681cb6385a05\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hszn6" Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.581746 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/b01fa86f-90fb-4b04-9bea-681cb6385a05-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hszn6\" (UID: \"b01fa86f-90fb-4b04-9bea-681cb6385a05\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hszn6" Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.581837 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/41bcd30f-d987-4e6c-ab80-4bff10853442-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-zpb9n\" (UID: \"41bcd30f-d987-4e6c-ab80-4bff10853442\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-zpb9n" Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.586291 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/b01fa86f-90fb-4b04-9bea-681cb6385a05-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hszn6\" (UID: \"b01fa86f-90fb-4b04-9bea-681cb6385a05\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hszn6" Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.586423 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/41bcd30f-d987-4e6c-ab80-4bff10853442-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-zpb9n\" (UID: \"41bcd30f-d987-4e6c-ab80-4bff10853442\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-zpb9n" Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.602276 4672 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/cloudkitty-lokistack-ingester-0"]
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.603610 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.606093 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-http"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.606623 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-grpc"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.614627 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"]
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.683854 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52x4b\" (UniqueName: \"kubernetes.io/projected/3acacae4-cbf8-43e1-a2af-3e1bf95be39b-kube-api-access-52x4b\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3acacae4-cbf8-43e1-a2af-3e1bf95be39b\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.683925 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/3acacae4-cbf8-43e1-a2af-3e1bf95be39b-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3acacae4-cbf8-43e1-a2af-3e1bf95be39b\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.684001 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3acacae4-cbf8-43e1-a2af-3e1bf95be39b-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3acacae4-cbf8-43e1-a2af-3e1bf95be39b\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.684052 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3acacae4-cbf8-43e1-a2af-3e1bf95be39b\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.684073 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3acacae4-cbf8-43e1-a2af-3e1bf95be39b\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.684289 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3acacae4-cbf8-43e1-a2af-3e1bf95be39b-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3acacae4-cbf8-43e1-a2af-3e1bf95be39b\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.684310 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/3acacae4-cbf8-43e1-a2af-3e1bf95be39b-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3acacae4-cbf8-43e1-a2af-3e1bf95be39b\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.684338 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/3acacae4-cbf8-43e1-a2af-3e1bf95be39b-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3acacae4-cbf8-43e1-a2af-3e1bf95be39b\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.708164 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"]
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.710753 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.718346 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"]
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.723990 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-http"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.724866 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-grpc"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.745296 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-zpb9n"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.764316 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hszn6"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.772229 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"]
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.773376 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.775699 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-grpc"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.776149 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-http"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.790180 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"]
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.790268 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52x4b\" (UniqueName: \"kubernetes.io/projected/3acacae4-cbf8-43e1-a2af-3e1bf95be39b-kube-api-access-52x4b\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3acacae4-cbf8-43e1-a2af-3e1bf95be39b\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.790317 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/e6cf604e-3c10-4dd3-b6a7-6e6126705e3c-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e6cf604e-3c10-4dd3-b6a7-6e6126705e3c\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.790413 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6cf604e-3c10-4dd3-b6a7-6e6126705e3c-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e6cf604e-3c10-4dd3-b6a7-6e6126705e3c\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.790456 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/e6cf604e-3c10-4dd3-b6a7-6e6126705e3c-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e6cf604e-3c10-4dd3-b6a7-6e6126705e3c\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.790496 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/3acacae4-cbf8-43e1-a2af-3e1bf95be39b-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3acacae4-cbf8-43e1-a2af-3e1bf95be39b\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.790592 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6cf604e-3c10-4dd3-b6a7-6e6126705e3c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e6cf604e-3c10-4dd3-b6a7-6e6126705e3c\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.790669 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k22r6\" (UniqueName: \"kubernetes.io/projected/e6cf604e-3c10-4dd3-b6a7-6e6126705e3c-kube-api-access-k22r6\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e6cf604e-3c10-4dd3-b6a7-6e6126705e3c\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.790725 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3acacae4-cbf8-43e1-a2af-3e1bf95be39b-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3acacae4-cbf8-43e1-a2af-3e1bf95be39b\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.790757 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e6cf604e-3c10-4dd3-b6a7-6e6126705e3c\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.790829 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3acacae4-cbf8-43e1-a2af-3e1bf95be39b\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.790863 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3acacae4-cbf8-43e1-a2af-3e1bf95be39b\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.790910 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3acacae4-cbf8-43e1-a2af-3e1bf95be39b-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3acacae4-cbf8-43e1-a2af-3e1bf95be39b\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.790940 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/3acacae4-cbf8-43e1-a2af-3e1bf95be39b-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3acacae4-cbf8-43e1-a2af-3e1bf95be39b\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.790979 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/3acacae4-cbf8-43e1-a2af-3e1bf95be39b-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3acacae4-cbf8-43e1-a2af-3e1bf95be39b\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.791039 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/e6cf604e-3c10-4dd3-b6a7-6e6126705e3c-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e6cf604e-3c10-4dd3-b6a7-6e6126705e3c\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.794743 4672 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3acacae4-cbf8-43e1-a2af-3e1bf95be39b\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.794992 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3acacae4-cbf8-43e1-a2af-3e1bf95be39b-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3acacae4-cbf8-43e1-a2af-3e1bf95be39b\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.795685 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3acacae4-cbf8-43e1-a2af-3e1bf95be39b-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3acacae4-cbf8-43e1-a2af-3e1bf95be39b\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.799847 4672 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3acacae4-cbf8-43e1-a2af-3e1bf95be39b\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.817996 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/3acacae4-cbf8-43e1-a2af-3e1bf95be39b-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3acacae4-cbf8-43e1-a2af-3e1bf95be39b\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.822034 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/3acacae4-cbf8-43e1-a2af-3e1bf95be39b-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3acacae4-cbf8-43e1-a2af-3e1bf95be39b\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.823548 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/3acacae4-cbf8-43e1-a2af-3e1bf95be39b-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3acacae4-cbf8-43e1-a2af-3e1bf95be39b\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.860352 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52x4b\" (UniqueName: \"kubernetes.io/projected/3acacae4-cbf8-43e1-a2af-3e1bf95be39b-kube-api-access-52x4b\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3acacae4-cbf8-43e1-a2af-3e1bf95be39b\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.865012 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3acacae4-cbf8-43e1-a2af-3e1bf95be39b\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.871817 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3acacae4-cbf8-43e1-a2af-3e1bf95be39b\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.895693 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73efd99e-65c1-4c17-90aa-562d35719b17-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"73efd99e-65c1-4c17-90aa-562d35719b17\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.895752 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6cf604e-3c10-4dd3-b6a7-6e6126705e3c-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e6cf604e-3c10-4dd3-b6a7-6e6126705e3c\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.895784 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/e6cf604e-3c10-4dd3-b6a7-6e6126705e3c-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e6cf604e-3c10-4dd3-b6a7-6e6126705e3c\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.895810 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6cf604e-3c10-4dd3-b6a7-6e6126705e3c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e6cf604e-3c10-4dd3-b6a7-6e6126705e3c\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.895835 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/73efd99e-65c1-4c17-90aa-562d35719b17-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"73efd99e-65c1-4c17-90aa-562d35719b17\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.895867 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k22r6\" (UniqueName: \"kubernetes.io/projected/e6cf604e-3c10-4dd3-b6a7-6e6126705e3c-kube-api-access-k22r6\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e6cf604e-3c10-4dd3-b6a7-6e6126705e3c\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.895898 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e6cf604e-3c10-4dd3-b6a7-6e6126705e3c\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.895933 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/73efd99e-65c1-4c17-90aa-562d35719b17-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"73efd99e-65c1-4c17-90aa-562d35719b17\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.895972 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"73efd99e-65c1-4c17-90aa-562d35719b17\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.895996 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/e6cf604e-3c10-4dd3-b6a7-6e6126705e3c-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e6cf604e-3c10-4dd3-b6a7-6e6126705e3c\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.896019 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/73efd99e-65c1-4c17-90aa-562d35719b17-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"73efd99e-65c1-4c17-90aa-562d35719b17\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.896065 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45m2f\" (UniqueName: \"kubernetes.io/projected/73efd99e-65c1-4c17-90aa-562d35719b17-kube-api-access-45m2f\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"73efd99e-65c1-4c17-90aa-562d35719b17\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.896081 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/e6cf604e-3c10-4dd3-b6a7-6e6126705e3c-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e6cf604e-3c10-4dd3-b6a7-6e6126705e3c\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.896119 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73efd99e-65c1-4c17-90aa-562d35719b17-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"73efd99e-65c1-4c17-90aa-562d35719b17\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.896954 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6cf604e-3c10-4dd3-b6a7-6e6126705e3c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e6cf604e-3c10-4dd3-b6a7-6e6126705e3c\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.897238 4672 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e6cf604e-3c10-4dd3-b6a7-6e6126705e3c\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/cloudkitty-lokistack-compactor-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.896752 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6cf604e-3c10-4dd3-b6a7-6e6126705e3c-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e6cf604e-3c10-4dd3-b6a7-6e6126705e3c\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.905560 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/e6cf604e-3c10-4dd3-b6a7-6e6126705e3c-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e6cf604e-3c10-4dd3-b6a7-6e6126705e3c\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.907393 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/e6cf604e-3c10-4dd3-b6a7-6e6126705e3c-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e6cf604e-3c10-4dd3-b6a7-6e6126705e3c\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.912734 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/e6cf604e-3c10-4dd3-b6a7-6e6126705e3c-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e6cf604e-3c10-4dd3-b6a7-6e6126705e3c\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.921176 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k22r6\" (UniqueName: \"kubernetes.io/projected/e6cf604e-3c10-4dd3-b6a7-6e6126705e3c-kube-api-access-k22r6\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e6cf604e-3c10-4dd3-b6a7-6e6126705e3c\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.929453 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"e6cf604e-3c10-4dd3-b6a7-6e6126705e3c\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.953759 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.998411 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/73efd99e-65c1-4c17-90aa-562d35719b17-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"73efd99e-65c1-4c17-90aa-562d35719b17\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.998819 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/73efd99e-65c1-4c17-90aa-562d35719b17-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"73efd99e-65c1-4c17-90aa-562d35719b17\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.998878 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"73efd99e-65c1-4c17-90aa-562d35719b17\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.998915 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/73efd99e-65c1-4c17-90aa-562d35719b17-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"73efd99e-65c1-4c17-90aa-562d35719b17\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.998967 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45m2f\" (UniqueName: \"kubernetes.io/projected/73efd99e-65c1-4c17-90aa-562d35719b17-kube-api-access-45m2f\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"73efd99e-65c1-4c17-90aa-562d35719b17\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.998989 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73efd99e-65c1-4c17-90aa-562d35719b17-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"73efd99e-65c1-4c17-90aa-562d35719b17\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.999015 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73efd99e-65c1-4c17-90aa-562d35719b17-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"73efd99e-65c1-4c17-90aa-562d35719b17\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.999183 4672 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"73efd99e-65c1-4c17-90aa-562d35719b17\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/cloudkitty-lokistack-index-gateway-0"
Feb 17 16:21:32 crc kubenswrapper[4672]: I0217 16:21:32.999930 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73efd99e-65c1-4c17-90aa-562d35719b17-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"73efd99e-65c1-4c17-90aa-562d35719b17\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Feb 17 16:21:33 crc kubenswrapper[4672]: I0217 16:21:33.002034 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/73efd99e-65c1-4c17-90aa-562d35719b17-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"73efd99e-65c1-4c17-90aa-562d35719b17\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Feb 17 16:21:33 crc kubenswrapper[4672]: I0217 16:21:33.004151 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73efd99e-65c1-4c17-90aa-562d35719b17-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"73efd99e-65c1-4c17-90aa-562d35719b17\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Feb 17 16:21:33 crc kubenswrapper[4672]: I0217 16:21:33.006174 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/73efd99e-65c1-4c17-90aa-562d35719b17-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"73efd99e-65c1-4c17-90aa-562d35719b17\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Feb 17 16:21:33 crc kubenswrapper[4672]: I0217 16:21:33.012483 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/73efd99e-65c1-4c17-90aa-562d35719b17-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"73efd99e-65c1-4c17-90aa-562d35719b17\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Feb 17 16:21:33 crc kubenswrapper[4672]: I0217 16:21:33.016155 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45m2f\" (UniqueName: \"kubernetes.io/projected/73efd99e-65c1-4c17-90aa-562d35719b17-kube-api-access-45m2f\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"73efd99e-65c1-4c17-90aa-562d35719b17\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Feb 17 16:21:33 crc kubenswrapper[4672]: I0217 16:21:33.031653 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"73efd99e-65c1-4c17-90aa-562d35719b17\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Feb 17 16:21:33 crc kubenswrapper[4672]: I0217 16:21:33.034017 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0"
Feb 17 16:21:33 crc kubenswrapper[4672]: I0217 16:21:33.224924 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 17 16:21:33 crc kubenswrapper[4672]: I0217 16:21:33.248469 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0"
Feb 17 16:21:37 crc kubenswrapper[4672]: I0217 16:21:37.070251 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5467b054-ae2f-4852-8d68-f9ba7cd2bdab","Type":"ContainerStarted","Data":"c68c63ef4b5f69c59ac8418dfc5f19e709593b1166aef17d859be15327370e9c"}
Feb 17 16:21:37 crc kubenswrapper[4672]: I0217 16:21:37.473253 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 17 16:21:37 crc kubenswrapper[4672]: E0217 16:21:37.479943 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Feb 17 16:21:37 crc kubenswrapper[4672]: E0217 16:21:37.480172 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9xw85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-rjb2z_openstack(6991c3ec-74a9-4191-812c-1798521a6411): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 17 16:21:37 crc kubenswrapper[4672]: E0217 16:21:37.481664 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-rjb2z" podUID="6991c3ec-74a9-4191-812c-1798521a6411"
Feb 17 16:21:37 crc kubenswrapper[4672]: I0217 16:21:37.501586 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 17 16:21:37 crc kubenswrapper[4672]: I0217 16:21:37.636857 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Feb 17 16:21:37 crc kubenswrapper[4672]: W0217 16:21:37.867957 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod878cc257_0a03_44ea_ae70_356195dc5427.slice/crio-7b5909351e992c394722dc301303b2f74942536825b506aed07a09cfe0864708 WatchSource:0}: Error finding container 7b5909351e992c394722dc301303b2f74942536825b506aed07a09cfe0864708: Status 404 returned error can't find the container with id 7b5909351e992c394722dc301303b2f74942536825b506aed07a09cfe0864708
Feb 17 16:21:37 crc kubenswrapper[4672]: I0217 16:21:37.985895 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"]
Feb 17 16:21:37 crc kubenswrapper[4672]: I0217 16:21:37.985939 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-q9cd6"]
Feb 17 16:21:37 crc kubenswrapper[4672]: I0217 16:21:37.991677 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 17 16:21:38 crc kubenswrapper[4672]: I0217 16:21:38.001648 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 17 16:21:38 crc kubenswrapper[4672]: I0217 16:21:38.008236 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"]
Feb 17 16:21:38 crc kubenswrapper[4672]: E0217 16:21:38.053006 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Feb 17 16:21:38 crc kubenswrapper[4672]: E0217 16:21:38.053318 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sn2mp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:ni
l,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-pjffq_openstack(7474d3f0-4950-4a9b-a384-d0bb442fbd84): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:21:38 crc kubenswrapper[4672]: E0217 16:21:38.058622 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-pjffq" podUID="7474d3f0-4950-4a9b-a384-d0bb442fbd84" Feb 17 16:21:38 crc kubenswrapper[4672]: E0217 16:21:38.089669 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 17 16:21:38 crc kubenswrapper[4672]: E0217 16:21:38.089808 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2b98,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-284jx_openstack(d7d4f70d-2ce7-493f-bfe4-53b8157d295c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:21:38 crc kubenswrapper[4672]: E0217 16:21:38.091547 4672 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-284jx" podUID="d7d4f70d-2ce7-493f-bfe4-53b8157d295c" Feb 17 16:21:38 crc kubenswrapper[4672]: I0217 16:21:38.098828 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:21:38 crc kubenswrapper[4672]: I0217 16:21:38.185731 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"878cc257-0a03-44ea-ae70-356195dc5427","Type":"ContainerStarted","Data":"7b5909351e992c394722dc301303b2f74942536825b506aed07a09cfe0864708"} Feb 17 16:21:38 crc kubenswrapper[4672]: I0217 16:21:38.210260 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"322bd505-c790-49c2-8ffa-0cb97cf40d7c","Type":"ContainerStarted","Data":"18cc527b006ec9f32bdbe0f35833694f84a58652ed12c32eef6cfba545a27f4e"} Feb 17 16:21:38 crc kubenswrapper[4672]: I0217 16:21:38.217477 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"abbf2ccd-ce83-432b-9e9d-7f39d2483aee","Type":"ContainerStarted","Data":"15c12f2ae67d16f29628116eb069d776b3adeb50df6ff95db371ad0a2a6459fc"} Feb 17 16:21:38 crc kubenswrapper[4672]: I0217 16:21:38.220777 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q9cd6" event={"ID":"12b377dd-1f13-4af0-81d6-635d39cc528c","Type":"ContainerStarted","Data":"2edb1ad8536da02a1a0e33f195c78470f25574723993f378724aec6b5c580633"} Feb 17 16:21:38 crc kubenswrapper[4672]: I0217 16:21:38.221633 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"e6cf604e-3c10-4dd3-b6a7-6e6126705e3c","Type":"ContainerStarted","Data":"3da2016eaf3288c6353c8ab41f80897baebffc3e366dc5e05ada102aab09b488"} Feb 17 16:21:38 crc kubenswrapper[4672]: W0217 
16:21:38.380379 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3068e639_1b58_4971_bf3e_c321ff88289b.slice/crio-4029f29e2e8251dbfdbbade279a804f76b0c90787f3793456d5bd7f15117fa4b WatchSource:0}: Error finding container 4029f29e2e8251dbfdbbade279a804f76b0c90787f3793456d5bd7f15117fa4b: Status 404 returned error can't find the container with id 4029f29e2e8251dbfdbbade279a804f76b0c90787f3793456d5bd7f15117fa4b Feb 17 16:21:38 crc kubenswrapper[4672]: I0217 16:21:38.899504 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr"] Feb 17 16:21:38 crc kubenswrapper[4672]: W0217 16:21:38.903440 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2dcda2dc_3e7d_45a5_b95e_cd4b5242b1cf.slice/crio-898f9f5aae6d696c53c0581f1b88027e142dfb2feb21663140f1b0b8c9728642 WatchSource:0}: Error finding container 898f9f5aae6d696c53c0581f1b88027e142dfb2feb21663140f1b0b8c9728642: Status 404 returned error can't find the container with id 898f9f5aae6d696c53c0581f1b88027e142dfb2feb21663140f1b0b8c9728642 Feb 17 16:21:38 crc kubenswrapper[4672]: W0217 16:21:38.906103 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3267a9e_18a1_49f9_bda5_8dcb1467446a.slice/crio-10c24815d63fd5a603d4c90aa56f004c2555e9258384342981909362507e59cb WatchSource:0}: Error finding container 10c24815d63fd5a603d4c90aa56f004c2555e9258384342981909362507e59cb: Status 404 returned error can't find the container with id 10c24815d63fd5a603d4c90aa56f004c2555e9258384342981909362507e59cb Feb 17 16:21:38 crc kubenswrapper[4672]: I0217 16:21:38.909325 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-8nlcn"] Feb 17 16:21:38 crc kubenswrapper[4672]: I0217 16:21:38.972794 4672 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-rjb2z" Feb 17 16:21:38 crc kubenswrapper[4672]: I0217 16:21:38.980010 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-pjffq" Feb 17 16:21:39 crc kubenswrapper[4672]: I0217 16:21:39.129101 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7474d3f0-4950-4a9b-a384-d0bb442fbd84-dns-svc\") pod \"7474d3f0-4950-4a9b-a384-d0bb442fbd84\" (UID: \"7474d3f0-4950-4a9b-a384-d0bb442fbd84\") " Feb 17 16:21:39 crc kubenswrapper[4672]: I0217 16:21:39.129263 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6991c3ec-74a9-4191-812c-1798521a6411-config\") pod \"6991c3ec-74a9-4191-812c-1798521a6411\" (UID: \"6991c3ec-74a9-4191-812c-1798521a6411\") " Feb 17 16:21:39 crc kubenswrapper[4672]: I0217 16:21:39.129300 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xw85\" (UniqueName: \"kubernetes.io/projected/6991c3ec-74a9-4191-812c-1798521a6411-kube-api-access-9xw85\") pod \"6991c3ec-74a9-4191-812c-1798521a6411\" (UID: \"6991c3ec-74a9-4191-812c-1798521a6411\") " Feb 17 16:21:39 crc kubenswrapper[4672]: I0217 16:21:39.129315 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sn2mp\" (UniqueName: \"kubernetes.io/projected/7474d3f0-4950-4a9b-a384-d0bb442fbd84-kube-api-access-sn2mp\") pod \"7474d3f0-4950-4a9b-a384-d0bb442fbd84\" (UID: \"7474d3f0-4950-4a9b-a384-d0bb442fbd84\") " Feb 17 16:21:39 crc kubenswrapper[4672]: I0217 16:21:39.129399 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7474d3f0-4950-4a9b-a384-d0bb442fbd84-config\") pod 
\"7474d3f0-4950-4a9b-a384-d0bb442fbd84\" (UID: \"7474d3f0-4950-4a9b-a384-d0bb442fbd84\") " Feb 17 16:21:39 crc kubenswrapper[4672]: I0217 16:21:39.129706 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7474d3f0-4950-4a9b-a384-d0bb442fbd84-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7474d3f0-4950-4a9b-a384-d0bb442fbd84" (UID: "7474d3f0-4950-4a9b-a384-d0bb442fbd84"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:21:39 crc kubenswrapper[4672]: I0217 16:21:39.130179 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7474d3f0-4950-4a9b-a384-d0bb442fbd84-config" (OuterVolumeSpecName: "config") pod "7474d3f0-4950-4a9b-a384-d0bb442fbd84" (UID: "7474d3f0-4950-4a9b-a384-d0bb442fbd84"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:21:39 crc kubenswrapper[4672]: I0217 16:21:39.130670 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6991c3ec-74a9-4191-812c-1798521a6411-config" (OuterVolumeSpecName: "config") pod "6991c3ec-74a9-4191-812c-1798521a6411" (UID: "6991c3ec-74a9-4191-812c-1798521a6411"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:21:39 crc kubenswrapper[4672]: I0217 16:21:39.134611 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7474d3f0-4950-4a9b-a384-d0bb442fbd84-kube-api-access-sn2mp" (OuterVolumeSpecName: "kube-api-access-sn2mp") pod "7474d3f0-4950-4a9b-a384-d0bb442fbd84" (UID: "7474d3f0-4950-4a9b-a384-d0bb442fbd84"). InnerVolumeSpecName "kube-api-access-sn2mp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:21:39 crc kubenswrapper[4672]: I0217 16:21:39.135094 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6991c3ec-74a9-4191-812c-1798521a6411-kube-api-access-9xw85" (OuterVolumeSpecName: "kube-api-access-9xw85") pod "6991c3ec-74a9-4191-812c-1798521a6411" (UID: "6991c3ec-74a9-4191-812c-1798521a6411"). InnerVolumeSpecName "kube-api-access-9xw85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:21:39 crc kubenswrapper[4672]: I0217 16:21:39.225875 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-r97r4"] Feb 17 16:21:39 crc kubenswrapper[4672]: I0217 16:21:39.231480 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7474d3f0-4950-4a9b-a384-d0bb442fbd84-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:39 crc kubenswrapper[4672]: I0217 16:21:39.231526 4672 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7474d3f0-4950-4a9b-a384-d0bb442fbd84-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:39 crc kubenswrapper[4672]: I0217 16:21:39.231541 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6991c3ec-74a9-4191-812c-1798521a6411-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:39 crc kubenswrapper[4672]: I0217 16:21:39.231552 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xw85\" (UniqueName: \"kubernetes.io/projected/6991c3ec-74a9-4191-812c-1798521a6411-kube-api-access-9xw85\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:39 crc kubenswrapper[4672]: I0217 16:21:39.231567 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sn2mp\" (UniqueName: \"kubernetes.io/projected/7474d3f0-4950-4a9b-a384-d0bb442fbd84-kube-api-access-sn2mp\") on node 
\"crc\" DevicePath \"\"" Feb 17 16:21:39 crc kubenswrapper[4672]: I0217 16:21:39.237993 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-zpb9n"] Feb 17 16:21:39 crc kubenswrapper[4672]: W0217 16:21:39.241963 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3acacae4_cbf8_43e1_a2af_3e1bf95be39b.slice/crio-cba7b3e53e8a2f243b62f4bbb48c58a6da4da661d5c4ab1673e80f030a3bf060 WatchSource:0}: Error finding container cba7b3e53e8a2f243b62f4bbb48c58a6da4da661d5c4ab1673e80f030a3bf060: Status 404 returned error can't find the container with id cba7b3e53e8a2f243b62f4bbb48c58a6da4da661d5c4ab1673e80f030a3bf060 Feb 17 16:21:39 crc kubenswrapper[4672]: I0217 16:21:39.245570 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-hszn6"] Feb 17 16:21:39 crc kubenswrapper[4672]: I0217 16:21:39.246646 4672 generic.go:334] "Generic (PLEG): container finished" podID="7d64c91f-b138-4fa6-bb58-9d31c4c65861" containerID="5ce9f01ccdcd65f6574eb892925ca8d7dd7e846849f97831c1a5b8f15f6e0c51" exitCode=0 Feb 17 16:21:39 crc kubenswrapper[4672]: I0217 16:21:39.246704 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-htwhx" event={"ID":"7d64c91f-b138-4fa6-bb58-9d31c4c65861","Type":"ContainerDied","Data":"5ce9f01ccdcd65f6574eb892925ca8d7dd7e846849f97831c1a5b8f15f6e0c51"} Feb 17 16:21:39 crc kubenswrapper[4672]: I0217 16:21:39.251620 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Feb 17 16:21:39 crc kubenswrapper[4672]: W0217 16:21:39.253623 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb01fa86f_90fb_4b04_9bea_681cb6385a05.slice/crio-0c08f35a04125d1af1b66c8ee70709dcc4ed498d4695f529df43a654281e7b77 WatchSource:0}: Error 
finding container 0c08f35a04125d1af1b66c8ee70709dcc4ed498d4695f529df43a654281e7b77: Status 404 returned error can't find the container with id 0c08f35a04125d1af1b66c8ee70709dcc4ed498d4695f529df43a654281e7b77 Feb 17 16:21:39 crc kubenswrapper[4672]: I0217 16:21:39.253782 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-pjffq" event={"ID":"7474d3f0-4950-4a9b-a384-d0bb442fbd84","Type":"ContainerDied","Data":"6a715c442d71a6db9277b64430eb4ad001abfb33d2aa3dca270c12f80b19d70c"} Feb 17 16:21:39 crc kubenswrapper[4672]: I0217 16:21:39.253863 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-pjffq" Feb 17 16:21:39 crc kubenswrapper[4672]: I0217 16:21:39.257288 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Feb 17 16:21:39 crc kubenswrapper[4672]: I0217 16:21:39.263612 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr" event={"ID":"2dcda2dc-3e7d-45a5-b95e-cd4b5242b1cf","Type":"ContainerStarted","Data":"898f9f5aae6d696c53c0581f1b88027e142dfb2feb21663140f1b0b8c9728642"} Feb 17 16:21:39 crc kubenswrapper[4672]: W0217 16:21:39.264501 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e52d03d_9616_4c46_b7c9_d090f4a43a93.slice/crio-710d642139bcaf79940ef2be4479007763671bc3d3599de342ceb94e39b4b58a WatchSource:0}: Error finding container 710d642139bcaf79940ef2be4479007763671bc3d3599de342ceb94e39b4b58a: Status 404 returned error can't find the container with id 710d642139bcaf79940ef2be4479007763671bc3d3599de342ceb94e39b4b58a Feb 17 16:21:39 crc kubenswrapper[4672]: I0217 16:21:39.266500 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-gwjj7"] Feb 17 16:21:39 crc kubenswrapper[4672]: I0217 
16:21:39.276244 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3068e639-1b58-4971-bf3e-c321ff88289b","Type":"ContainerStarted","Data":"4029f29e2e8251dbfdbbade279a804f76b0c90787f3793456d5bd7f15117fa4b"} Feb 17 16:21:39 crc kubenswrapper[4672]: E0217 16:21:39.276896 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-index-gateway,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981,Command:[],Args:[-target=index-gateway -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml -config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:AWS_ACCESS_KEY_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_id,Optional:nil,},},},EnvVar{Name:AWS_ACCESS_KEY_SECRET,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_secret,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:storage,ReadOnly:false,MountPath:/tmp/loki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-index-gateway-http,ReadOnly:false,MountPath:
/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-loki-s3,ReadOnly:false,MountPath:/etc/storage/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-index-gateway-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-45m2f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
cloudkitty-lokistack-index-gateway-0_openstack(73efd99e-65c1-4c17-90aa-562d35719b17): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 16:21:39 crc kubenswrapper[4672]: E0217 16:21:39.278635 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-index-gateway\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" podUID="73efd99e-65c1-4c17-90aa-562d35719b17" Feb 17 16:21:39 crc kubenswrapper[4672]: I0217 16:21:39.282459 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-r97r4" event={"ID":"7ce7f56b-68cd-42a8-bbfe-588269b90802","Type":"ContainerStarted","Data":"95f5aaf7aac0691364e7107a9d7d95d7c2884ff0048fe5782135ed565bf2c942"} Feb 17 16:21:39 crc kubenswrapper[4672]: I0217 16:21:39.285064 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-8nlcn" event={"ID":"a3267a9e-18a1-49f9-bda5-8dcb1467446a","Type":"ContainerStarted","Data":"10c24815d63fd5a603d4c90aa56f004c2555e9258384342981909362507e59cb"} Feb 17 16:21:39 crc kubenswrapper[4672]: I0217 16:21:39.290564 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-rjb2z" event={"ID":"6991c3ec-74a9-4191-812c-1798521a6411","Type":"ContainerDied","Data":"e0ff70b46e7092b82b58ee2b59f980ddcf4e7788b46964794b0009e806e39998"} Feb 17 16:21:39 crc kubenswrapper[4672]: I0217 16:21:39.290627 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-rjb2z" Feb 17 16:21:39 crc kubenswrapper[4672]: I0217 16:21:39.292676 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"91c936b2-eda8-4075-bcec-4c56d31cda1d","Type":"ContainerStarted","Data":"204942a1038c253c0266b5da4ce883119e0b1d8e4ce77aaa3b3a7cf21824182a"} Feb 17 16:21:39 crc kubenswrapper[4672]: I0217 16:21:39.296025 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"164bb24e-646b-4404-92f5-912254ac1421","Type":"ContainerStarted","Data":"55e4e01049bc749a9d898d15275d217f183381850130010b78bda8313c9c24d7"} Feb 17 16:21:39 crc kubenswrapper[4672]: I0217 16:21:39.301055 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-zpb9n" event={"ID":"41bcd30f-d987-4e6c-ab80-4bff10853442","Type":"ContainerStarted","Data":"6edfb067170ddee1e123121731a034fb74a3cf126323ade32164c941a36c090f"} Feb 17 16:21:39 crc kubenswrapper[4672]: I0217 16:21:39.305399 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0494473e-5e65-47bf-b3a3-6d8c7b27243f","Type":"ContainerStarted","Data":"160a5dcd6c50e13d075b639333cde720e6d5debf6bc30c633f6cb619b7a51a7f"} Feb 17 16:21:39 crc kubenswrapper[4672]: I0217 16:21:39.327745 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pjffq"] Feb 17 16:21:39 crc kubenswrapper[4672]: I0217 16:21:39.369280 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pjffq"] Feb 17 16:21:39 crc kubenswrapper[4672]: I0217 16:21:39.391815 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rjb2z"] Feb 17 16:21:39 crc kubenswrapper[4672]: I0217 16:21:39.391861 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rjb2z"] Feb 17 
16:21:39 crc kubenswrapper[4672]: I0217 16:21:39.930894 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 17 16:21:39 crc kubenswrapper[4672]: W0217 16:21:39.951318 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68117379_9c1b_497f_8d3a_39bddb5a76dc.slice/crio-2736192db0f2c0e0b21bd9280e17bd87a7e68be1f739facc16b1e8f3b841a75e WatchSource:0}: Error finding container 2736192db0f2c0e0b21bd9280e17bd87a7e68be1f739facc16b1e8f3b841a75e: Status 404 returned error can't find the container with id 2736192db0f2c0e0b21bd9280e17bd87a7e68be1f739facc16b1e8f3b841a75e Feb 17 16:21:39 crc kubenswrapper[4672]: I0217 16:21:39.959142 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6991c3ec-74a9-4191-812c-1798521a6411" path="/var/lib/kubelet/pods/6991c3ec-74a9-4191-812c-1798521a6411/volumes" Feb 17 16:21:39 crc kubenswrapper[4672]: I0217 16:21:39.959491 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7474d3f0-4950-4a9b-a384-d0bb442fbd84" path="/var/lib/kubelet/pods/7474d3f0-4950-4a9b-a384-d0bb442fbd84/volumes" Feb 17 16:21:40 crc kubenswrapper[4672]: I0217 16:21:40.313236 4672 generic.go:334] "Generic (PLEG): container finished" podID="d7d4f70d-2ce7-493f-bfe4-53b8157d295c" containerID="e6c35cf61f79539a5088ee1b53ccb7a89817689a5536f14f87b481b7c40ea1ea" exitCode=0 Feb 17 16:21:40 crc kubenswrapper[4672]: I0217 16:21:40.313304 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-284jx" event={"ID":"d7d4f70d-2ce7-493f-bfe4-53b8157d295c","Type":"ContainerDied","Data":"e6c35cf61f79539a5088ee1b53ccb7a89817689a5536f14f87b481b7c40ea1ea"} Feb 17 16:21:40 crc kubenswrapper[4672]: I0217 16:21:40.316639 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" 
event={"ID":"73efd99e-65c1-4c17-90aa-562d35719b17","Type":"ContainerStarted","Data":"8147ff0d4af43294dea1881a1e52c25a8b0f101fcd9172601b7bac0767bf765a"} Feb 17 16:21:40 crc kubenswrapper[4672]: E0217 16:21:40.318832 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-index-gateway\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981\\\"\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" podUID="73efd99e-65c1-4c17-90aa-562d35719b17" Feb 17 16:21:40 crc kubenswrapper[4672]: I0217 16:21:40.320610 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"68117379-9c1b-497f-8d3a-39bddb5a76dc","Type":"ContainerStarted","Data":"2736192db0f2c0e0b21bd9280e17bd87a7e68be1f739facc16b1e8f3b841a75e"} Feb 17 16:21:40 crc kubenswrapper[4672]: I0217 16:21:40.323258 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-gwjj7" event={"ID":"2e52d03d-9616-4c46-b7c9-d090f4a43a93","Type":"ContainerStarted","Data":"710d642139bcaf79940ef2be4479007763671bc3d3599de342ceb94e39b4b58a"} Feb 17 16:21:40 crc kubenswrapper[4672]: I0217 16:21:40.325745 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-htwhx" event={"ID":"7d64c91f-b138-4fa6-bb58-9d31c4c65861","Type":"ContainerStarted","Data":"1da524262e4aee7397c8c666a12f39b54646e8676cd63979429e4b5ab6c666c3"} Feb 17 16:21:40 crc kubenswrapper[4672]: I0217 16:21:40.325882 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-htwhx" Feb 17 16:21:40 crc kubenswrapper[4672]: I0217 16:21:40.328341 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" 
event={"ID":"3acacae4-cbf8-43e1-a2af-3e1bf95be39b","Type":"ContainerStarted","Data":"cba7b3e53e8a2f243b62f4bbb48c58a6da4da661d5c4ab1673e80f030a3bf060"} Feb 17 16:21:40 crc kubenswrapper[4672]: I0217 16:21:40.330621 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hszn6" event={"ID":"b01fa86f-90fb-4b04-9bea-681cb6385a05","Type":"ContainerStarted","Data":"0c08f35a04125d1af1b66c8ee70709dcc4ed498d4695f529df43a654281e7b77"} Feb 17 16:21:40 crc kubenswrapper[4672]: I0217 16:21:40.371391 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-htwhx" podStartSLOduration=3.814305405 podStartE2EDuration="26.371371426s" podCreationTimestamp="2026-02-17 16:21:14 +0000 UTC" firstStartedPulling="2026-02-17 16:21:15.403310372 +0000 UTC m=+1084.157399104" lastFinishedPulling="2026-02-17 16:21:37.960376393 +0000 UTC m=+1106.714465125" observedRunningTime="2026-02-17 16:21:40.366098606 +0000 UTC m=+1109.120187338" watchObservedRunningTime="2026-02-17 16:21:40.371371426 +0000 UTC m=+1109.125460158" Feb 17 16:21:40 crc kubenswrapper[4672]: I0217 16:21:40.500054 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 17 16:21:40 crc kubenswrapper[4672]: W0217 16:21:40.502093 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44577c92_aff9_433c_aece_3021a8e85377.slice/crio-68dc4a7e6cf7eb89dcc12cd8ee992e57df970bd3fd15f066d2ac129ad4682d61 WatchSource:0}: Error finding container 68dc4a7e6cf7eb89dcc12cd8ee992e57df970bd3fd15f066d2ac129ad4682d61: Status 404 returned error can't find the container with id 68dc4a7e6cf7eb89dcc12cd8ee992e57df970bd3fd15f066d2ac129ad4682d61 Feb 17 16:21:41 crc kubenswrapper[4672]: I0217 16:21:41.340673 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"44577c92-aff9-433c-aece-3021a8e85377","Type":"ContainerStarted","Data":"68dc4a7e6cf7eb89dcc12cd8ee992e57df970bd3fd15f066d2ac129ad4682d61"} Feb 17 16:21:41 crc kubenswrapper[4672]: I0217 16:21:41.344161 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-284jx" event={"ID":"d7d4f70d-2ce7-493f-bfe4-53b8157d295c","Type":"ContainerStarted","Data":"1af70264e1afe9258ffbde03a7d1916d77178dfa492a7e562f20a8cc350d47f1"} Feb 17 16:21:41 crc kubenswrapper[4672]: E0217 16:21:41.345199 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-index-gateway\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981\\\"\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" podUID="73efd99e-65c1-4c17-90aa-562d35719b17" Feb 17 16:21:41 crc kubenswrapper[4672]: I0217 16:21:41.385872 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-284jx" podStartSLOduration=-9223372009.468918 podStartE2EDuration="27.385857359s" podCreationTimestamp="2026-02-17 16:21:14 +0000 UTC" firstStartedPulling="2026-02-17 16:21:15.098192429 +0000 UTC m=+1083.852281151" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:21:41.383502207 +0000 UTC m=+1110.137590939" watchObservedRunningTime="2026-02-17 16:21:41.385857359 +0000 UTC m=+1110.139946091" Feb 17 16:21:44 crc kubenswrapper[4672]: I0217 16:21:44.530615 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-284jx" Feb 17 16:21:44 crc kubenswrapper[4672]: I0217 16:21:44.896377 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-htwhx" Feb 17 16:21:44 crc kubenswrapper[4672]: I0217 16:21:44.944405 4672 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-284jx"] Feb 17 16:21:45 crc kubenswrapper[4672]: I0217 16:21:45.378020 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-284jx" podUID="d7d4f70d-2ce7-493f-bfe4-53b8157d295c" containerName="dnsmasq-dns" containerID="cri-o://1af70264e1afe9258ffbde03a7d1916d77178dfa492a7e562f20a8cc350d47f1" gracePeriod=10 Feb 17 16:21:45 crc kubenswrapper[4672]: I0217 16:21:45.379217 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-284jx" Feb 17 16:21:46 crc kubenswrapper[4672]: I0217 16:21:46.387617 4672 generic.go:334] "Generic (PLEG): container finished" podID="d7d4f70d-2ce7-493f-bfe4-53b8157d295c" containerID="1af70264e1afe9258ffbde03a7d1916d77178dfa492a7e562f20a8cc350d47f1" exitCode=0 Feb 17 16:21:46 crc kubenswrapper[4672]: I0217 16:21:46.387688 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-284jx" event={"ID":"d7d4f70d-2ce7-493f-bfe4-53b8157d295c","Type":"ContainerDied","Data":"1af70264e1afe9258ffbde03a7d1916d77178dfa492a7e562f20a8cc350d47f1"} Feb 17 16:21:54 crc kubenswrapper[4672]: I0217 16:21:54.530920 4672 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-666b6646f7-284jx" podUID="d7d4f70d-2ce7-493f-bfe4-53b8157d295c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.105:5353: i/o timeout" Feb 17 16:21:57 crc kubenswrapper[4672]: E0217 16:21:57.489833 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 17 16:21:57 crc kubenswrapper[4672]: E0217 16:21:57.490396 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sz2jl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerRes
izePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(322bd505-c790-49c2-8ffa-0cb97cf40d7c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:21:57 crc kubenswrapper[4672]: E0217 16:21:57.491717 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="322bd505-c790-49c2-8ffa-0cb97cf40d7c" Feb 17 16:21:57 crc kubenswrapper[4672]: E0217 16:21:57.505927 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" Feb 17 16:21:57 crc kubenswrapper[4672]: E0217 16:21:57.506197 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key 
--ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf6h68h5c8h5c9h5c5h5cfh654h77h579hcdh59ch8hc6h7fh55fh599h599hcdh5f7h5b7h646hfdh7fh5f9h56bh686h75h6dh64bh5c4h59dhdq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-74kvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil
,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-q9cd6_openstack(12b377dd-1f13-4af0-81d6-635d39cc528c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:21:57 crc kubenswrapper[4672]: E0217 16:21:57.508108 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-q9cd6" podUID="12b377dd-1f13-4af0-81d6-635d39cc528c" Feb 17 16:21:57 crc kubenswrapper[4672]: E0217 16:21:57.525676 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="322bd505-c790-49c2-8ffa-0cb97cf40d7c" Feb 17 16:21:57 crc kubenswrapper[4672]: E0217 16:21:57.549247 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a" Feb 17 16:21:57 crc kubenswrapper[4672]: E0217 16:21:57.549863 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init-config-reloader,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a,Command:[/bin/prometheus-config-reloader],Args:[--watch-interval=0 --listen-address=:8081 --config-file=/etc/alertmanager/config/alertmanager.yaml.gz --config-envsubst-file=/etc/alertmanager/config_out/alertmanager.env.yaml --watched-dir=/etc/alertmanager/config],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:reloader-init,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:SHARD,Value:-1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/alertmanager/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-out,ReadOnly:false,MountPath:/etc/alertmanager/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:web-config,ReadOnly:true,MountPath:/etc/alertmanager/web_config/web-config.yaml,SubPath:web-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5xbjw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod alertmanager-metric-storage-0_openstack(91c936b2-eda8-4075-bcec-4c56d31cda1d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 16:21:57 crc kubenswrapper[4672]: E0217 16:21:57.553069 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/alertmanager-metric-storage-0" 
podUID="91c936b2-eda8-4075-bcec-4c56d31cda1d" Feb 17 16:21:57 crc kubenswrapper[4672]: E0217 16:21:57.564965 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 17 16:21:57 crc kubenswrapper[4672]: E0217 16:21:57.565208 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-glk4w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityConte
xt:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(164bb24e-646b-4404-92f5-912254ac1421): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:21:57 crc kubenswrapper[4672]: I0217 16:21:57.565642 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:21:57 crc kubenswrapper[4672]: I0217 16:21:57.565728 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:21:57 crc kubenswrapper[4672]: E0217 16:21:57.567672 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="164bb24e-646b-4404-92f5-912254ac1421" Feb 17 16:21:57 crc kubenswrapper[4672]: E0217 16:21:57.649655 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: 
context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a" Feb 17 16:21:57 crc kubenswrapper[4672]: E0217 16:21:57.649878 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init-config-reloader,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a,Command:[/bin/prometheus-config-reloader],Args:[--watch-interval=0 --listen-address=:8081 --config-file=/etc/prometheus/config/prometheus.yaml.gz --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0 --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1 --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:reloader-init,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:SHARD,Value:0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/prometheus/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-out,ReadOnly:false,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-0,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-1,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-2,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kgzb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(878cc257-0a03-44ea-ae70-356195dc5427): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context 
canceled" logger="UnhandledError" Feb 17 16:21:57 crc kubenswrapper[4672]: E0217 16:21:57.651159 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/prometheus-metric-storage-0" podUID="878cc257-0a03-44ea-ae70-356195dc5427" Feb 17 16:21:57 crc kubenswrapper[4672]: I0217 16:21:57.738972 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-284jx" Feb 17 16:21:57 crc kubenswrapper[4672]: I0217 16:21:57.902902 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7d4f70d-2ce7-493f-bfe4-53b8157d295c-dns-svc\") pod \"d7d4f70d-2ce7-493f-bfe4-53b8157d295c\" (UID: \"d7d4f70d-2ce7-493f-bfe4-53b8157d295c\") " Feb 17 16:21:57 crc kubenswrapper[4672]: I0217 16:21:57.903010 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2b98\" (UniqueName: \"kubernetes.io/projected/d7d4f70d-2ce7-493f-bfe4-53b8157d295c-kube-api-access-s2b98\") pod \"d7d4f70d-2ce7-493f-bfe4-53b8157d295c\" (UID: \"d7d4f70d-2ce7-493f-bfe4-53b8157d295c\") " Feb 17 16:21:57 crc kubenswrapper[4672]: I0217 16:21:57.903174 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7d4f70d-2ce7-493f-bfe4-53b8157d295c-config\") pod \"d7d4f70d-2ce7-493f-bfe4-53b8157d295c\" (UID: \"d7d4f70d-2ce7-493f-bfe4-53b8157d295c\") " Feb 17 16:21:57 crc kubenswrapper[4672]: I0217 16:21:57.910978 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7d4f70d-2ce7-493f-bfe4-53b8157d295c-kube-api-access-s2b98" (OuterVolumeSpecName: "kube-api-access-s2b98") pod "d7d4f70d-2ce7-493f-bfe4-53b8157d295c" (UID: 
"d7d4f70d-2ce7-493f-bfe4-53b8157d295c"). InnerVolumeSpecName "kube-api-access-s2b98". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:21:57 crc kubenswrapper[4672]: I0217 16:21:57.942865 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7d4f70d-2ce7-493f-bfe4-53b8157d295c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d7d4f70d-2ce7-493f-bfe4-53b8157d295c" (UID: "d7d4f70d-2ce7-493f-bfe4-53b8157d295c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:21:57 crc kubenswrapper[4672]: I0217 16:21:57.957997 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7d4f70d-2ce7-493f-bfe4-53b8157d295c-config" (OuterVolumeSpecName: "config") pod "d7d4f70d-2ce7-493f-bfe4-53b8157d295c" (UID: "d7d4f70d-2ce7-493f-bfe4-53b8157d295c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:21:58 crc kubenswrapper[4672]: I0217 16:21:58.007659 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7d4f70d-2ce7-493f-bfe4-53b8157d295c-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:58 crc kubenswrapper[4672]: I0217 16:21:58.007693 4672 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7d4f70d-2ce7-493f-bfe4-53b8157d295c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:58 crc kubenswrapper[4672]: I0217 16:21:58.007705 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2b98\" (UniqueName: \"kubernetes.io/projected/d7d4f70d-2ce7-493f-bfe4-53b8157d295c-kube-api-access-s2b98\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:58 crc kubenswrapper[4672]: I0217 16:21:58.523133 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-284jx" 
event={"ID":"d7d4f70d-2ce7-493f-bfe4-53b8157d295c","Type":"ContainerDied","Data":"b0f8b8d0706ba0a5a03b4e0c0450f10db634483f4c442ec99b0c2c3131c52fa5"} Feb 17 16:21:58 crc kubenswrapper[4672]: I0217 16:21:58.523192 4672 scope.go:117] "RemoveContainer" containerID="1af70264e1afe9258ffbde03a7d1916d77178dfa492a7e562f20a8cc350d47f1" Feb 17 16:21:58 crc kubenswrapper[4672]: I0217 16:21:58.523221 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-284jx" Feb 17 16:21:58 crc kubenswrapper[4672]: E0217 16:21:58.525344 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-q9cd6" podUID="12b377dd-1f13-4af0-81d6-635d39cc528c" Feb 17 16:21:58 crc kubenswrapper[4672]: E0217 16:21:58.525808 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="164bb24e-646b-4404-92f5-912254ac1421" Feb 17 16:21:58 crc kubenswrapper[4672]: E0217 16:21:58.527730 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="878cc257-0a03-44ea-ae70-356195dc5427" Feb 17 16:21:58 crc kubenswrapper[4672]: E0217 16:21:58.535768 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" 
with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="91c936b2-eda8-4075-bcec-4c56d31cda1d" Feb 17 16:21:58 crc kubenswrapper[4672]: I0217 16:21:58.569398 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-284jx"] Feb 17 16:21:58 crc kubenswrapper[4672]: I0217 16:21:58.585792 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-284jx"] Feb 17 16:21:58 crc kubenswrapper[4672]: E0217 16:21:58.825890 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981" Feb 17 16:21:58 crc kubenswrapper[4672]: E0217 16:21:58.826160 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-querier,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981,Command:[],Args:[-target=querier -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml 
-config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:gossip-ring,HostPort:0,ContainerPort:7946,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:AWS_ACCESS_KEY_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_id,Optional:nil,},},},EnvVar{Name:AWS_ACCESS_KEY_SECRET,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_secret,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-querier-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-loki-s3,ReadOnly:false,MountPath:/etc/storage/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-querier-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lzqrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},Liven
essProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-querier-58c84b5844-r97r4_openstack(7ce7f56b-68cd-42a8-bbfe-588269b90802): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 16:21:58 crc kubenswrapper[4672]: E0217 16:21:58.827539 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-querier\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-r97r4" podUID="7ce7f56b-68cd-42a8-bbfe-588269b90802" Feb 17 16:21:58 crc kubenswrapper[4672]: E0217 16:21:58.834679 4672 log.go:32] "PullImage from 
image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981" Feb 17 16:21:58 crc kubenswrapper[4672]: E0217 16:21:58.834893 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-compactor,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981,Command:[],Args:[-target=compactor -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml -config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:AWS_ACCESS_KEY_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_id,Optional:nil,},},},EnvVar{Name:AWS_ACCESS_KEY_SECRET,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_secret,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:storage,ReadOnly:false,MountPath:/tmp/loki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-compactor-http,ReadOnly:false,MountPa
th:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-loki-s3,ReadOnly:false,MountPath:/etc/storage/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-compactor-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k22r6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
cloudkitty-lokistack-compactor-0_openstack(e6cf604e-3c10-4dd3-b6a7-6e6126705e3c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 16:21:58 crc kubenswrapper[4672]: E0217 16:21:58.836653 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-compactor\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/cloudkitty-lokistack-compactor-0" podUID="e6cf604e-3c10-4dd3-b6a7-6e6126705e3c" Feb 17 16:21:58 crc kubenswrapper[4672]: E0217 16:21:58.851602 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981" Feb 17 16:21:58 crc kubenswrapper[4672]: E0217 16:21:58.851893 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-ingester,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981,Command:[],Args:[-target=ingester -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml 
-config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:gossip-ring,HostPort:0,ContainerPort:7946,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:AWS_ACCESS_KEY_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_id,Optional:nil,},},},EnvVar{Name:AWS_ACCESS_KEY_SECRET,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_secret,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:storage,ReadOnly:false,MountPath:/tmp/loki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:wal,ReadOnly:false,MountPath:/tmp/wal,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ingester-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-loki-s3,ReadOnly:false,MountPath:/etc/storage/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ingester-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca
,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-52x4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-ingester-0_openstack(3acacae4-cbf8-43e1-a2af-3e1bf95be39b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 16:21:58 crc kubenswrapper[4672]: E0217 16:21:58.854056 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-ingester\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: 
copying config: context canceled\"" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="3acacae4-cbf8-43e1-a2af-3e1bf95be39b" Feb 17 16:21:58 crc kubenswrapper[4672]: E0217 16:21:58.875439 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981" Feb 17 16:21:58 crc kubenswrapper[4672]: E0217 16:21:58.875826 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-distributor,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981,Command:[],Args:[-target=distributor -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml -config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:gossip-ring,HostPort:0,ContainerPort:7946,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-distributor-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-distributor-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:
false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hlvwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-distributor-585d9bcbc-gwjj7_openstack(2e52d03d-9616-4c46-b7c9-d090f4a43a93): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 16:21:58 crc kubenswrapper[4672]: E0217 16:21:58.878359 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-distributor\" with ErrImagePull: \"rpc error: code = Canceled 
desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-gwjj7" podUID="2e52d03d-9616-4c46-b7c9-d090f4a43a93" Feb 17 16:21:59 crc kubenswrapper[4672]: E0217 16:21:59.294203 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:41eda20b890c200ee7fce0b56b5d168445cd9a6486d560f39ce73d0704e03934" Feb 17 16:21:59 crc kubenswrapper[4672]: E0217 16:21:59.294438 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:gateway,Image:registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:41eda20b890c200ee7fce0b56b5d168445cd9a6486d560f39ce73d0704e03934,Command:[],Args:[--debug.name=lokistack-gateway --web.listen=0.0.0.0:8080 --web.internal.listen=0.0.0.0:8081 --web.healthchecks.url=https://localhost:8080 --log.level=warn --logs.read.endpoint=https://cloudkitty-lokistack-query-frontend-http.openstack.svc.cluster.local:3100 --logs.tail.endpoint=https://cloudkitty-lokistack-query-frontend-http.openstack.svc.cluster.local:3100 --logs.write.endpoint=https://cloudkitty-lokistack-distributor-http.openstack.svc.cluster.local:3100 --logs.write-timeout=4m0s --rbac.config=/etc/lokistack-gateway/rbac.yaml --tenants.config=/etc/lokistack-gateway/tenants.yaml --server.read-timeout=48s --server.write-timeout=6m0s --tls.min-version=VersionTLS12 --tls.server.cert-file=/var/run/tls/http/server/tls.crt --tls.server.key-file=/var/run/tls/http/server/tls.key --tls.healthchecks.server-ca-file=/var/run/ca/server/service-ca.crt --tls.healthchecks.server-name=cloudkitty-lokistack-gateway-http.openstack.svc.cluster.local --tls.internal.server.cert-file=/var/run/tls/http/server/tls.crt --tls.internal.server.key-file=/var/run/tls/http/server/tls.key --tls.min-version=VersionTLS12 
--tls.cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --logs.tls.ca-file=/var/run/ca/upstream/service-ca.crt --logs.tls.cert-file=/var/run/tls/http/upstream/tls.crt --logs.tls.key-file=/var/run/tls/http/upstream/tls.key --tls.client-auth-type=RequestClientCert],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},ContainerPort{Name:public,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rbac,ReadOnly:true,MountPath:/etc/lokistack-gateway/rbac.yaml,SubPath:rbac.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tenants,ReadOnly:true,MountPath:/etc/lokistack-gateway/tenants.yaml,SubPath:tenants.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lokistack-gateway,ReadOnly:true,MountPath:/etc/lokistack-gateway/lokistack-gateway.rego,SubPath:lokistack-gateway.rego,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-secret,ReadOnly:true,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-gateway-client-http,ReadOnly:true,MountPath:/var/run/tls/http/upstream,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:true,MountPath:/var/run/ca/upstream,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-gateway-ca-bundle,ReadOnly:true,MountPath:/var/run/ca/server,SubPath:,MountProp
agation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-ca-bundle,ReadOnly:false,MountPath:/var/run/tenants-ca/cloudkitty,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cfmwm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/live,Port:{0 8081 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8081 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:12,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-gateway-7f8685b49f-zpb9n_openstack(41bcd30f-d987-4e6c-ab80-4bff10853442): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 16:21:59 crc kubenswrapper[4672]: E0217 16:21:59.296098 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"gateway\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-zpb9n" podUID="41bcd30f-d987-4e6c-ab80-4bff10853442" Feb 17 16:21:59 crc kubenswrapper[4672]: E0217 16:21:59.304707 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:41eda20b890c200ee7fce0b56b5d168445cd9a6486d560f39ce73d0704e03934" Feb 17 16:21:59 crc kubenswrapper[4672]: E0217 16:21:59.304898 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:gateway,Image:registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:41eda20b890c200ee7fce0b56b5d168445cd9a6486d560f39ce73d0704e03934,Command:[],Args:[--debug.name=lokistack-gateway --web.listen=0.0.0.0:8080 --web.internal.listen=0.0.0.0:8081 --web.healthchecks.url=https://localhost:8080 --log.level=warn --logs.read.endpoint=https://cloudkitty-lokistack-query-frontend-http.openstack.svc.cluster.local:3100 --logs.tail.endpoint=https://cloudkitty-lokistack-query-frontend-http.openstack.svc.cluster.local:3100 --logs.write.endpoint=https://cloudkitty-lokistack-distributor-http.openstack.svc.cluster.local:3100 --logs.write-timeout=4m0s --rbac.config=/etc/lokistack-gateway/rbac.yaml --tenants.config=/etc/lokistack-gateway/tenants.yaml --server.read-timeout=48s --server.write-timeout=6m0s --tls.min-version=VersionTLS12 --tls.server.cert-file=/var/run/tls/http/server/tls.crt --tls.server.key-file=/var/run/tls/http/server/tls.key --tls.healthchecks.server-ca-file=/var/run/ca/server/service-ca.crt --tls.healthchecks.server-name=cloudkitty-lokistack-gateway-http.openstack.svc.cluster.local --tls.internal.server.cert-file=/var/run/tls/http/server/tls.crt 
--tls.internal.server.key-file=/var/run/tls/http/server/tls.key --tls.min-version=VersionTLS12 --tls.cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --logs.tls.ca-file=/var/run/ca/upstream/service-ca.crt --logs.tls.cert-file=/var/run/tls/http/upstream/tls.crt --logs.tls.key-file=/var/run/tls/http/upstream/tls.key --tls.client-auth-type=RequestClientCert],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},ContainerPort{Name:public,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rbac,ReadOnly:true,MountPath:/etc/lokistack-gateway/rbac.yaml,SubPath:rbac.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tenants,ReadOnly:true,MountPath:/etc/lokistack-gateway/tenants.yaml,SubPath:tenants.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lokistack-gateway,ReadOnly:true,MountPath:/etc/lokistack-gateway/lokistack-gateway.rego,SubPath:lokistack-gateway.rego,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-secret,ReadOnly:true,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-gateway-client-http,ReadOnly:true,MountPath:/var/run/tls/http/upstream,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:true,MountPath:/var/run/ca/upstream,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloud
kitty-lokistack-gateway-ca-bundle,ReadOnly:true,MountPath:/var/run/ca/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-ca-bundle,ReadOnly:false,MountPath:/var/run/tenants-ca/cloudkitty,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mqqgg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/live,Port:{0 8081 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8081 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:12,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-gateway-7f8685b49f-hszn6_openstack(b01fa86f-90fb-4b04-9bea-681cb6385a05): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 16:21:59 crc kubenswrapper[4672]: E0217 
16:21:59.306092 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gateway\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hszn6" podUID="b01fa86f-90fb-4b04-9bea-681cb6385a05" Feb 17 16:21:59 crc kubenswrapper[4672]: E0217 16:21:59.319389 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981" Feb 17 16:21:59 crc kubenswrapper[4672]: E0217 16:21:59.320637 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-query-frontend,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981,Command:[],Args:[-target=query-frontend -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml 
-config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-query-frontend-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-query-frontend-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zxm55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr_openstack(2dcda2dc-3e7d-45a5-b95e-cd4b5242b1cf): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 16:21:59 crc kubenswrapper[4672]: E0217 16:21:59.321857 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-query-frontend\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr" podUID="2dcda2dc-3e7d-45a5-b95e-cd4b5242b1cf" Feb 17 16:21:59 crc kubenswrapper[4672]: I0217 16:21:59.531855 4672 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-666b6646f7-284jx" podUID="d7d4f70d-2ce7-493f-bfe4-53b8157d295c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.105:5353: i/o timeout" Feb 17 16:21:59 crc kubenswrapper[4672]: E0217 16:21:59.534422 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-query-frontend\" with 
ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981\\\"\"" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr" podUID="2dcda2dc-3e7d-45a5-b95e-cd4b5242b1cf" Feb 17 16:21:59 crc kubenswrapper[4672]: E0217 16:21:59.537719 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-compactor\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981\\\"\"" pod="openstack/cloudkitty-lokistack-compactor-0" podUID="e6cf604e-3c10-4dd3-b6a7-6e6126705e3c" Feb 17 16:21:59 crc kubenswrapper[4672]: E0217 16:21:59.537805 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-querier\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981\\\"\"" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-r97r4" podUID="7ce7f56b-68cd-42a8-bbfe-588269b90802" Feb 17 16:21:59 crc kubenswrapper[4672]: E0217 16:21:59.537853 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-ingester\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981\\\"\"" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="3acacae4-cbf8-43e1-a2af-3e1bf95be39b" Feb 17 16:21:59 crc kubenswrapper[4672]: E0217 16:21:59.537900 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-distributor\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981\\\"\"" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-gwjj7" podUID="2e52d03d-9616-4c46-b7c9-d090f4a43a93" Feb 17 16:21:59 crc kubenswrapper[4672]: E0217 16:21:59.537950 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gateway\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:41eda20b890c200ee7fce0b56b5d168445cd9a6486d560f39ce73d0704e03934\\\"\"" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-zpb9n" podUID="41bcd30f-d987-4e6c-ab80-4bff10853442" Feb 17 16:21:59 crc kubenswrapper[4672]: E0217 16:21:59.537994 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gateway\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:41eda20b890c200ee7fce0b56b5d168445cd9a6486d560f39ce73d0704e03934\\\"\"" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hszn6" podUID="b01fa86f-90fb-4b04-9bea-681cb6385a05" Feb 17 16:22:00 crc kubenswrapper[4672]: I0217 16:22:00.008445 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7d4f70d-2ce7-493f-bfe4-53b8157d295c" path="/var/lib/kubelet/pods/d7d4f70d-2ce7-493f-bfe4-53b8157d295c/volumes" Feb 17 16:22:00 crc kubenswrapper[4672]: I0217 16:22:00.143112 4672 scope.go:117] "RemoveContainer" containerID="e6c35cf61f79539a5088ee1b53ccb7a89817689a5536f14f87b481b7c40ea1ea" Feb 17 16:22:01 crc kubenswrapper[4672]: E0217 16:22:01.531494 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 17 16:22:01 crc kubenswrapper[4672]: E0217 16:22:01.531760 4672 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 17 16:22:01 crc kubenswrapper[4672]: E0217 16:22:01.531881 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wrvq5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(0494473e-5e65-47bf-b3a3-6d8c7b27243f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 16:22:01 crc kubenswrapper[4672]: E0217 16:22:01.532986 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="0494473e-5e65-47bf-b3a3-6d8c7b27243f" Feb 17 16:22:01 crc kubenswrapper[4672]: E0217 16:22:01.559759 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="0494473e-5e65-47bf-b3a3-6d8c7b27243f" Feb 17 16:22:02 crc kubenswrapper[4672]: I0217 16:22:02.575035 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovsdbserver-nb-0" event={"ID":"68117379-9c1b-497f-8d3a-39bddb5a76dc","Type":"ContainerStarted","Data":"eba068fc724800bdec9048867ca5198193872d9e656798cfe9d124b433b5af1f"} Feb 17 16:22:02 crc kubenswrapper[4672]: I0217 16:22:02.583305 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"44577c92-aff9-433c-aece-3021a8e85377","Type":"ContainerStarted","Data":"2c5899fd2b72945e52725bd91c71094a4f7a888724a8b5b8a9c507bac03ee8c0"} Feb 17 16:22:02 crc kubenswrapper[4672]: I0217 16:22:02.586362 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"abbf2ccd-ce83-432b-9e9d-7f39d2483aee","Type":"ContainerStarted","Data":"a119f60ef650e0c89abfca26adb359b48f81e95b15dd05e3085821400c3214d2"} Feb 17 16:22:02 crc kubenswrapper[4672]: I0217 16:22:02.587976 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 17 16:22:02 crc kubenswrapper[4672]: I0217 16:22:02.591133 4672 generic.go:334] "Generic (PLEG): container finished" podID="a3267a9e-18a1-49f9-bda5-8dcb1467446a" containerID="69e4dd1cd17200c2b49c68ad6361229bbea8fa8b4afa40b72decab7fd7fe795c" exitCode=0 Feb 17 16:22:02 crc kubenswrapper[4672]: I0217 16:22:02.591200 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-8nlcn" event={"ID":"a3267a9e-18a1-49f9-bda5-8dcb1467446a","Type":"ContainerDied","Data":"69e4dd1cd17200c2b49c68ad6361229bbea8fa8b4afa40b72decab7fd7fe795c"} Feb 17 16:22:02 crc kubenswrapper[4672]: I0217 16:22:02.601423 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"73efd99e-65c1-4c17-90aa-562d35719b17","Type":"ContainerStarted","Data":"56dd440fb9a5f7065b5a48f31a22741ce1d3f0a91ce82c0092383e94a77e0850"} Feb 17 16:22:02 crc kubenswrapper[4672]: I0217 16:22:02.604455 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:22:02 crc kubenswrapper[4672]: I0217 16:22:02.609124 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=22.194300991 podStartE2EDuration="44.609103403s" podCreationTimestamp="2026-02-17 16:21:18 +0000 UTC" firstStartedPulling="2026-02-17 16:21:37.946547658 +0000 UTC m=+1106.700636390" lastFinishedPulling="2026-02-17 16:22:00.36135007 +0000 UTC m=+1129.115438802" observedRunningTime="2026-02-17 16:22:02.605714773 +0000 UTC m=+1131.359803515" watchObservedRunningTime="2026-02-17 16:22:02.609103403 +0000 UTC m=+1131.363192145" Feb 17 16:22:02 crc kubenswrapper[4672]: I0217 16:22:02.651823 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-index-gateway-0" podStartSLOduration=10.453039082 podStartE2EDuration="31.65180313s" podCreationTimestamp="2026-02-17 16:21:31 +0000 UTC" firstStartedPulling="2026-02-17 16:21:39.276715646 +0000 UTC m=+1108.030804378" lastFinishedPulling="2026-02-17 16:22:00.475479694 +0000 UTC m=+1129.229568426" observedRunningTime="2026-02-17 16:22:02.649124419 +0000 UTC m=+1131.403213161" watchObservedRunningTime="2026-02-17 16:22:02.65180313 +0000 UTC m=+1131.405891872" Feb 17 16:22:03 crc kubenswrapper[4672]: I0217 16:22:03.612286 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5467b054-ae2f-4852-8d68-f9ba7cd2bdab","Type":"ContainerStarted","Data":"87ffd86f4e0157f11b7f529a9b0619f55b4daab09253878dc6cec5c5e545a0d2"} Feb 17 16:22:03 crc kubenswrapper[4672]: I0217 16:22:03.613870 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3068e639-1b58-4971-bf3e-c321ff88289b","Type":"ContainerStarted","Data":"c6fb63d9f2a376c50007c407a43b299fc08c9519b4a5c7f6c3e24d766cae0726"} Feb 17 16:22:03 crc kubenswrapper[4672]: I0217 16:22:03.616053 4672 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-8nlcn" event={"ID":"a3267a9e-18a1-49f9-bda5-8dcb1467446a","Type":"ContainerStarted","Data":"33970c8cfa5da412e98e853ff39d0ef938b63dd81a4594a2ea0f6886cf6e5118"} Feb 17 16:22:04 crc kubenswrapper[4672]: I0217 16:22:04.628649 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"68117379-9c1b-497f-8d3a-39bddb5a76dc","Type":"ContainerStarted","Data":"cabda2992259038d447f89ad7395bd4ef4944ae065a902673bcba0716fe70ee9"} Feb 17 16:22:04 crc kubenswrapper[4672]: I0217 16:22:04.631060 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"44577c92-aff9-433c-aece-3021a8e85377","Type":"ContainerStarted","Data":"cb1c9939f1c09b1c503fa6b52b9ed8f22481a465157b08ee61583cc087dc0183"} Feb 17 16:22:04 crc kubenswrapper[4672]: I0217 16:22:04.633654 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-8nlcn" event={"ID":"a3267a9e-18a1-49f9-bda5-8dcb1467446a","Type":"ContainerStarted","Data":"d3fbafb7a1f087d523aba6d6f31f36860e499f242cc4f8b73354b8eb429d66eb"} Feb 17 16:22:04 crc kubenswrapper[4672]: I0217 16:22:04.634820 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-8nlcn" Feb 17 16:22:04 crc kubenswrapper[4672]: I0217 16:22:04.658159 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=13.878894384 podStartE2EDuration="37.658131849s" podCreationTimestamp="2026-02-17 16:21:27 +0000 UTC" firstStartedPulling="2026-02-17 16:21:39.953393831 +0000 UTC m=+1108.707482563" lastFinishedPulling="2026-02-17 16:22:03.732631296 +0000 UTC m=+1132.486720028" observedRunningTime="2026-02-17 16:22:04.651289579 +0000 UTC m=+1133.405378321" watchObservedRunningTime="2026-02-17 16:22:04.658131849 +0000 UTC m=+1133.412220621" Feb 17 16:22:04 crc kubenswrapper[4672]: I0217 16:22:04.674683 4672 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-8nlcn" Feb 17 16:22:04 crc kubenswrapper[4672]: I0217 16:22:04.688754 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-8nlcn" podStartSLOduration=19.234916207 podStartE2EDuration="40.688721277s" podCreationTimestamp="2026-02-17 16:21:24 +0000 UTC" firstStartedPulling="2026-02-17 16:21:38.908450664 +0000 UTC m=+1107.662539396" lastFinishedPulling="2026-02-17 16:22:00.362255734 +0000 UTC m=+1129.116344466" observedRunningTime="2026-02-17 16:22:04.683469988 +0000 UTC m=+1133.437558760" watchObservedRunningTime="2026-02-17 16:22:04.688721277 +0000 UTC m=+1133.442810049" Feb 17 16:22:04 crc kubenswrapper[4672]: I0217 16:22:04.719446 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=15.484816611 podStartE2EDuration="38.719417377s" podCreationTimestamp="2026-02-17 16:21:26 +0000 UTC" firstStartedPulling="2026-02-17 16:21:40.504931052 +0000 UTC m=+1109.259019784" lastFinishedPulling="2026-02-17 16:22:03.739531828 +0000 UTC m=+1132.493620550" observedRunningTime="2026-02-17 16:22:04.705661694 +0000 UTC m=+1133.459750456" watchObservedRunningTime="2026-02-17 16:22:04.719417377 +0000 UTC m=+1133.473506129" Feb 17 16:22:05 crc kubenswrapper[4672]: I0217 16:22:05.278870 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 17 16:22:05 crc kubenswrapper[4672]: I0217 16:22:05.352673 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 17 16:22:05 crc kubenswrapper[4672]: I0217 16:22:05.644204 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 17 16:22:06 crc kubenswrapper[4672]: I0217 16:22:06.715862 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/ovsdbserver-sb-0" Feb 17 16:22:06 crc kubenswrapper[4672]: I0217 16:22:06.743095 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 17 16:22:06 crc kubenswrapper[4672]: I0217 16:22:06.786075 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 17 16:22:06 crc kubenswrapper[4672]: I0217 16:22:06.965395 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-54hvt"] Feb 17 16:22:06 crc kubenswrapper[4672]: E0217 16:22:06.965833 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7d4f70d-2ce7-493f-bfe4-53b8157d295c" containerName="init" Feb 17 16:22:06 crc kubenswrapper[4672]: I0217 16:22:06.965853 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7d4f70d-2ce7-493f-bfe4-53b8157d295c" containerName="init" Feb 17 16:22:06 crc kubenswrapper[4672]: E0217 16:22:06.965884 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7d4f70d-2ce7-493f-bfe4-53b8157d295c" containerName="dnsmasq-dns" Feb 17 16:22:06 crc kubenswrapper[4672]: I0217 16:22:06.965893 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7d4f70d-2ce7-493f-bfe4-53b8157d295c" containerName="dnsmasq-dns" Feb 17 16:22:06 crc kubenswrapper[4672]: I0217 16:22:06.967198 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7d4f70d-2ce7-493f-bfe4-53b8157d295c" containerName="dnsmasq-dns" Feb 17 16:22:06 crc kubenswrapper[4672]: I0217 16:22:06.968392 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-54hvt" Feb 17 16:22:06 crc kubenswrapper[4672]: I0217 16:22:06.970386 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 17 16:22:06 crc kubenswrapper[4672]: I0217 16:22:06.974917 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-54hvt"] Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.023417 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-dw5sq"] Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.024474 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-dw5sq" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.027588 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.030306 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-dw5sq"] Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.107058 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce2af7d6-0f80-4c6a-90d5-89dba254991f-combined-ca-bundle\") pod \"ovn-controller-metrics-dw5sq\" (UID: \"ce2af7d6-0f80-4c6a-90d5-89dba254991f\") " pod="openstack/ovn-controller-metrics-dw5sq" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.107108 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0851f31-7e3d-4817-9c3b-59c29d1ef858-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-54hvt\" (UID: \"a0851f31-7e3d-4817-9c3b-59c29d1ef858\") " pod="openstack/dnsmasq-dns-7fd796d7df-54hvt" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.107136 4672 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0851f31-7e3d-4817-9c3b-59c29d1ef858-config\") pod \"dnsmasq-dns-7fd796d7df-54hvt\" (UID: \"a0851f31-7e3d-4817-9c3b-59c29d1ef858\") " pod="openstack/dnsmasq-dns-7fd796d7df-54hvt" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.107212 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce2af7d6-0f80-4c6a-90d5-89dba254991f-config\") pod \"ovn-controller-metrics-dw5sq\" (UID: \"ce2af7d6-0f80-4c6a-90d5-89dba254991f\") " pod="openstack/ovn-controller-metrics-dw5sq" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.107263 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ce2af7d6-0f80-4c6a-90d5-89dba254991f-ovn-rundir\") pod \"ovn-controller-metrics-dw5sq\" (UID: \"ce2af7d6-0f80-4c6a-90d5-89dba254991f\") " pod="openstack/ovn-controller-metrics-dw5sq" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.107391 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ce2af7d6-0f80-4c6a-90d5-89dba254991f-ovs-rundir\") pod \"ovn-controller-metrics-dw5sq\" (UID: \"ce2af7d6-0f80-4c6a-90d5-89dba254991f\") " pod="openstack/ovn-controller-metrics-dw5sq" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.107486 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grk4h\" (UniqueName: \"kubernetes.io/projected/ce2af7d6-0f80-4c6a-90d5-89dba254991f-kube-api-access-grk4h\") pod \"ovn-controller-metrics-dw5sq\" (UID: \"ce2af7d6-0f80-4c6a-90d5-89dba254991f\") " pod="openstack/ovn-controller-metrics-dw5sq" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.107591 4672 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4dbw\" (UniqueName: \"kubernetes.io/projected/a0851f31-7e3d-4817-9c3b-59c29d1ef858-kube-api-access-j4dbw\") pod \"dnsmasq-dns-7fd796d7df-54hvt\" (UID: \"a0851f31-7e3d-4817-9c3b-59c29d1ef858\") " pod="openstack/dnsmasq-dns-7fd796d7df-54hvt" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.107627 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce2af7d6-0f80-4c6a-90d5-89dba254991f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dw5sq\" (UID: \"ce2af7d6-0f80-4c6a-90d5-89dba254991f\") " pod="openstack/ovn-controller-metrics-dw5sq" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.107708 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0851f31-7e3d-4817-9c3b-59c29d1ef858-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-54hvt\" (UID: \"a0851f31-7e3d-4817-9c3b-59c29d1ef858\") " pod="openstack/dnsmasq-dns-7fd796d7df-54hvt" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.210163 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ce2af7d6-0f80-4c6a-90d5-89dba254991f-ovs-rundir\") pod \"ovn-controller-metrics-dw5sq\" (UID: \"ce2af7d6-0f80-4c6a-90d5-89dba254991f\") " pod="openstack/ovn-controller-metrics-dw5sq" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.210232 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grk4h\" (UniqueName: \"kubernetes.io/projected/ce2af7d6-0f80-4c6a-90d5-89dba254991f-kube-api-access-grk4h\") pod \"ovn-controller-metrics-dw5sq\" (UID: \"ce2af7d6-0f80-4c6a-90d5-89dba254991f\") " pod="openstack/ovn-controller-metrics-dw5sq" Feb 17 16:22:07 crc 
kubenswrapper[4672]: I0217 16:22:07.210271 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4dbw\" (UniqueName: \"kubernetes.io/projected/a0851f31-7e3d-4817-9c3b-59c29d1ef858-kube-api-access-j4dbw\") pod \"dnsmasq-dns-7fd796d7df-54hvt\" (UID: \"a0851f31-7e3d-4817-9c3b-59c29d1ef858\") " pod="openstack/dnsmasq-dns-7fd796d7df-54hvt" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.210307 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce2af7d6-0f80-4c6a-90d5-89dba254991f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dw5sq\" (UID: \"ce2af7d6-0f80-4c6a-90d5-89dba254991f\") " pod="openstack/ovn-controller-metrics-dw5sq" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.210328 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0851f31-7e3d-4817-9c3b-59c29d1ef858-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-54hvt\" (UID: \"a0851f31-7e3d-4817-9c3b-59c29d1ef858\") " pod="openstack/dnsmasq-dns-7fd796d7df-54hvt" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.210348 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce2af7d6-0f80-4c6a-90d5-89dba254991f-combined-ca-bundle\") pod \"ovn-controller-metrics-dw5sq\" (UID: \"ce2af7d6-0f80-4c6a-90d5-89dba254991f\") " pod="openstack/ovn-controller-metrics-dw5sq" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.210382 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0851f31-7e3d-4817-9c3b-59c29d1ef858-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-54hvt\" (UID: \"a0851f31-7e3d-4817-9c3b-59c29d1ef858\") " pod="openstack/dnsmasq-dns-7fd796d7df-54hvt" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 
16:22:07.210405 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0851f31-7e3d-4817-9c3b-59c29d1ef858-config\") pod \"dnsmasq-dns-7fd796d7df-54hvt\" (UID: \"a0851f31-7e3d-4817-9c3b-59c29d1ef858\") " pod="openstack/dnsmasq-dns-7fd796d7df-54hvt" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.210484 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce2af7d6-0f80-4c6a-90d5-89dba254991f-config\") pod \"ovn-controller-metrics-dw5sq\" (UID: \"ce2af7d6-0f80-4c6a-90d5-89dba254991f\") " pod="openstack/ovn-controller-metrics-dw5sq" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.210493 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ce2af7d6-0f80-4c6a-90d5-89dba254991f-ovs-rundir\") pod \"ovn-controller-metrics-dw5sq\" (UID: \"ce2af7d6-0f80-4c6a-90d5-89dba254991f\") " pod="openstack/ovn-controller-metrics-dw5sq" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.210574 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ce2af7d6-0f80-4c6a-90d5-89dba254991f-ovn-rundir\") pod \"ovn-controller-metrics-dw5sq\" (UID: \"ce2af7d6-0f80-4c6a-90d5-89dba254991f\") " pod="openstack/ovn-controller-metrics-dw5sq" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.210740 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ce2af7d6-0f80-4c6a-90d5-89dba254991f-ovn-rundir\") pod \"ovn-controller-metrics-dw5sq\" (UID: \"ce2af7d6-0f80-4c6a-90d5-89dba254991f\") " pod="openstack/ovn-controller-metrics-dw5sq" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.211498 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/a0851f31-7e3d-4817-9c3b-59c29d1ef858-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-54hvt\" (UID: \"a0851f31-7e3d-4817-9c3b-59c29d1ef858\") " pod="openstack/dnsmasq-dns-7fd796d7df-54hvt" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.211536 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0851f31-7e3d-4817-9c3b-59c29d1ef858-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-54hvt\" (UID: \"a0851f31-7e3d-4817-9c3b-59c29d1ef858\") " pod="openstack/dnsmasq-dns-7fd796d7df-54hvt" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.211543 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0851f31-7e3d-4817-9c3b-59c29d1ef858-config\") pod \"dnsmasq-dns-7fd796d7df-54hvt\" (UID: \"a0851f31-7e3d-4817-9c3b-59c29d1ef858\") " pod="openstack/dnsmasq-dns-7fd796d7df-54hvt" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.211638 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce2af7d6-0f80-4c6a-90d5-89dba254991f-config\") pod \"ovn-controller-metrics-dw5sq\" (UID: \"ce2af7d6-0f80-4c6a-90d5-89dba254991f\") " pod="openstack/ovn-controller-metrics-dw5sq" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.216840 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce2af7d6-0f80-4c6a-90d5-89dba254991f-combined-ca-bundle\") pod \"ovn-controller-metrics-dw5sq\" (UID: \"ce2af7d6-0f80-4c6a-90d5-89dba254991f\") " pod="openstack/ovn-controller-metrics-dw5sq" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.219166 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce2af7d6-0f80-4c6a-90d5-89dba254991f-metrics-certs-tls-certs\") pod 
\"ovn-controller-metrics-dw5sq\" (UID: \"ce2af7d6-0f80-4c6a-90d5-89dba254991f\") " pod="openstack/ovn-controller-metrics-dw5sq" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.264530 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4dbw\" (UniqueName: \"kubernetes.io/projected/a0851f31-7e3d-4817-9c3b-59c29d1ef858-kube-api-access-j4dbw\") pod \"dnsmasq-dns-7fd796d7df-54hvt\" (UID: \"a0851f31-7e3d-4817-9c3b-59c29d1ef858\") " pod="openstack/dnsmasq-dns-7fd796d7df-54hvt" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.268088 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grk4h\" (UniqueName: \"kubernetes.io/projected/ce2af7d6-0f80-4c6a-90d5-89dba254991f-kube-api-access-grk4h\") pod \"ovn-controller-metrics-dw5sq\" (UID: \"ce2af7d6-0f80-4c6a-90d5-89dba254991f\") " pod="openstack/ovn-controller-metrics-dw5sq" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.283892 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-54hvt" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.339175 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-dw5sq" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.373061 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-54hvt"] Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.439823 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-l6fs2"] Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.441180 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-l6fs2" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.443808 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.515489 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g74md\" (UniqueName: \"kubernetes.io/projected/56b8ee2d-e701-4705-b364-3183b1db7772-kube-api-access-g74md\") pod \"dnsmasq-dns-86db49b7ff-l6fs2\" (UID: \"56b8ee2d-e701-4705-b364-3183b1db7772\") " pod="openstack/dnsmasq-dns-86db49b7ff-l6fs2" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.515595 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56b8ee2d-e701-4705-b364-3183b1db7772-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-l6fs2\" (UID: \"56b8ee2d-e701-4705-b364-3183b1db7772\") " pod="openstack/dnsmasq-dns-86db49b7ff-l6fs2" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.515649 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56b8ee2d-e701-4705-b364-3183b1db7772-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-l6fs2\" (UID: \"56b8ee2d-e701-4705-b364-3183b1db7772\") " pod="openstack/dnsmasq-dns-86db49b7ff-l6fs2" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.515702 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56b8ee2d-e701-4705-b364-3183b1db7772-config\") pod \"dnsmasq-dns-86db49b7ff-l6fs2\" (UID: \"56b8ee2d-e701-4705-b364-3183b1db7772\") " pod="openstack/dnsmasq-dns-86db49b7ff-l6fs2" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.515738 4672 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56b8ee2d-e701-4705-b364-3183b1db7772-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-l6fs2\" (UID: \"56b8ee2d-e701-4705-b364-3183b1db7772\") " pod="openstack/dnsmasq-dns-86db49b7ff-l6fs2" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.568135 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-l6fs2"] Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.625633 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56b8ee2d-e701-4705-b364-3183b1db7772-config\") pod \"dnsmasq-dns-86db49b7ff-l6fs2\" (UID: \"56b8ee2d-e701-4705-b364-3183b1db7772\") " pod="openstack/dnsmasq-dns-86db49b7ff-l6fs2" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.626006 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56b8ee2d-e701-4705-b364-3183b1db7772-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-l6fs2\" (UID: \"56b8ee2d-e701-4705-b364-3183b1db7772\") " pod="openstack/dnsmasq-dns-86db49b7ff-l6fs2" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.626059 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g74md\" (UniqueName: \"kubernetes.io/projected/56b8ee2d-e701-4705-b364-3183b1db7772-kube-api-access-g74md\") pod \"dnsmasq-dns-86db49b7ff-l6fs2\" (UID: \"56b8ee2d-e701-4705-b364-3183b1db7772\") " pod="openstack/dnsmasq-dns-86db49b7ff-l6fs2" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.626115 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56b8ee2d-e701-4705-b364-3183b1db7772-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-l6fs2\" (UID: \"56b8ee2d-e701-4705-b364-3183b1db7772\") " 
pod="openstack/dnsmasq-dns-86db49b7ff-l6fs2" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.626180 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56b8ee2d-e701-4705-b364-3183b1db7772-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-l6fs2\" (UID: \"56b8ee2d-e701-4705-b364-3183b1db7772\") " pod="openstack/dnsmasq-dns-86db49b7ff-l6fs2" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.627573 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56b8ee2d-e701-4705-b364-3183b1db7772-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-l6fs2\" (UID: \"56b8ee2d-e701-4705-b364-3183b1db7772\") " pod="openstack/dnsmasq-dns-86db49b7ff-l6fs2" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.628048 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56b8ee2d-e701-4705-b364-3183b1db7772-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-l6fs2\" (UID: \"56b8ee2d-e701-4705-b364-3183b1db7772\") " pod="openstack/dnsmasq-dns-86db49b7ff-l6fs2" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.637855 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56b8ee2d-e701-4705-b364-3183b1db7772-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-l6fs2\" (UID: \"56b8ee2d-e701-4705-b364-3183b1db7772\") " pod="openstack/dnsmasq-dns-86db49b7ff-l6fs2" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.637888 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56b8ee2d-e701-4705-b364-3183b1db7772-config\") pod \"dnsmasq-dns-86db49b7ff-l6fs2\" (UID: \"56b8ee2d-e701-4705-b364-3183b1db7772\") " pod="openstack/dnsmasq-dns-86db49b7ff-l6fs2" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.659452 4672 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g74md\" (UniqueName: \"kubernetes.io/projected/56b8ee2d-e701-4705-b364-3183b1db7772-kube-api-access-g74md\") pod \"dnsmasq-dns-86db49b7ff-l6fs2\" (UID: \"56b8ee2d-e701-4705-b364-3183b1db7772\") " pod="openstack/dnsmasq-dns-86db49b7ff-l6fs2" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.679460 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.751112 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-dw5sq"] Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.752821 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.768411 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-l6fs2" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.943135 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.944887 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.957078 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-r88wk" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.957305 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.957439 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 17 16:22:07 crc kubenswrapper[4672]: I0217 16:22:07.957522 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 17 16:22:08 crc kubenswrapper[4672]: I0217 16:22:08.047842 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 17 16:22:08 crc kubenswrapper[4672]: I0217 16:22:08.051812 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d24cbf47-ec62-44c7-8e9e-1c93a52aabbc-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d24cbf47-ec62-44c7-8e9e-1c93a52aabbc\") " pod="openstack/ovn-northd-0" Feb 17 16:22:08 crc kubenswrapper[4672]: I0217 16:22:08.052093 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d24cbf47-ec62-44c7-8e9e-1c93a52aabbc-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d24cbf47-ec62-44c7-8e9e-1c93a52aabbc\") " pod="openstack/ovn-northd-0" Feb 17 16:22:08 crc kubenswrapper[4672]: I0217 16:22:08.052166 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d24cbf47-ec62-44c7-8e9e-1c93a52aabbc-config\") pod \"ovn-northd-0\" (UID: \"d24cbf47-ec62-44c7-8e9e-1c93a52aabbc\") " pod="openstack/ovn-northd-0" Feb 
17 16:22:08 crc kubenswrapper[4672]: I0217 16:22:08.052315 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dhjg\" (UniqueName: \"kubernetes.io/projected/d24cbf47-ec62-44c7-8e9e-1c93a52aabbc-kube-api-access-2dhjg\") pod \"ovn-northd-0\" (UID: \"d24cbf47-ec62-44c7-8e9e-1c93a52aabbc\") " pod="openstack/ovn-northd-0" Feb 17 16:22:08 crc kubenswrapper[4672]: I0217 16:22:08.052341 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d24cbf47-ec62-44c7-8e9e-1c93a52aabbc-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d24cbf47-ec62-44c7-8e9e-1c93a52aabbc\") " pod="openstack/ovn-northd-0" Feb 17 16:22:08 crc kubenswrapper[4672]: I0217 16:22:08.052428 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d24cbf47-ec62-44c7-8e9e-1c93a52aabbc-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d24cbf47-ec62-44c7-8e9e-1c93a52aabbc\") " pod="openstack/ovn-northd-0" Feb 17 16:22:08 crc kubenswrapper[4672]: I0217 16:22:08.052469 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d24cbf47-ec62-44c7-8e9e-1c93a52aabbc-scripts\") pod \"ovn-northd-0\" (UID: \"d24cbf47-ec62-44c7-8e9e-1c93a52aabbc\") " pod="openstack/ovn-northd-0" Feb 17 16:22:08 crc kubenswrapper[4672]: I0217 16:22:08.120623 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-54hvt"] Feb 17 16:22:08 crc kubenswrapper[4672]: I0217 16:22:08.154004 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d24cbf47-ec62-44c7-8e9e-1c93a52aabbc-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d24cbf47-ec62-44c7-8e9e-1c93a52aabbc\") " 
pod="openstack/ovn-northd-0" Feb 17 16:22:08 crc kubenswrapper[4672]: I0217 16:22:08.154071 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d24cbf47-ec62-44c7-8e9e-1c93a52aabbc-config\") pod \"ovn-northd-0\" (UID: \"d24cbf47-ec62-44c7-8e9e-1c93a52aabbc\") " pod="openstack/ovn-northd-0" Feb 17 16:22:08 crc kubenswrapper[4672]: I0217 16:22:08.154145 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dhjg\" (UniqueName: \"kubernetes.io/projected/d24cbf47-ec62-44c7-8e9e-1c93a52aabbc-kube-api-access-2dhjg\") pod \"ovn-northd-0\" (UID: \"d24cbf47-ec62-44c7-8e9e-1c93a52aabbc\") " pod="openstack/ovn-northd-0" Feb 17 16:22:08 crc kubenswrapper[4672]: I0217 16:22:08.154165 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d24cbf47-ec62-44c7-8e9e-1c93a52aabbc-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d24cbf47-ec62-44c7-8e9e-1c93a52aabbc\") " pod="openstack/ovn-northd-0" Feb 17 16:22:08 crc kubenswrapper[4672]: I0217 16:22:08.154214 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d24cbf47-ec62-44c7-8e9e-1c93a52aabbc-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d24cbf47-ec62-44c7-8e9e-1c93a52aabbc\") " pod="openstack/ovn-northd-0" Feb 17 16:22:08 crc kubenswrapper[4672]: I0217 16:22:08.154238 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d24cbf47-ec62-44c7-8e9e-1c93a52aabbc-scripts\") pod \"ovn-northd-0\" (UID: \"d24cbf47-ec62-44c7-8e9e-1c93a52aabbc\") " pod="openstack/ovn-northd-0" Feb 17 16:22:08 crc kubenswrapper[4672]: I0217 16:22:08.154291 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d24cbf47-ec62-44c7-8e9e-1c93a52aabbc-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d24cbf47-ec62-44c7-8e9e-1c93a52aabbc\") " pod="openstack/ovn-northd-0" Feb 17 16:22:08 crc kubenswrapper[4672]: I0217 16:22:08.155632 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d24cbf47-ec62-44c7-8e9e-1c93a52aabbc-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d24cbf47-ec62-44c7-8e9e-1c93a52aabbc\") " pod="openstack/ovn-northd-0" Feb 17 16:22:08 crc kubenswrapper[4672]: I0217 16:22:08.155688 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d24cbf47-ec62-44c7-8e9e-1c93a52aabbc-scripts\") pod \"ovn-northd-0\" (UID: \"d24cbf47-ec62-44c7-8e9e-1c93a52aabbc\") " pod="openstack/ovn-northd-0" Feb 17 16:22:08 crc kubenswrapper[4672]: I0217 16:22:08.155722 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d24cbf47-ec62-44c7-8e9e-1c93a52aabbc-config\") pod \"ovn-northd-0\" (UID: \"d24cbf47-ec62-44c7-8e9e-1c93a52aabbc\") " pod="openstack/ovn-northd-0" Feb 17 16:22:08 crc kubenswrapper[4672]: I0217 16:22:08.158295 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d24cbf47-ec62-44c7-8e9e-1c93a52aabbc-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d24cbf47-ec62-44c7-8e9e-1c93a52aabbc\") " pod="openstack/ovn-northd-0" Feb 17 16:22:08 crc kubenswrapper[4672]: I0217 16:22:08.159105 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d24cbf47-ec62-44c7-8e9e-1c93a52aabbc-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d24cbf47-ec62-44c7-8e9e-1c93a52aabbc\") " pod="openstack/ovn-northd-0" Feb 17 16:22:08 crc kubenswrapper[4672]: I0217 16:22:08.163594 4672 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d24cbf47-ec62-44c7-8e9e-1c93a52aabbc-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d24cbf47-ec62-44c7-8e9e-1c93a52aabbc\") " pod="openstack/ovn-northd-0" Feb 17 16:22:08 crc kubenswrapper[4672]: I0217 16:22:08.173192 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dhjg\" (UniqueName: \"kubernetes.io/projected/d24cbf47-ec62-44c7-8e9e-1c93a52aabbc-kube-api-access-2dhjg\") pod \"ovn-northd-0\" (UID: \"d24cbf47-ec62-44c7-8e9e-1c93a52aabbc\") " pod="openstack/ovn-northd-0" Feb 17 16:22:08 crc kubenswrapper[4672]: I0217 16:22:08.292812 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 17 16:22:08 crc kubenswrapper[4672]: I0217 16:22:08.426310 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-l6fs2"] Feb 17 16:22:08 crc kubenswrapper[4672]: I0217 16:22:08.687970 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-54hvt" event={"ID":"a0851f31-7e3d-4817-9c3b-59c29d1ef858","Type":"ContainerStarted","Data":"64c613526aad3f112732672fc47d0a757ce0a58e5c8a2fb001e4da97f4f261fe"} Feb 17 16:22:08 crc kubenswrapper[4672]: I0217 16:22:08.689309 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-dw5sq" event={"ID":"ce2af7d6-0f80-4c6a-90d5-89dba254991f","Type":"ContainerStarted","Data":"8857bfaa3eb9e9350c945e6279db516c68bc5e321760a7cdb6c3574ef30130fa"} Feb 17 16:22:08 crc kubenswrapper[4672]: I0217 16:22:08.923657 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 17 16:22:10 crc kubenswrapper[4672]: I0217 16:22:10.309776 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 17 16:22:10 crc kubenswrapper[4672]: I0217 16:22:10.715346 4672 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-l6fs2" event={"ID":"56b8ee2d-e701-4705-b364-3183b1db7772","Type":"ContainerStarted","Data":"b9fd7ae34090645cb9aef1eabac8afa3bde23e7c641fee87a2484dd05a5f3095"} Feb 17 16:22:10 crc kubenswrapper[4672]: I0217 16:22:10.717575 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d24cbf47-ec62-44c7-8e9e-1c93a52aabbc","Type":"ContainerStarted","Data":"15e21e55d39a37f45e0cb479f4ea4deaae34a1821f7f5c8b0ecf846375b3ef40"} Feb 17 16:22:11 crc kubenswrapper[4672]: I0217 16:22:11.506554 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-l6fs2"] Feb 17 16:22:11 crc kubenswrapper[4672]: I0217 16:22:11.543665 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-fm49z"] Feb 17 16:22:11 crc kubenswrapper[4672]: I0217 16:22:11.545307 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-fm49z" Feb 17 16:22:11 crc kubenswrapper[4672]: I0217 16:22:11.558934 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-fm49z"] Feb 17 16:22:11 crc kubenswrapper[4672]: I0217 16:22:11.643674 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4275129d-2c75-4df7-9c23-0f883ec0733d-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-fm49z\" (UID: \"4275129d-2c75-4df7-9c23-0f883ec0733d\") " pod="openstack/dnsmasq-dns-698758b865-fm49z" Feb 17 16:22:11 crc kubenswrapper[4672]: I0217 16:22:11.643760 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4275129d-2c75-4df7-9c23-0f883ec0733d-dns-svc\") pod \"dnsmasq-dns-698758b865-fm49z\" (UID: \"4275129d-2c75-4df7-9c23-0f883ec0733d\") " 
pod="openstack/dnsmasq-dns-698758b865-fm49z" Feb 17 16:22:11 crc kubenswrapper[4672]: I0217 16:22:11.643815 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqdpq\" (UniqueName: \"kubernetes.io/projected/4275129d-2c75-4df7-9c23-0f883ec0733d-kube-api-access-fqdpq\") pod \"dnsmasq-dns-698758b865-fm49z\" (UID: \"4275129d-2c75-4df7-9c23-0f883ec0733d\") " pod="openstack/dnsmasq-dns-698758b865-fm49z" Feb 17 16:22:11 crc kubenswrapper[4672]: I0217 16:22:11.643881 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4275129d-2c75-4df7-9c23-0f883ec0733d-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-fm49z\" (UID: \"4275129d-2c75-4df7-9c23-0f883ec0733d\") " pod="openstack/dnsmasq-dns-698758b865-fm49z" Feb 17 16:22:11 crc kubenswrapper[4672]: I0217 16:22:11.643922 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4275129d-2c75-4df7-9c23-0f883ec0733d-config\") pod \"dnsmasq-dns-698758b865-fm49z\" (UID: \"4275129d-2c75-4df7-9c23-0f883ec0733d\") " pod="openstack/dnsmasq-dns-698758b865-fm49z" Feb 17 16:22:11 crc kubenswrapper[4672]: I0217 16:22:11.733941 4672 generic.go:334] "Generic (PLEG): container finished" podID="56b8ee2d-e701-4705-b364-3183b1db7772" containerID="98bb6c8f0e0e55ea2a979833923ee4e243643af9c1422f4585e1b0f3935c8945" exitCode=0 Feb 17 16:22:11 crc kubenswrapper[4672]: I0217 16:22:11.734041 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-l6fs2" event={"ID":"56b8ee2d-e701-4705-b364-3183b1db7772","Type":"ContainerDied","Data":"98bb6c8f0e0e55ea2a979833923ee4e243643af9c1422f4585e1b0f3935c8945"} Feb 17 16:22:11 crc kubenswrapper[4672]: I0217 16:22:11.743586 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-dw5sq" 
event={"ID":"ce2af7d6-0f80-4c6a-90d5-89dba254991f","Type":"ContainerStarted","Data":"563cd0320fe25b47744e0f2e65788c653367fafa0490d58806ea45230041ea1d"} Feb 17 16:22:11 crc kubenswrapper[4672]: I0217 16:22:11.745223 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqdpq\" (UniqueName: \"kubernetes.io/projected/4275129d-2c75-4df7-9c23-0f883ec0733d-kube-api-access-fqdpq\") pod \"dnsmasq-dns-698758b865-fm49z\" (UID: \"4275129d-2c75-4df7-9c23-0f883ec0733d\") " pod="openstack/dnsmasq-dns-698758b865-fm49z" Feb 17 16:22:11 crc kubenswrapper[4672]: I0217 16:22:11.745385 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4275129d-2c75-4df7-9c23-0f883ec0733d-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-fm49z\" (UID: \"4275129d-2c75-4df7-9c23-0f883ec0733d\") " pod="openstack/dnsmasq-dns-698758b865-fm49z" Feb 17 16:22:11 crc kubenswrapper[4672]: I0217 16:22:11.745467 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4275129d-2c75-4df7-9c23-0f883ec0733d-config\") pod \"dnsmasq-dns-698758b865-fm49z\" (UID: \"4275129d-2c75-4df7-9c23-0f883ec0733d\") " pod="openstack/dnsmasq-dns-698758b865-fm49z" Feb 17 16:22:11 crc kubenswrapper[4672]: I0217 16:22:11.745603 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4275129d-2c75-4df7-9c23-0f883ec0733d-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-fm49z\" (UID: \"4275129d-2c75-4df7-9c23-0f883ec0733d\") " pod="openstack/dnsmasq-dns-698758b865-fm49z" Feb 17 16:22:11 crc kubenswrapper[4672]: I0217 16:22:11.745701 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4275129d-2c75-4df7-9c23-0f883ec0733d-dns-svc\") pod \"dnsmasq-dns-698758b865-fm49z\" (UID: 
\"4275129d-2c75-4df7-9c23-0f883ec0733d\") " pod="openstack/dnsmasq-dns-698758b865-fm49z" Feb 17 16:22:11 crc kubenswrapper[4672]: I0217 16:22:11.746212 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4275129d-2c75-4df7-9c23-0f883ec0733d-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-fm49z\" (UID: \"4275129d-2c75-4df7-9c23-0f883ec0733d\") " pod="openstack/dnsmasq-dns-698758b865-fm49z" Feb 17 16:22:11 crc kubenswrapper[4672]: I0217 16:22:11.746569 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"3acacae4-cbf8-43e1-a2af-3e1bf95be39b","Type":"ContainerStarted","Data":"0b21a4180d4db183620fd653ef85ad6d20277a6eb62fe64f8f1052842f2f9863"} Feb 17 16:22:11 crc kubenswrapper[4672]: I0217 16:22:11.746836 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4275129d-2c75-4df7-9c23-0f883ec0733d-dns-svc\") pod \"dnsmasq-dns-698758b865-fm49z\" (UID: \"4275129d-2c75-4df7-9c23-0f883ec0733d\") " pod="openstack/dnsmasq-dns-698758b865-fm49z" Feb 17 16:22:11 crc kubenswrapper[4672]: I0217 16:22:11.747003 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:22:11 crc kubenswrapper[4672]: I0217 16:22:11.747480 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4275129d-2c75-4df7-9c23-0f883ec0733d-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-fm49z\" (UID: \"4275129d-2c75-4df7-9c23-0f883ec0733d\") " pod="openstack/dnsmasq-dns-698758b865-fm49z" Feb 17 16:22:11 crc kubenswrapper[4672]: I0217 16:22:11.747685 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4275129d-2c75-4df7-9c23-0f883ec0733d-config\") pod \"dnsmasq-dns-698758b865-fm49z\" (UID: 
\"4275129d-2c75-4df7-9c23-0f883ec0733d\") " pod="openstack/dnsmasq-dns-698758b865-fm49z" Feb 17 16:22:11 crc kubenswrapper[4672]: I0217 16:22:11.753685 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"164bb24e-646b-4404-92f5-912254ac1421","Type":"ContainerStarted","Data":"6315cc9351c25c0f0b2cf5e5e03a533a33383ad16568663c4236db7beea01e75"} Feb 17 16:22:11 crc kubenswrapper[4672]: I0217 16:22:11.788010 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqdpq\" (UniqueName: \"kubernetes.io/projected/4275129d-2c75-4df7-9c23-0f883ec0733d-kube-api-access-fqdpq\") pod \"dnsmasq-dns-698758b865-fm49z\" (UID: \"4275129d-2c75-4df7-9c23-0f883ec0733d\") " pod="openstack/dnsmasq-dns-698758b865-fm49z" Feb 17 16:22:11 crc kubenswrapper[4672]: I0217 16:22:11.791175 4672 generic.go:334] "Generic (PLEG): container finished" podID="a0851f31-7e3d-4817-9c3b-59c29d1ef858" containerID="3af9e6b74d6adb7b92e3d0ecd6f8a57f40792a8badd868472818639808bccae4" exitCode=0 Feb 17 16:22:11 crc kubenswrapper[4672]: I0217 16:22:11.791215 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-54hvt" event={"ID":"a0851f31-7e3d-4817-9c3b-59c29d1ef858","Type":"ContainerDied","Data":"3af9e6b74d6adb7b92e3d0ecd6f8a57f40792a8badd868472818639808bccae4"} Feb 17 16:22:11 crc kubenswrapper[4672]: I0217 16:22:11.814289 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-dw5sq" podStartSLOduration=4.814272239 podStartE2EDuration="4.814272239s" podCreationTimestamp="2026-02-17 16:22:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:22:11.812862212 +0000 UTC m=+1140.566950944" watchObservedRunningTime="2026-02-17 16:22:11.814272239 +0000 UTC m=+1140.568360971" Feb 17 16:22:11 crc kubenswrapper[4672]: I0217 16:22:11.872926 
4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-fm49z" Feb 17 16:22:11 crc kubenswrapper[4672]: I0217 16:22:11.892840 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-ingester-0" podStartSLOduration=-9223371995.961958 podStartE2EDuration="40.892818593s" podCreationTimestamp="2026-02-17 16:21:31 +0000 UTC" firstStartedPulling="2026-02-17 16:21:39.247113755 +0000 UTC m=+1108.001202477" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:22:11.877614241 +0000 UTC m=+1140.631702973" watchObservedRunningTime="2026-02-17 16:22:11.892818593 +0000 UTC m=+1140.646907325" Feb 17 16:22:12 crc kubenswrapper[4672]: I0217 16:22:12.654602 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 17 16:22:12 crc kubenswrapper[4672]: I0217 16:22:12.666351 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 17 16:22:12 crc kubenswrapper[4672]: I0217 16:22:12.669392 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 17 16:22:12 crc kubenswrapper[4672]: I0217 16:22:12.669724 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-49mhh" Feb 17 16:22:12 crc kubenswrapper[4672]: I0217 16:22:12.669835 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 17 16:22:12 crc kubenswrapper[4672]: I0217 16:22:12.669957 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 17 16:22:12 crc kubenswrapper[4672]: I0217 16:22:12.680827 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 17 16:22:12 crc kubenswrapper[4672]: I0217 16:22:12.690723 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-54hvt" Feb 17 16:22:12 crc kubenswrapper[4672]: I0217 16:22:12.782340 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0851f31-7e3d-4817-9c3b-59c29d1ef858-ovsdbserver-nb\") pod \"a0851f31-7e3d-4817-9c3b-59c29d1ef858\" (UID: \"a0851f31-7e3d-4817-9c3b-59c29d1ef858\") " Feb 17 16:22:12 crc kubenswrapper[4672]: I0217 16:22:12.782735 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0851f31-7e3d-4817-9c3b-59c29d1ef858-config\") pod \"a0851f31-7e3d-4817-9c3b-59c29d1ef858\" (UID: \"a0851f31-7e3d-4817-9c3b-59c29d1ef858\") " Feb 17 16:22:12 crc kubenswrapper[4672]: I0217 16:22:12.782816 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4dbw\" (UniqueName: \"kubernetes.io/projected/a0851f31-7e3d-4817-9c3b-59c29d1ef858-kube-api-access-j4dbw\") pod \"a0851f31-7e3d-4817-9c3b-59c29d1ef858\" (UID: \"a0851f31-7e3d-4817-9c3b-59c29d1ef858\") " Feb 17 16:22:12 crc kubenswrapper[4672]: I0217 16:22:12.782904 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0851f31-7e3d-4817-9c3b-59c29d1ef858-dns-svc\") pod \"a0851f31-7e3d-4817-9c3b-59c29d1ef858\" (UID: \"a0851f31-7e3d-4817-9c3b-59c29d1ef858\") " Feb 17 16:22:12 crc kubenswrapper[4672]: I0217 16:22:12.783133 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6f82a4ce-8da0-40f1-996a-843302449a12-etc-swift\") pod \"swift-storage-0\" (UID: \"6f82a4ce-8da0-40f1-996a-843302449a12\") " pod="openstack/swift-storage-0" Feb 17 16:22:12 crc kubenswrapper[4672]: I0217 16:22:12.783152 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f82a4ce-8da0-40f1-996a-843302449a12-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"6f82a4ce-8da0-40f1-996a-843302449a12\") " pod="openstack/swift-storage-0"
Feb 17 16:22:12 crc kubenswrapper[4672]: I0217 16:22:12.783368 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a33bd542-538c-4bee-a9ec-5180e2335d2a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a33bd542-538c-4bee-a9ec-5180e2335d2a\") pod \"swift-storage-0\" (UID: \"6f82a4ce-8da0-40f1-996a-843302449a12\") " pod="openstack/swift-storage-0"
Feb 17 16:22:12 crc kubenswrapper[4672]: I0217 16:22:12.783454 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgfs7\" (UniqueName: \"kubernetes.io/projected/6f82a4ce-8da0-40f1-996a-843302449a12-kube-api-access-hgfs7\") pod \"swift-storage-0\" (UID: \"6f82a4ce-8da0-40f1-996a-843302449a12\") " pod="openstack/swift-storage-0"
Feb 17 16:22:12 crc kubenswrapper[4672]: I0217 16:22:12.783497 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6f82a4ce-8da0-40f1-996a-843302449a12-cache\") pod \"swift-storage-0\" (UID: \"6f82a4ce-8da0-40f1-996a-843302449a12\") " pod="openstack/swift-storage-0"
Feb 17 16:22:12 crc kubenswrapper[4672]: I0217 16:22:12.783528 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/6f82a4ce-8da0-40f1-996a-843302449a12-lock\") pod \"swift-storage-0\" (UID: \"6f82a4ce-8da0-40f1-996a-843302449a12\") " pod="openstack/swift-storage-0"
Feb 17 16:22:12 crc kubenswrapper[4672]: I0217 16:22:12.820536 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-54hvt"
Feb 17 16:22:12 crc kubenswrapper[4672]: I0217 16:22:12.820745 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-54hvt" event={"ID":"a0851f31-7e3d-4817-9c3b-59c29d1ef858","Type":"ContainerDied","Data":"64c613526aad3f112732672fc47d0a757ce0a58e5c8a2fb001e4da97f4f261fe"}
Feb 17 16:22:12 crc kubenswrapper[4672]: I0217 16:22:12.820791 4672 scope.go:117] "RemoveContainer" containerID="3af9e6b74d6adb7b92e3d0ecd6f8a57f40792a8badd868472818639808bccae4"
Feb 17 16:22:12 crc kubenswrapper[4672]: I0217 16:22:12.876890 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0851f31-7e3d-4817-9c3b-59c29d1ef858-kube-api-access-j4dbw" (OuterVolumeSpecName: "kube-api-access-j4dbw") pod "a0851f31-7e3d-4817-9c3b-59c29d1ef858" (UID: "a0851f31-7e3d-4817-9c3b-59c29d1ef858"). InnerVolumeSpecName "kube-api-access-j4dbw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:22:12 crc kubenswrapper[4672]: I0217 16:22:12.880309 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0851f31-7e3d-4817-9c3b-59c29d1ef858-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a0851f31-7e3d-4817-9c3b-59c29d1ef858" (UID: "a0851f31-7e3d-4817-9c3b-59c29d1ef858"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:22:12 crc kubenswrapper[4672]: I0217 16:22:12.881436 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0851f31-7e3d-4817-9c3b-59c29d1ef858-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a0851f31-7e3d-4817-9c3b-59c29d1ef858" (UID: "a0851f31-7e3d-4817-9c3b-59c29d1ef858"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:22:12 crc kubenswrapper[4672]: I0217 16:22:12.885616 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgfs7\" (UniqueName: \"kubernetes.io/projected/6f82a4ce-8da0-40f1-996a-843302449a12-kube-api-access-hgfs7\") pod \"swift-storage-0\" (UID: \"6f82a4ce-8da0-40f1-996a-843302449a12\") " pod="openstack/swift-storage-0"
Feb 17 16:22:12 crc kubenswrapper[4672]: I0217 16:22:12.885705 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6f82a4ce-8da0-40f1-996a-843302449a12-cache\") pod \"swift-storage-0\" (UID: \"6f82a4ce-8da0-40f1-996a-843302449a12\") " pod="openstack/swift-storage-0"
Feb 17 16:22:12 crc kubenswrapper[4672]: I0217 16:22:12.885726 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/6f82a4ce-8da0-40f1-996a-843302449a12-lock\") pod \"swift-storage-0\" (UID: \"6f82a4ce-8da0-40f1-996a-843302449a12\") " pod="openstack/swift-storage-0"
Feb 17 16:22:12 crc kubenswrapper[4672]: I0217 16:22:12.885778 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6f82a4ce-8da0-40f1-996a-843302449a12-etc-swift\") pod \"swift-storage-0\" (UID: \"6f82a4ce-8da0-40f1-996a-843302449a12\") " pod="openstack/swift-storage-0"
Feb 17 16:22:12 crc kubenswrapper[4672]: I0217 16:22:12.885793 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f82a4ce-8da0-40f1-996a-843302449a12-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"6f82a4ce-8da0-40f1-996a-843302449a12\") " pod="openstack/swift-storage-0"
Feb 17 16:22:12 crc kubenswrapper[4672]: I0217 16:22:12.885839 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a33bd542-538c-4bee-a9ec-5180e2335d2a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a33bd542-538c-4bee-a9ec-5180e2335d2a\") pod \"swift-storage-0\" (UID: \"6f82a4ce-8da0-40f1-996a-843302449a12\") " pod="openstack/swift-storage-0"
Feb 17 16:22:12 crc kubenswrapper[4672]: I0217 16:22:12.885990 4672 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0851f31-7e3d-4817-9c3b-59c29d1ef858-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 17 16:22:12 crc kubenswrapper[4672]: I0217 16:22:12.886000 4672 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0851f31-7e3d-4817-9c3b-59c29d1ef858-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 17 16:22:12 crc kubenswrapper[4672]: I0217 16:22:12.886011 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4dbw\" (UniqueName: \"kubernetes.io/projected/a0851f31-7e3d-4817-9c3b-59c29d1ef858-kube-api-access-j4dbw\") on node \"crc\" DevicePath \"\""
Feb 17 16:22:12 crc kubenswrapper[4672]: E0217 16:22:12.886857 4672 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 17 16:22:12 crc kubenswrapper[4672]: E0217 16:22:12.886879 4672 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 17 16:22:12 crc kubenswrapper[4672]: E0217 16:22:12.886921 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6f82a4ce-8da0-40f1-996a-843302449a12-etc-swift podName:6f82a4ce-8da0-40f1-996a-843302449a12 nodeName:}" failed. No retries permitted until 2026-02-17 16:22:13.386904078 +0000 UTC m=+1142.140992810 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6f82a4ce-8da0-40f1-996a-843302449a12-etc-swift") pod "swift-storage-0" (UID: "6f82a4ce-8da0-40f1-996a-843302449a12") : configmap "swift-ring-files" not found
Feb 17 16:22:12 crc kubenswrapper[4672]: I0217 16:22:12.887207 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6f82a4ce-8da0-40f1-996a-843302449a12-cache\") pod \"swift-storage-0\" (UID: \"6f82a4ce-8da0-40f1-996a-843302449a12\") " pod="openstack/swift-storage-0"
Feb 17 16:22:12 crc kubenswrapper[4672]: I0217 16:22:12.887270 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/6f82a4ce-8da0-40f1-996a-843302449a12-lock\") pod \"swift-storage-0\" (UID: \"6f82a4ce-8da0-40f1-996a-843302449a12\") " pod="openstack/swift-storage-0"
Feb 17 16:22:12 crc kubenswrapper[4672]: I0217 16:22:12.890131 4672 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 17 16:22:12 crc kubenswrapper[4672]: I0217 16:22:12.890340 4672 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a33bd542-538c-4bee-a9ec-5180e2335d2a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a33bd542-538c-4bee-a9ec-5180e2335d2a\") pod \"swift-storage-0\" (UID: \"6f82a4ce-8da0-40f1-996a-843302449a12\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/56310d3df874805d1411400c6c2296118d58d504a34188cf8b62db53c8033914/globalmount\"" pod="openstack/swift-storage-0"
Feb 17 16:22:12 crc kubenswrapper[4672]: I0217 16:22:12.892820 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f82a4ce-8da0-40f1-996a-843302449a12-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"6f82a4ce-8da0-40f1-996a-843302449a12\") " pod="openstack/swift-storage-0"
Feb 17 16:22:12 crc kubenswrapper[4672]: I0217 16:22:12.917587 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a33bd542-538c-4bee-a9ec-5180e2335d2a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a33bd542-538c-4bee-a9ec-5180e2335d2a\") pod \"swift-storage-0\" (UID: \"6f82a4ce-8da0-40f1-996a-843302449a12\") " pod="openstack/swift-storage-0"
Feb 17 16:22:12 crc kubenswrapper[4672]: E0217 16:22:12.947937 4672 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
Feb 17 16:22:12 crc kubenswrapper[4672]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/56b8ee2d-e701-4705-b364-3183b1db7772/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory
Feb 17 16:22:12 crc kubenswrapper[4672]: > podSandboxID="b9fd7ae34090645cb9aef1eabac8afa3bde23e7c641fee87a2484dd05a5f3095"
Feb 17 16:22:12 crc kubenswrapper[4672]: E0217 16:22:12.948149 4672 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Feb 17 16:22:12 crc kubenswrapper[4672]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n599h5cbh7ch5d4h66fh676hdbh546h95h88h5ffh55ch7fhch57ch687hddhc7h5fdh57dh674h56fh64ch98h9bh557h55dh646h54ch54fh5c4h597q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g74md,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-86db49b7ff-l6fs2_openstack(56b8ee2d-e701-4705-b364-3183b1db7772): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/56b8ee2d-e701-4705-b364-3183b1db7772/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory
Feb 17 16:22:12 crc kubenswrapper[4672]: > logger="UnhandledError"
Feb 17 16:22:12 crc kubenswrapper[4672]: E0217 16:22:12.949255 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/56b8ee2d-e701-4705-b364-3183b1db7772/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-86db49b7ff-l6fs2" podUID="56b8ee2d-e701-4705-b364-3183b1db7772"
Feb 17 16:22:12 crc kubenswrapper[4672]: I0217 16:22:12.972482 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgfs7\" (UniqueName: \"kubernetes.io/projected/6f82a4ce-8da0-40f1-996a-843302449a12-kube-api-access-hgfs7\") pod \"swift-storage-0\" (UID: \"6f82a4ce-8da0-40f1-996a-843302449a12\") " pod="openstack/swift-storage-0"
Feb 17 16:22:13 crc kubenswrapper[4672]: I0217 16:22:13.117303 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-fm49z"]
Feb 17 16:22:13 crc kubenswrapper[4672]: I0217 16:22:13.396648 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6f82a4ce-8da0-40f1-996a-843302449a12-etc-swift\") pod \"swift-storage-0\" (UID: \"6f82a4ce-8da0-40f1-996a-843302449a12\") " pod="openstack/swift-storage-0"
Feb 17 16:22:13 crc kubenswrapper[4672]: E0217 16:22:13.397176 4672 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 17 16:22:13 crc kubenswrapper[4672]: E0217 16:22:13.397210 4672 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 17 16:22:13 crc kubenswrapper[4672]: E0217 16:22:13.397279 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6f82a4ce-8da0-40f1-996a-843302449a12-etc-swift podName:6f82a4ce-8da0-40f1-996a-843302449a12 nodeName:}" failed. No retries permitted until 2026-02-17 16:22:14.397256022 +0000 UTC m=+1143.151344794 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6f82a4ce-8da0-40f1-996a-843302449a12-etc-swift") pod "swift-storage-0" (UID: "6f82a4ce-8da0-40f1-996a-843302449a12") : configmap "swift-ring-files" not found
Feb 17 16:22:13 crc kubenswrapper[4672]: I0217 16:22:13.577080 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0851f31-7e3d-4817-9c3b-59c29d1ef858-config" (OuterVolumeSpecName: "config") pod "a0851f31-7e3d-4817-9c3b-59c29d1ef858" (UID: "a0851f31-7e3d-4817-9c3b-59c29d1ef858"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:22:13 crc kubenswrapper[4672]: I0217 16:22:13.599754 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0851f31-7e3d-4817-9c3b-59c29d1ef858-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:22:13 crc kubenswrapper[4672]: I0217 16:22:13.833135 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-p8zpn"]
Feb 17 16:22:13 crc kubenswrapper[4672]: E0217 16:22:13.833608 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0851f31-7e3d-4817-9c3b-59c29d1ef858" containerName="init"
Feb 17 16:22:13 crc kubenswrapper[4672]: I0217 16:22:13.833827 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0851f31-7e3d-4817-9c3b-59c29d1ef858" containerName="init"
Feb 17 16:22:13 crc kubenswrapper[4672]: I0217 16:22:13.833981 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0851f31-7e3d-4817-9c3b-59c29d1ef858" containerName="init"
Feb 17 16:22:13 crc kubenswrapper[4672]: I0217 16:22:13.834581 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-p8zpn"
Feb 17 16:22:13 crc kubenswrapper[4672]: I0217 16:22:13.839912 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Feb 17 16:22:13 crc kubenswrapper[4672]: I0217 16:22:13.840291 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Feb 17 16:22:13 crc kubenswrapper[4672]: I0217 16:22:13.840452 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-54hvt"]
Feb 17 16:22:13 crc kubenswrapper[4672]: I0217 16:22:13.840791 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Feb 17 16:22:13 crc kubenswrapper[4672]: I0217 16:22:13.858138 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr" event={"ID":"2dcda2dc-3e7d-45a5-b95e-cd4b5242b1cf","Type":"ContainerStarted","Data":"54be1a60e14d03ee670b4326086076be5555691f8ea797f9bb06b11c87b8c919"}
Feb 17 16:22:13 crc kubenswrapper[4672]: I0217 16:22:13.858650 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr"
Feb 17 16:22:13 crc kubenswrapper[4672]: I0217 16:22:13.861193 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-54hvt"]
Feb 17 16:22:13 crc kubenswrapper[4672]: I0217 16:22:13.868672 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-fm49z" event={"ID":"4275129d-2c75-4df7-9c23-0f883ec0733d","Type":"ContainerStarted","Data":"570be24cf13175e7cafb45b9ae7a46920e2c5583568e0288692683822b53f026"}
Feb 17 16:22:13 crc kubenswrapper[4672]: I0217 16:22:13.886951 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-p8zpn"]
Feb 17 16:22:13 crc kubenswrapper[4672]: I0217 16:22:13.889021 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"91c936b2-eda8-4075-bcec-4c56d31cda1d","Type":"ContainerStarted","Data":"384042406c67cc20bce06bf91482eaa5dbce83335928d5552d7aa96be6c82620"}
Feb 17 16:22:13 crc kubenswrapper[4672]: I0217 16:22:13.903791 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-zrw2r"]
Feb 17 16:22:13 crc kubenswrapper[4672]: I0217 16:22:13.904721 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ffe88931-0657-428d-9c1b-40c2521f5662-swiftconf\") pod \"swift-ring-rebalance-p8zpn\" (UID: \"ffe88931-0657-428d-9c1b-40c2521f5662\") " pod="openstack/swift-ring-rebalance-p8zpn"
Feb 17 16:22:13 crc kubenswrapper[4672]: I0217 16:22:13.904803 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffe88931-0657-428d-9c1b-40c2521f5662-combined-ca-bundle\") pod \"swift-ring-rebalance-p8zpn\" (UID: \"ffe88931-0657-428d-9c1b-40c2521f5662\") " pod="openstack/swift-ring-rebalance-p8zpn"
Feb 17 16:22:13 crc kubenswrapper[4672]: I0217 16:22:13.904839 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ffe88931-0657-428d-9c1b-40c2521f5662-ring-data-devices\") pod \"swift-ring-rebalance-p8zpn\" (UID: \"ffe88931-0657-428d-9c1b-40c2521f5662\") " pod="openstack/swift-ring-rebalance-p8zpn"
Feb 17 16:22:13 crc kubenswrapper[4672]: I0217 16:22:13.904962 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ffe88931-0657-428d-9c1b-40c2521f5662-etc-swift\") pod \"swift-ring-rebalance-p8zpn\" (UID: \"ffe88931-0657-428d-9c1b-40c2521f5662\") " pod="openstack/swift-ring-rebalance-p8zpn"
Feb 17 16:22:13 crc kubenswrapper[4672]: I0217 16:22:13.905005 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ffe88931-0657-428d-9c1b-40c2521f5662-scripts\") pod \"swift-ring-rebalance-p8zpn\" (UID: \"ffe88931-0657-428d-9c1b-40c2521f5662\") " pod="openstack/swift-ring-rebalance-p8zpn"
Feb 17 16:22:13 crc kubenswrapper[4672]: I0217 16:22:13.905044 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-zrw2r"
Feb 17 16:22:13 crc kubenswrapper[4672]: I0217 16:22:13.905058 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m26qd\" (UniqueName: \"kubernetes.io/projected/ffe88931-0657-428d-9c1b-40c2521f5662-kube-api-access-m26qd\") pod \"swift-ring-rebalance-p8zpn\" (UID: \"ffe88931-0657-428d-9c1b-40c2521f5662\") " pod="openstack/swift-ring-rebalance-p8zpn"
Feb 17 16:22:13 crc kubenswrapper[4672]: I0217 16:22:13.905085 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ffe88931-0657-428d-9c1b-40c2521f5662-dispersionconf\") pod \"swift-ring-rebalance-p8zpn\" (UID: \"ffe88931-0657-428d-9c1b-40c2521f5662\") " pod="openstack/swift-ring-rebalance-p8zpn"
Feb 17 16:22:13 crc kubenswrapper[4672]: E0217 16:22:13.957382 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-m26qd ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-p8zpn" podUID="ffe88931-0657-428d-9c1b-40c2521f5662"
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.047049 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ffe88931-0657-428d-9c1b-40c2521f5662-swiftconf\") pod \"swift-ring-rebalance-p8zpn\" (UID: \"ffe88931-0657-428d-9c1b-40c2521f5662\") " pod="openstack/swift-ring-rebalance-p8zpn"
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.048310 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffe88931-0657-428d-9c1b-40c2521f5662-combined-ca-bundle\") pod \"swift-ring-rebalance-p8zpn\" (UID: \"ffe88931-0657-428d-9c1b-40c2521f5662\") " pod="openstack/swift-ring-rebalance-p8zpn"
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.048377 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ffe88931-0657-428d-9c1b-40c2521f5662-ring-data-devices\") pod \"swift-ring-rebalance-p8zpn\" (UID: \"ffe88931-0657-428d-9c1b-40c2521f5662\") " pod="openstack/swift-ring-rebalance-p8zpn"
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.048534 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ffe88931-0657-428d-9c1b-40c2521f5662-etc-swift\") pod \"swift-ring-rebalance-p8zpn\" (UID: \"ffe88931-0657-428d-9c1b-40c2521f5662\") " pod="openstack/swift-ring-rebalance-p8zpn"
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.048634 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ffe88931-0657-428d-9c1b-40c2521f5662-scripts\") pod \"swift-ring-rebalance-p8zpn\" (UID: \"ffe88931-0657-428d-9c1b-40c2521f5662\") " pod="openstack/swift-ring-rebalance-p8zpn"
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.048754 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m26qd\" (UniqueName: \"kubernetes.io/projected/ffe88931-0657-428d-9c1b-40c2521f5662-kube-api-access-m26qd\") pod \"swift-ring-rebalance-p8zpn\" (UID: \"ffe88931-0657-428d-9c1b-40c2521f5662\") " pod="openstack/swift-ring-rebalance-p8zpn"
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.048789 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ffe88931-0657-428d-9c1b-40c2521f5662-dispersionconf\") pod \"swift-ring-rebalance-p8zpn\" (UID: \"ffe88931-0657-428d-9c1b-40c2521f5662\") " pod="openstack/swift-ring-rebalance-p8zpn"
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.071006 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ffe88931-0657-428d-9c1b-40c2521f5662-etc-swift\") pod \"swift-ring-rebalance-p8zpn\" (UID: \"ffe88931-0657-428d-9c1b-40c2521f5662\") " pod="openstack/swift-ring-rebalance-p8zpn"
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.071592 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ffe88931-0657-428d-9c1b-40c2521f5662-scripts\") pod \"swift-ring-rebalance-p8zpn\" (UID: \"ffe88931-0657-428d-9c1b-40c2521f5662\") " pod="openstack/swift-ring-rebalance-p8zpn"
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.071851 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ffe88931-0657-428d-9c1b-40c2521f5662-swiftconf\") pod \"swift-ring-rebalance-p8zpn\" (UID: \"ffe88931-0657-428d-9c1b-40c2521f5662\") " pod="openstack/swift-ring-rebalance-p8zpn"
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.083162 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ffe88931-0657-428d-9c1b-40c2521f5662-ring-data-devices\") pod \"swift-ring-rebalance-p8zpn\" (UID: \"ffe88931-0657-428d-9c1b-40c2521f5662\") " pod="openstack/swift-ring-rebalance-p8zpn"
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.085378 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0851f31-7e3d-4817-9c3b-59c29d1ef858" path="/var/lib/kubelet/pods/a0851f31-7e3d-4817-9c3b-59c29d1ef858/volumes"
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.085895 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-p8zpn"]
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.094010 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffe88931-0657-428d-9c1b-40c2521f5662-combined-ca-bundle\") pod \"swift-ring-rebalance-p8zpn\" (UID: \"ffe88931-0657-428d-9c1b-40c2521f5662\") " pod="openstack/swift-ring-rebalance-p8zpn"
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.095889 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-zrw2r"]
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.101070 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ffe88931-0657-428d-9c1b-40c2521f5662-dispersionconf\") pod \"swift-ring-rebalance-p8zpn\" (UID: \"ffe88931-0657-428d-9c1b-40c2521f5662\") " pod="openstack/swift-ring-rebalance-p8zpn"
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.109477 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m26qd\" (UniqueName: \"kubernetes.io/projected/ffe88931-0657-428d-9c1b-40c2521f5662-kube-api-access-m26qd\") pod \"swift-ring-rebalance-p8zpn\" (UID: \"ffe88931-0657-428d-9c1b-40c2521f5662\") " pod="openstack/swift-ring-rebalance-p8zpn"
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.113011 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr" podStartSLOduration=-9223371993.741787 podStartE2EDuration="43.112988058s" podCreationTimestamp="2026-02-17 16:21:31 +0000 UTC" firstStartedPulling="2026-02-17 16:21:38.905983999 +0000 UTC m=+1107.660072731" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:22:13.972289323 +0000 UTC m=+1142.726378055" watchObservedRunningTime="2026-02-17 16:22:14.112988058 +0000 UTC m=+1142.867076790"
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.152730 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/59695c47-0c8e-4e97-b04f-3200eb8efc42-dispersionconf\") pod \"swift-ring-rebalance-zrw2r\" (UID: \"59695c47-0c8e-4e97-b04f-3200eb8efc42\") " pod="openstack/swift-ring-rebalance-zrw2r"
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.152785 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/59695c47-0c8e-4e97-b04f-3200eb8efc42-etc-swift\") pod \"swift-ring-rebalance-zrw2r\" (UID: \"59695c47-0c8e-4e97-b04f-3200eb8efc42\") " pod="openstack/swift-ring-rebalance-zrw2r"
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.153049 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/59695c47-0c8e-4e97-b04f-3200eb8efc42-swiftconf\") pod \"swift-ring-rebalance-zrw2r\" (UID: \"59695c47-0c8e-4e97-b04f-3200eb8efc42\") " pod="openstack/swift-ring-rebalance-zrw2r"
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.153094 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59695c47-0c8e-4e97-b04f-3200eb8efc42-combined-ca-bundle\") pod \"swift-ring-rebalance-zrw2r\" (UID: \"59695c47-0c8e-4e97-b04f-3200eb8efc42\") " pod="openstack/swift-ring-rebalance-zrw2r"
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.153140 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/59695c47-0c8e-4e97-b04f-3200eb8efc42-scripts\") pod \"swift-ring-rebalance-zrw2r\" (UID: \"59695c47-0c8e-4e97-b04f-3200eb8efc42\") " pod="openstack/swift-ring-rebalance-zrw2r"
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.153160 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plvk7\" (UniqueName: \"kubernetes.io/projected/59695c47-0c8e-4e97-b04f-3200eb8efc42-kube-api-access-plvk7\") pod \"swift-ring-rebalance-zrw2r\" (UID: \"59695c47-0c8e-4e97-b04f-3200eb8efc42\") " pod="openstack/swift-ring-rebalance-zrw2r"
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.153228 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/59695c47-0c8e-4e97-b04f-3200eb8efc42-ring-data-devices\") pod \"swift-ring-rebalance-zrw2r\" (UID: \"59695c47-0c8e-4e97-b04f-3200eb8efc42\") " pod="openstack/swift-ring-rebalance-zrw2r"
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.254736 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/59695c47-0c8e-4e97-b04f-3200eb8efc42-swiftconf\") pod \"swift-ring-rebalance-zrw2r\" (UID: \"59695c47-0c8e-4e97-b04f-3200eb8efc42\") " pod="openstack/swift-ring-rebalance-zrw2r"
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.255020 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59695c47-0c8e-4e97-b04f-3200eb8efc42-combined-ca-bundle\") pod \"swift-ring-rebalance-zrw2r\" (UID: \"59695c47-0c8e-4e97-b04f-3200eb8efc42\") " pod="openstack/swift-ring-rebalance-zrw2r"
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.255069 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/59695c47-0c8e-4e97-b04f-3200eb8efc42-scripts\") pod \"swift-ring-rebalance-zrw2r\" (UID: \"59695c47-0c8e-4e97-b04f-3200eb8efc42\") " pod="openstack/swift-ring-rebalance-zrw2r"
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.255088 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plvk7\" (UniqueName: \"kubernetes.io/projected/59695c47-0c8e-4e97-b04f-3200eb8efc42-kube-api-access-plvk7\") pod \"swift-ring-rebalance-zrw2r\" (UID: \"59695c47-0c8e-4e97-b04f-3200eb8efc42\") " pod="openstack/swift-ring-rebalance-zrw2r"
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.255156 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/59695c47-0c8e-4e97-b04f-3200eb8efc42-ring-data-devices\") pod \"swift-ring-rebalance-zrw2r\" (UID: \"59695c47-0c8e-4e97-b04f-3200eb8efc42\") " pod="openstack/swift-ring-rebalance-zrw2r"
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.255195 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/59695c47-0c8e-4e97-b04f-3200eb8efc42-dispersionconf\") pod \"swift-ring-rebalance-zrw2r\" (UID: \"59695c47-0c8e-4e97-b04f-3200eb8efc42\") " pod="openstack/swift-ring-rebalance-zrw2r"
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.255225 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/59695c47-0c8e-4e97-b04f-3200eb8efc42-etc-swift\") pod \"swift-ring-rebalance-zrw2r\" (UID: \"59695c47-0c8e-4e97-b04f-3200eb8efc42\") " pod="openstack/swift-ring-rebalance-zrw2r"
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.255778 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/59695c47-0c8e-4e97-b04f-3200eb8efc42-etc-swift\") pod \"swift-ring-rebalance-zrw2r\" (UID: \"59695c47-0c8e-4e97-b04f-3200eb8efc42\") " pod="openstack/swift-ring-rebalance-zrw2r"
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.256479 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/59695c47-0c8e-4e97-b04f-3200eb8efc42-scripts\") pod \"swift-ring-rebalance-zrw2r\" (UID: \"59695c47-0c8e-4e97-b04f-3200eb8efc42\") " pod="openstack/swift-ring-rebalance-zrw2r"
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.257313 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/59695c47-0c8e-4e97-b04f-3200eb8efc42-ring-data-devices\") pod \"swift-ring-rebalance-zrw2r\" (UID: \"59695c47-0c8e-4e97-b04f-3200eb8efc42\") " pod="openstack/swift-ring-rebalance-zrw2r"
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.263878 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59695c47-0c8e-4e97-b04f-3200eb8efc42-combined-ca-bundle\") pod \"swift-ring-rebalance-zrw2r\" (UID: \"59695c47-0c8e-4e97-b04f-3200eb8efc42\") " pod="openstack/swift-ring-rebalance-zrw2r"
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.273363 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/59695c47-0c8e-4e97-b04f-3200eb8efc42-dispersionconf\") pod \"swift-ring-rebalance-zrw2r\" (UID: \"59695c47-0c8e-4e97-b04f-3200eb8efc42\") " pod="openstack/swift-ring-rebalance-zrw2r"
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.278300 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/59695c47-0c8e-4e97-b04f-3200eb8efc42-swiftconf\") pod \"swift-ring-rebalance-zrw2r\" (UID: \"59695c47-0c8e-4e97-b04f-3200eb8efc42\") " pod="openstack/swift-ring-rebalance-zrw2r"
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.282107 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plvk7\" (UniqueName: \"kubernetes.io/projected/59695c47-0c8e-4e97-b04f-3200eb8efc42-kube-api-access-plvk7\") pod \"swift-ring-rebalance-zrw2r\" (UID: \"59695c47-0c8e-4e97-b04f-3200eb8efc42\") " pod="openstack/swift-ring-rebalance-zrw2r"
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.356961 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-zrw2r"
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.463431 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6f82a4ce-8da0-40f1-996a-843302449a12-etc-swift\") pod \"swift-storage-0\" (UID: \"6f82a4ce-8da0-40f1-996a-843302449a12\") " pod="openstack/swift-storage-0"
Feb 17 16:22:14 crc kubenswrapper[4672]: E0217 16:22:14.463947 4672 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 17 16:22:14 crc kubenswrapper[4672]: E0217 16:22:14.463966 4672 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 17 16:22:14 crc kubenswrapper[4672]: E0217 16:22:14.464011 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6f82a4ce-8da0-40f1-996a-843302449a12-etc-swift podName:6f82a4ce-8da0-40f1-996a-843302449a12 nodeName:}" failed. No retries permitted until 2026-02-17 16:22:16.463995275 +0000 UTC m=+1145.218084007 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6f82a4ce-8da0-40f1-996a-843302449a12-etc-swift") pod "swift-storage-0" (UID: "6f82a4ce-8da0-40f1-996a-843302449a12") : configmap "swift-ring-files" not found
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.799722 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-l6fs2"
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.874805 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56b8ee2d-e701-4705-b364-3183b1db7772-config\") pod \"56b8ee2d-e701-4705-b364-3183b1db7772\" (UID: \"56b8ee2d-e701-4705-b364-3183b1db7772\") "
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.874850 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56b8ee2d-e701-4705-b364-3183b1db7772-dns-svc\") pod \"56b8ee2d-e701-4705-b364-3183b1db7772\" (UID: \"56b8ee2d-e701-4705-b364-3183b1db7772\") "
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.874934 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56b8ee2d-e701-4705-b364-3183b1db7772-ovsdbserver-nb\") pod \"56b8ee2d-e701-4705-b364-3183b1db7772\" (UID: \"56b8ee2d-e701-4705-b364-3183b1db7772\") "
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.875166 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g74md\" (UniqueName: \"kubernetes.io/projected/56b8ee2d-e701-4705-b364-3183b1db7772-kube-api-access-g74md\") pod \"56b8ee2d-e701-4705-b364-3183b1db7772\" (UID: \"56b8ee2d-e701-4705-b364-3183b1db7772\") "
Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.875188 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56b8ee2d-e701-4705-b364-3183b1db7772-ovsdbserver-sb\") pod \"56b8ee2d-e701-4705-b364-3183b1db7772\" (UID: \"56b8ee2d-e701-4705-b364-3183b1db7772\") " Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.898337 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-zrw2r"] Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.955973 4672 generic.go:334] "Generic (PLEG): container finished" podID="4275129d-2c75-4df7-9c23-0f883ec0733d" containerID="13822a9dd6628ab3db382a9e899e92c6fa6c05d9c3562b7f3bb2b6f5b34b85bf" exitCode=0 Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.956024 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-fm49z" event={"ID":"4275129d-2c75-4df7-9c23-0f883ec0733d","Type":"ContainerDied","Data":"13822a9dd6628ab3db382a9e899e92c6fa6c05d9c3562b7f3bb2b6f5b34b85bf"} Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.958997 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d24cbf47-ec62-44c7-8e9e-1c93a52aabbc","Type":"ContainerStarted","Data":"d6e9b3bdffb0d2ec659b0dca2d0253c5b4d931efa9d16de0b831654191274592"} Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.960499 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"322bd505-c790-49c2-8ffa-0cb97cf40d7c","Type":"ContainerStarted","Data":"1b6faa23aaf3be89ba3e8a21882860af9535f4a3c10919b04c4ffe5b33861950"} Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.965621 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hszn6" event={"ID":"b01fa86f-90fb-4b04-9bea-681cb6385a05","Type":"ContainerStarted","Data":"43cf406f54bca2ba3deea79f3929bc6f2d89f83817aa87ec55e0cd8cf8b43725"} Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.966413 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-q9cd6" event={"ID":"12b377dd-1f13-4af0-81d6-635d39cc528c","Type":"ContainerStarted","Data":"00997f814c8af3020f3cf2f391388aa21807dc1c4b92d8cde161ebe9f56a9352"} Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.968155 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hszn6" Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.968389 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-q9cd6" Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.969128 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-p8zpn" Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.969336 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-l6fs2" event={"ID":"56b8ee2d-e701-4705-b364-3183b1db7772","Type":"ContainerDied","Data":"b9fd7ae34090645cb9aef1eabac8afa3bde23e7c641fee87a2484dd05a5f3095"} Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.969366 4672 scope.go:117] "RemoveContainer" containerID="98bb6c8f0e0e55ea2a979833923ee4e243643af9c1422f4585e1b0f3935c8945" Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.969473 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-l6fs2" Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.989832 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hszn6" Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.991974 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56b8ee2d-e701-4705-b364-3183b1db7772-kube-api-access-g74md" (OuterVolumeSpecName: "kube-api-access-g74md") pod "56b8ee2d-e701-4705-b364-3183b1db7772" (UID: "56b8ee2d-e701-4705-b364-3183b1db7772"). 
InnerVolumeSpecName "kube-api-access-g74md". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:22:14 crc kubenswrapper[4672]: I0217 16:22:14.999139 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-p8zpn" Feb 17 16:22:15 crc kubenswrapper[4672]: I0217 16:22:15.066819 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-q9cd6" podStartSLOduration=15.551044849 podStartE2EDuration="51.066799829s" podCreationTimestamp="2026-02-17 16:21:24 +0000 UTC" firstStartedPulling="2026-02-17 16:21:38.060048915 +0000 UTC m=+1106.814137647" lastFinishedPulling="2026-02-17 16:22:13.575803895 +0000 UTC m=+1142.329892627" observedRunningTime="2026-02-17 16:22:15.063678657 +0000 UTC m=+1143.817767389" watchObservedRunningTime="2026-02-17 16:22:15.066799829 +0000 UTC m=+1143.820888561" Feb 17 16:22:15 crc kubenswrapper[4672]: I0217 16:22:15.081862 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ffe88931-0657-428d-9c1b-40c2521f5662-ring-data-devices\") pod \"ffe88931-0657-428d-9c1b-40c2521f5662\" (UID: \"ffe88931-0657-428d-9c1b-40c2521f5662\") " Feb 17 16:22:15 crc kubenswrapper[4672]: I0217 16:22:15.081970 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ffe88931-0657-428d-9c1b-40c2521f5662-scripts\") pod \"ffe88931-0657-428d-9c1b-40c2521f5662\" (UID: \"ffe88931-0657-428d-9c1b-40c2521f5662\") " Feb 17 16:22:15 crc kubenswrapper[4672]: I0217 16:22:15.082004 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ffe88931-0657-428d-9c1b-40c2521f5662-swiftconf\") pod \"ffe88931-0657-428d-9c1b-40c2521f5662\" (UID: \"ffe88931-0657-428d-9c1b-40c2521f5662\") " Feb 17 16:22:15 crc kubenswrapper[4672]: I0217 
16:22:15.082164 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffe88931-0657-428d-9c1b-40c2521f5662-combined-ca-bundle\") pod \"ffe88931-0657-428d-9c1b-40c2521f5662\" (UID: \"ffe88931-0657-428d-9c1b-40c2521f5662\") " Feb 17 16:22:15 crc kubenswrapper[4672]: I0217 16:22:15.082480 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffe88931-0657-428d-9c1b-40c2521f5662-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "ffe88931-0657-428d-9c1b-40c2521f5662" (UID: "ffe88931-0657-428d-9c1b-40c2521f5662"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:15 crc kubenswrapper[4672]: I0217 16:22:15.082618 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26qd\" (UniqueName: \"kubernetes.io/projected/ffe88931-0657-428d-9c1b-40c2521f5662-kube-api-access-m26qd\") pod \"ffe88931-0657-428d-9c1b-40c2521f5662\" (UID: \"ffe88931-0657-428d-9c1b-40c2521f5662\") " Feb 17 16:22:15 crc kubenswrapper[4672]: I0217 16:22:15.082706 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ffe88931-0657-428d-9c1b-40c2521f5662-etc-swift\") pod \"ffe88931-0657-428d-9c1b-40c2521f5662\" (UID: \"ffe88931-0657-428d-9c1b-40c2521f5662\") " Feb 17 16:22:15 crc kubenswrapper[4672]: I0217 16:22:15.082732 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ffe88931-0657-428d-9c1b-40c2521f5662-dispersionconf\") pod \"ffe88931-0657-428d-9c1b-40c2521f5662\" (UID: \"ffe88931-0657-428d-9c1b-40c2521f5662\") " Feb 17 16:22:15 crc kubenswrapper[4672]: I0217 16:22:15.083231 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/ffe88931-0657-428d-9c1b-40c2521f5662-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "ffe88931-0657-428d-9c1b-40c2521f5662" (UID: "ffe88931-0657-428d-9c1b-40c2521f5662"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:22:15 crc kubenswrapper[4672]: I0217 16:22:15.090911 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffe88931-0657-428d-9c1b-40c2521f5662-scripts" (OuterVolumeSpecName: "scripts") pod "ffe88931-0657-428d-9c1b-40c2521f5662" (UID: "ffe88931-0657-428d-9c1b-40c2521f5662"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:15 crc kubenswrapper[4672]: I0217 16:22:15.097889 4672 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ffe88931-0657-428d-9c1b-40c2521f5662-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:15 crc kubenswrapper[4672]: I0217 16:22:15.097928 4672 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ffe88931-0657-428d-9c1b-40c2521f5662-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:15 crc kubenswrapper[4672]: I0217 16:22:15.097942 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g74md\" (UniqueName: \"kubernetes.io/projected/56b8ee2d-e701-4705-b364-3183b1db7772-kube-api-access-g74md\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:15 crc kubenswrapper[4672]: I0217 16:22:15.097957 4672 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ffe88931-0657-428d-9c1b-40c2521f5662-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:15 crc kubenswrapper[4672]: I0217 16:22:15.098533 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hszn6" 
podStartSLOduration=10.417661209 podStartE2EDuration="44.098498236s" podCreationTimestamp="2026-02-17 16:21:31 +0000 UTC" firstStartedPulling="2026-02-17 16:21:39.276375487 +0000 UTC m=+1108.030464219" lastFinishedPulling="2026-02-17 16:22:12.957212514 +0000 UTC m=+1141.711301246" observedRunningTime="2026-02-17 16:22:15.084172478 +0000 UTC m=+1143.838261220" watchObservedRunningTime="2026-02-17 16:22:15.098498236 +0000 UTC m=+1143.852586968" Feb 17 16:22:15 crc kubenswrapper[4672]: I0217 16:22:15.114700 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffe88931-0657-428d-9c1b-40c2521f5662-kube-api-access-m26qd" (OuterVolumeSpecName: "kube-api-access-m26qd") pod "ffe88931-0657-428d-9c1b-40c2521f5662" (UID: "ffe88931-0657-428d-9c1b-40c2521f5662"). InnerVolumeSpecName "kube-api-access-m26qd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:22:15 crc kubenswrapper[4672]: I0217 16:22:15.166455 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffe88931-0657-428d-9c1b-40c2521f5662-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ffe88931-0657-428d-9c1b-40c2521f5662" (UID: "ffe88931-0657-428d-9c1b-40c2521f5662"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:22:15 crc kubenswrapper[4672]: I0217 16:22:15.190798 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffe88931-0657-428d-9c1b-40c2521f5662-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "ffe88931-0657-428d-9c1b-40c2521f5662" (UID: "ffe88931-0657-428d-9c1b-40c2521f5662"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:22:15 crc kubenswrapper[4672]: I0217 16:22:15.193965 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffe88931-0657-428d-9c1b-40c2521f5662-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "ffe88931-0657-428d-9c1b-40c2521f5662" (UID: "ffe88931-0657-428d-9c1b-40c2521f5662"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:22:15 crc kubenswrapper[4672]: I0217 16:22:15.199288 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffe88931-0657-428d-9c1b-40c2521f5662-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:15 crc kubenswrapper[4672]: I0217 16:22:15.199325 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m26qd\" (UniqueName: \"kubernetes.io/projected/ffe88931-0657-428d-9c1b-40c2521f5662-kube-api-access-m26qd\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:15 crc kubenswrapper[4672]: I0217 16:22:15.199338 4672 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ffe88931-0657-428d-9c1b-40c2521f5662-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:15 crc kubenswrapper[4672]: I0217 16:22:15.199350 4672 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ffe88931-0657-428d-9c1b-40c2521f5662-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:15 crc kubenswrapper[4672]: I0217 16:22:15.350444 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56b8ee2d-e701-4705-b364-3183b1db7772-config" (OuterVolumeSpecName: "config") pod "56b8ee2d-e701-4705-b364-3183b1db7772" (UID: "56b8ee2d-e701-4705-b364-3183b1db7772"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:15 crc kubenswrapper[4672]: I0217 16:22:15.358052 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56b8ee2d-e701-4705-b364-3183b1db7772-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "56b8ee2d-e701-4705-b364-3183b1db7772" (UID: "56b8ee2d-e701-4705-b364-3183b1db7772"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:15 crc kubenswrapper[4672]: I0217 16:22:15.360484 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56b8ee2d-e701-4705-b364-3183b1db7772-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "56b8ee2d-e701-4705-b364-3183b1db7772" (UID: "56b8ee2d-e701-4705-b364-3183b1db7772"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:15 crc kubenswrapper[4672]: I0217 16:22:15.378103 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56b8ee2d-e701-4705-b364-3183b1db7772-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "56b8ee2d-e701-4705-b364-3183b1db7772" (UID: "56b8ee2d-e701-4705-b364-3183b1db7772"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:15 crc kubenswrapper[4672]: I0217 16:22:15.404819 4672 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56b8ee2d-e701-4705-b364-3183b1db7772-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:15 crc kubenswrapper[4672]: I0217 16:22:15.404853 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56b8ee2d-e701-4705-b364-3183b1db7772-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:15 crc kubenswrapper[4672]: I0217 16:22:15.404862 4672 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56b8ee2d-e701-4705-b364-3183b1db7772-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:15 crc kubenswrapper[4672]: I0217 16:22:15.404870 4672 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56b8ee2d-e701-4705-b364-3183b1db7772-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:15 crc kubenswrapper[4672]: I0217 16:22:15.626524 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-l6fs2"] Feb 17 16:22:15 crc kubenswrapper[4672]: I0217 16:22:15.640942 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-l6fs2"] Feb 17 16:22:15 crc kubenswrapper[4672]: I0217 16:22:15.964988 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56b8ee2d-e701-4705-b364-3183b1db7772" path="/var/lib/kubelet/pods/56b8ee2d-e701-4705-b364-3183b1db7772/volumes" Feb 17 16:22:15 crc kubenswrapper[4672]: I0217 16:22:15.995774 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-zrw2r" event={"ID":"59695c47-0c8e-4e97-b04f-3200eb8efc42","Type":"ContainerStarted","Data":"0c4d9dc5554c7c6544b6d12b496478cf3c456aa4d797b2efc9c2b930393c5151"} Feb 17 
16:22:16 crc kubenswrapper[4672]: I0217 16:22:16.011714 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"e6cf604e-3c10-4dd3-b6a7-6e6126705e3c","Type":"ContainerStarted","Data":"3b478e23a63d844163f299ca83c9b024a0e27167652ff399ae1c03c41fc3062f"} Feb 17 16:22:16 crc kubenswrapper[4672]: I0217 16:22:16.012769 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:22:16 crc kubenswrapper[4672]: I0217 16:22:16.028377 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-fm49z" event={"ID":"4275129d-2c75-4df7-9c23-0f883ec0733d","Type":"ContainerStarted","Data":"50c1827027995515911b7f4972a027a7056d2a7ca9ad9533f04cae9f334a0ba6"} Feb 17 16:22:16 crc kubenswrapper[4672]: I0217 16:22:16.029139 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-fm49z" Feb 17 16:22:16 crc kubenswrapper[4672]: I0217 16:22:16.053797 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-gwjj7" event={"ID":"2e52d03d-9616-4c46-b7c9-d090f4a43a93","Type":"ContainerStarted","Data":"0dfc8ec73c3e0355993a022ae21f06697ddbb25754bf171ebd6e1eee3bb9fcc9"} Feb 17 16:22:16 crc kubenswrapper[4672]: I0217 16:22:16.054717 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-gwjj7" Feb 17 16:22:16 crc kubenswrapper[4672]: I0217 16:22:16.062911 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"878cc257-0a03-44ea-ae70-356195dc5427","Type":"ContainerStarted","Data":"8adee34dd3fa75a44a03d90eb2604f154c0a930cabee81bb6de7f08825978692"} Feb 17 16:22:16 crc kubenswrapper[4672]: I0217 16:22:16.065208 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-698758b865-fm49z" podStartSLOduration=5.065189827 podStartE2EDuration="5.065189827s" podCreationTimestamp="2026-02-17 16:22:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:22:16.059122217 +0000 UTC m=+1144.813210959" watchObservedRunningTime="2026-02-17 16:22:16.065189827 +0000 UTC m=+1144.819278569" Feb 17 16:22:16 crc kubenswrapper[4672]: I0217 16:22:16.066777 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-r97r4" event={"ID":"7ce7f56b-68cd-42a8-bbfe-588269b90802","Type":"ContainerStarted","Data":"259f09d1403d813aa12dac1107aa9fe5cdca44b0fa770a63ccd8f5cc83157ffe"} Feb 17 16:22:16 crc kubenswrapper[4672]: I0217 16:22:16.067497 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-r97r4" Feb 17 16:22:16 crc kubenswrapper[4672]: I0217 16:22:16.067928 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-compactor-0" podStartSLOduration=-9223371991.786858 podStartE2EDuration="45.067918669s" podCreationTimestamp="2026-02-17 16:21:31 +0000 UTC" firstStartedPulling="2026-02-17 16:21:38.019143755 +0000 UTC m=+1106.773232487" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:22:16.03800659 +0000 UTC m=+1144.792095322" watchObservedRunningTime="2026-02-17 16:22:16.067918669 +0000 UTC m=+1144.822007411" Feb 17 16:22:16 crc kubenswrapper[4672]: I0217 16:22:16.076143 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-gwjj7" podStartSLOduration=-9223371991.778648 podStartE2EDuration="45.076127796s" podCreationTimestamp="2026-02-17 16:21:31 +0000 UTC" firstStartedPulling="2026-02-17 16:21:39.276600663 +0000 UTC m=+1108.030689395" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:22:16.07516128 +0000 UTC m=+1144.829250012" watchObservedRunningTime="2026-02-17 16:22:16.076127796 +0000 UTC m=+1144.830216528" Feb 17 16:22:16 crc kubenswrapper[4672]: I0217 16:22:16.076654 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-zpb9n" event={"ID":"41bcd30f-d987-4e6c-ab80-4bff10853442","Type":"ContainerStarted","Data":"a721380944d042460bb568a243bf63009effa025644eace0be0c6d8aa9962c42"} Feb 17 16:22:16 crc kubenswrapper[4672]: I0217 16:22:16.076910 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-zpb9n" Feb 17 16:22:16 crc kubenswrapper[4672]: I0217 16:22:16.078448 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d24cbf47-ec62-44c7-8e9e-1c93a52aabbc","Type":"ContainerStarted","Data":"fa2aaf3658bbf248c6a8b71e273c2b8455ae469075b58c438515fa7fa19eb4e2"} Feb 17 16:22:16 crc kubenswrapper[4672]: I0217 16:22:16.078498 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 17 16:22:16 crc kubenswrapper[4672]: I0217 16:22:16.079560 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-p8zpn" Feb 17 16:22:16 crc kubenswrapper[4672]: I0217 16:22:16.080030 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0494473e-5e65-47bf-b3a3-6d8c7b27243f","Type":"ContainerStarted","Data":"63bd95b9b89a263cee605e8b05c867038d9f52a76a9efc344fde7fa796684a4b"} Feb 17 16:22:16 crc kubenswrapper[4672]: I0217 16:22:16.080389 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 17 16:22:16 crc kubenswrapper[4672]: I0217 16:22:16.093528 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-zpb9n" Feb 17 16:22:16 crc kubenswrapper[4672]: I0217 16:22:16.130392 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-r97r4" podStartSLOduration=-9223371991.724405 podStartE2EDuration="45.130369708s" podCreationTimestamp="2026-02-17 16:21:31 +0000 UTC" firstStartedPulling="2026-02-17 16:21:39.231051981 +0000 UTC m=+1107.985140723" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:22:16.092705194 +0000 UTC m=+1144.846793926" watchObservedRunningTime="2026-02-17 16:22:16.130369708 +0000 UTC m=+1144.884458440" Feb 17 16:22:16 crc kubenswrapper[4672]: I0217 16:22:16.135570 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=6.743419831 podStartE2EDuration="9.135555375s" podCreationTimestamp="2026-02-17 16:22:07 +0000 UTC" firstStartedPulling="2026-02-17 16:22:10.312032949 +0000 UTC m=+1139.066121681" lastFinishedPulling="2026-02-17 16:22:12.704168493 +0000 UTC m=+1141.458257225" observedRunningTime="2026-02-17 16:22:16.133817639 +0000 UTC m=+1144.887906371" watchObservedRunningTime="2026-02-17 16:22:16.135555375 +0000 UTC m=+1144.889644107" Feb 17 16:22:16 crc 
kubenswrapper[4672]: I0217 16:22:16.160194 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-zpb9n" podStartSLOduration=-9223371991.694605 podStartE2EDuration="45.160170255s" podCreationTimestamp="2026-02-17 16:21:31 +0000 UTC" firstStartedPulling="2026-02-17 16:21:39.230794764 +0000 UTC m=+1107.984883506" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:22:16.153807717 +0000 UTC m=+1144.907896449" watchObservedRunningTime="2026-02-17 16:22:16.160170255 +0000 UTC m=+1144.914258977" Feb 17 16:22:16 crc kubenswrapper[4672]: I0217 16:22:16.229817 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-p8zpn"] Feb 17 16:22:16 crc kubenswrapper[4672]: I0217 16:22:16.246813 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-p8zpn"] Feb 17 16:22:16 crc kubenswrapper[4672]: I0217 16:22:16.252251 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=18.829063107 podStartE2EDuration="55.252232125s" podCreationTimestamp="2026-02-17 16:21:21 +0000 UTC" firstStartedPulling="2026-02-17 16:21:38.174414224 +0000 UTC m=+1106.928502956" lastFinishedPulling="2026-02-17 16:22:14.597583242 +0000 UTC m=+1143.351671974" observedRunningTime="2026-02-17 16:22:16.225684254 +0000 UTC m=+1144.979772996" watchObservedRunningTime="2026-02-17 16:22:16.252232125 +0000 UTC m=+1145.006320857" Feb 17 16:22:16 crc kubenswrapper[4672]: I0217 16:22:16.532429 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6f82a4ce-8da0-40f1-996a-843302449a12-etc-swift\") pod \"swift-storage-0\" (UID: \"6f82a4ce-8da0-40f1-996a-843302449a12\") " pod="openstack/swift-storage-0" Feb 17 16:22:16 crc kubenswrapper[4672]: E0217 16:22:16.532637 4672 projected.go:288] Couldn't get 
configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 16:22:16 crc kubenswrapper[4672]: E0217 16:22:16.532786 4672 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 16:22:16 crc kubenswrapper[4672]: E0217 16:22:16.532844 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6f82a4ce-8da0-40f1-996a-843302449a12-etc-swift podName:6f82a4ce-8da0-40f1-996a-843302449a12 nodeName:}" failed. No retries permitted until 2026-02-17 16:22:20.532827153 +0000 UTC m=+1149.286915885 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6f82a4ce-8da0-40f1-996a-843302449a12-etc-swift") pod "swift-storage-0" (UID: "6f82a4ce-8da0-40f1-996a-843302449a12") : configmap "swift-ring-files" not found Feb 17 16:22:17 crc kubenswrapper[4672]: I0217 16:22:17.091739 4672 generic.go:334] "Generic (PLEG): container finished" podID="164bb24e-646b-4404-92f5-912254ac1421" containerID="6315cc9351c25c0f0b2cf5e5e03a533a33383ad16568663c4236db7beea01e75" exitCode=0 Feb 17 16:22:17 crc kubenswrapper[4672]: I0217 16:22:17.091801 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"164bb24e-646b-4404-92f5-912254ac1421","Type":"ContainerDied","Data":"6315cc9351c25c0f0b2cf5e5e03a533a33383ad16568663c4236db7beea01e75"} Feb 17 16:22:17 crc kubenswrapper[4672]: I0217 16:22:17.956973 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffe88931-0657-428d-9c1b-40c2521f5662" path="/var/lib/kubelet/pods/ffe88931-0657-428d-9c1b-40c2521f5662/volumes" Feb 17 16:22:18 crc kubenswrapper[4672]: I0217 16:22:18.136003 4672 generic.go:334] "Generic (PLEG): container finished" podID="322bd505-c790-49c2-8ffa-0cb97cf40d7c" containerID="1b6faa23aaf3be89ba3e8a21882860af9535f4a3c10919b04c4ffe5b33861950" exitCode=0 Feb 17 16:22:18 
crc kubenswrapper[4672]: I0217 16:22:18.136084 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"322bd505-c790-49c2-8ffa-0cb97cf40d7c","Type":"ContainerDied","Data":"1b6faa23aaf3be89ba3e8a21882860af9535f4a3c10919b04c4ffe5b33861950"} Feb 17 16:22:19 crc kubenswrapper[4672]: I0217 16:22:19.148194 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-zrw2r" event={"ID":"59695c47-0c8e-4e97-b04f-3200eb8efc42","Type":"ContainerStarted","Data":"a8c07a149b6a3d3f13e888dd60ad5dfec52c48ccad96ef7cbcf72acb9a93e8a8"} Feb 17 16:22:19 crc kubenswrapper[4672]: I0217 16:22:19.151849 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"164bb24e-646b-4404-92f5-912254ac1421","Type":"ContainerStarted","Data":"f89466d89e4f5e3969ab086ea726ef39254f54f35cb1df8cd77be6c5c2f2be61"} Feb 17 16:22:19 crc kubenswrapper[4672]: I0217 16:22:19.153837 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"322bd505-c790-49c2-8ffa-0cb97cf40d7c","Type":"ContainerStarted","Data":"09c19c25b368c9f17d8d351fee725f35865ef12f5263901fa0cddcddbf688352"} Feb 17 16:22:19 crc kubenswrapper[4672]: I0217 16:22:19.176925 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-zrw2r" podStartSLOduration=2.2884704 podStartE2EDuration="6.176906809s" podCreationTimestamp="2026-02-17 16:22:13 +0000 UTC" firstStartedPulling="2026-02-17 16:22:15.000853548 +0000 UTC m=+1143.754942310" lastFinishedPulling="2026-02-17 16:22:18.889289977 +0000 UTC m=+1147.643378719" observedRunningTime="2026-02-17 16:22:19.174238789 +0000 UTC m=+1147.928327521" watchObservedRunningTime="2026-02-17 16:22:19.176906809 +0000 UTC m=+1147.930995551" Feb 17 16:22:19 crc kubenswrapper[4672]: I0217 16:22:19.206882 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/openstack-galera-0" podStartSLOduration=-9223371972.647928 podStartE2EDuration="1m4.20684745s" podCreationTimestamp="2026-02-17 16:21:15 +0000 UTC" firstStartedPulling="2026-02-17 16:21:37.946544588 +0000 UTC m=+1106.700633320" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:22:19.193319933 +0000 UTC m=+1147.947408665" watchObservedRunningTime="2026-02-17 16:22:19.20684745 +0000 UTC m=+1147.960936182" Feb 17 16:22:19 crc kubenswrapper[4672]: I0217 16:22:19.230482 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=29.386424129 podStartE2EDuration="1m2.230460973s" podCreationTimestamp="2026-02-17 16:21:17 +0000 UTC" firstStartedPulling="2026-02-17 16:21:38.219764322 +0000 UTC m=+1106.973853054" lastFinishedPulling="2026-02-17 16:22:11.063801136 +0000 UTC m=+1139.817889898" observedRunningTime="2026-02-17 16:22:19.2295729 +0000 UTC m=+1147.983661652" watchObservedRunningTime="2026-02-17 16:22:19.230460973 +0000 UTC m=+1147.984549705" Feb 17 16:22:19 crc kubenswrapper[4672]: E0217 16:22:19.617331 4672 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.46:38446->38.102.83.46:42007: read tcp 38.102.83.46:38446->38.102.83.46:42007: read: connection reset by peer Feb 17 16:22:20 crc kubenswrapper[4672]: I0217 16:22:20.165624 4672 generic.go:334] "Generic (PLEG): container finished" podID="91c936b2-eda8-4075-bcec-4c56d31cda1d" containerID="384042406c67cc20bce06bf91482eaa5dbce83335928d5552d7aa96be6c82620" exitCode=0 Feb 17 16:22:20 crc kubenswrapper[4672]: I0217 16:22:20.165719 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"91c936b2-eda8-4075-bcec-4c56d31cda1d","Type":"ContainerDied","Data":"384042406c67cc20bce06bf91482eaa5dbce83335928d5552d7aa96be6c82620"} Feb 17 16:22:20 crc kubenswrapper[4672]: I0217 16:22:20.614711 4672 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6f82a4ce-8da0-40f1-996a-843302449a12-etc-swift\") pod \"swift-storage-0\" (UID: \"6f82a4ce-8da0-40f1-996a-843302449a12\") " pod="openstack/swift-storage-0" Feb 17 16:22:20 crc kubenswrapper[4672]: E0217 16:22:20.614990 4672 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 16:22:20 crc kubenswrapper[4672]: E0217 16:22:20.615028 4672 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 16:22:20 crc kubenswrapper[4672]: E0217 16:22:20.615109 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6f82a4ce-8da0-40f1-996a-843302449a12-etc-swift podName:6f82a4ce-8da0-40f1-996a-843302449a12 nodeName:}" failed. No retries permitted until 2026-02-17 16:22:28.615085989 +0000 UTC m=+1157.369174731 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6f82a4ce-8da0-40f1-996a-843302449a12-etc-swift") pod "swift-storage-0" (UID: "6f82a4ce-8da0-40f1-996a-843302449a12") : configmap "swift-ring-files" not found Feb 17 16:22:21 crc kubenswrapper[4672]: I0217 16:22:21.483024 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 17 16:22:21 crc kubenswrapper[4672]: I0217 16:22:21.875797 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-fm49z" Feb 17 16:22:21 crc kubenswrapper[4672]: I0217 16:22:21.928742 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-htwhx"] Feb 17 16:22:21 crc kubenswrapper[4672]: I0217 16:22:21.928976 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-htwhx" podUID="7d64c91f-b138-4fa6-bb58-9d31c4c65861" containerName="dnsmasq-dns" containerID="cri-o://1da524262e4aee7397c8c666a12f39b54646e8676cd63979429e4b5ab6c666c3" gracePeriod=10 Feb 17 16:22:22 crc kubenswrapper[4672]: I0217 16:22:22.196687 4672 generic.go:334] "Generic (PLEG): container finished" podID="7d64c91f-b138-4fa6-bb58-9d31c4c65861" containerID="1da524262e4aee7397c8c666a12f39b54646e8676cd63979429e4b5ab6c666c3" exitCode=0 Feb 17 16:22:22 crc kubenswrapper[4672]: I0217 16:22:22.196891 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-htwhx" event={"ID":"7d64c91f-b138-4fa6-bb58-9d31c4c65861","Type":"ContainerDied","Data":"1da524262e4aee7397c8c666a12f39b54646e8676cd63979429e4b5ab6c666c3"} Feb 17 16:22:22 crc kubenswrapper[4672]: I0217 16:22:22.198955 4672 generic.go:334] "Generic (PLEG): container finished" podID="878cc257-0a03-44ea-ae70-356195dc5427" containerID="8adee34dd3fa75a44a03d90eb2604f154c0a930cabee81bb6de7f08825978692" exitCode=0 Feb 17 16:22:22 crc kubenswrapper[4672]: 
I0217 16:22:22.198993 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"878cc257-0a03-44ea-ae70-356195dc5427","Type":"ContainerDied","Data":"8adee34dd3fa75a44a03d90eb2604f154c0a930cabee81bb6de7f08825978692"} Feb 17 16:22:22 crc kubenswrapper[4672]: I0217 16:22:22.358101 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-htwhx" Feb 17 16:22:22 crc kubenswrapper[4672]: I0217 16:22:22.451602 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d64c91f-b138-4fa6-bb58-9d31c4c65861-dns-svc\") pod \"7d64c91f-b138-4fa6-bb58-9d31c4c65861\" (UID: \"7d64c91f-b138-4fa6-bb58-9d31c4c65861\") " Feb 17 16:22:22 crc kubenswrapper[4672]: I0217 16:22:22.451754 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d64c91f-b138-4fa6-bb58-9d31c4c65861-config\") pod \"7d64c91f-b138-4fa6-bb58-9d31c4c65861\" (UID: \"7d64c91f-b138-4fa6-bb58-9d31c4c65861\") " Feb 17 16:22:22 crc kubenswrapper[4672]: I0217 16:22:22.451815 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2q2mv\" (UniqueName: \"kubernetes.io/projected/7d64c91f-b138-4fa6-bb58-9d31c4c65861-kube-api-access-2q2mv\") pod \"7d64c91f-b138-4fa6-bb58-9d31c4c65861\" (UID: \"7d64c91f-b138-4fa6-bb58-9d31c4c65861\") " Feb 17 16:22:22 crc kubenswrapper[4672]: I0217 16:22:22.469722 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d64c91f-b138-4fa6-bb58-9d31c4c65861-kube-api-access-2q2mv" (OuterVolumeSpecName: "kube-api-access-2q2mv") pod "7d64c91f-b138-4fa6-bb58-9d31c4c65861" (UID: "7d64c91f-b138-4fa6-bb58-9d31c4c65861"). InnerVolumeSpecName "kube-api-access-2q2mv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:22:22 crc kubenswrapper[4672]: I0217 16:22:22.492233 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d64c91f-b138-4fa6-bb58-9d31c4c65861-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7d64c91f-b138-4fa6-bb58-9d31c4c65861" (UID: "7d64c91f-b138-4fa6-bb58-9d31c4c65861"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:22 crc kubenswrapper[4672]: I0217 16:22:22.522341 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d64c91f-b138-4fa6-bb58-9d31c4c65861-config" (OuterVolumeSpecName: "config") pod "7d64c91f-b138-4fa6-bb58-9d31c4c65861" (UID: "7d64c91f-b138-4fa6-bb58-9d31c4c65861"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:22 crc kubenswrapper[4672]: I0217 16:22:22.553729 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2q2mv\" (UniqueName: \"kubernetes.io/projected/7d64c91f-b138-4fa6-bb58-9d31c4c65861-kube-api-access-2q2mv\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:22 crc kubenswrapper[4672]: I0217 16:22:22.553979 4672 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d64c91f-b138-4fa6-bb58-9d31c4c65861-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:22 crc kubenswrapper[4672]: I0217 16:22:22.553991 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d64c91f-b138-4fa6-bb58-9d31c4c65861-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:23 crc kubenswrapper[4672]: I0217 16:22:23.209135 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-htwhx" event={"ID":"7d64c91f-b138-4fa6-bb58-9d31c4c65861","Type":"ContainerDied","Data":"bfe77f867a4d404ec12bc0bb2d8289c90e8b5f6b26b5322d1403ced0db32e3f8"} Feb 
17 16:22:23 crc kubenswrapper[4672]: I0217 16:22:23.209191 4672 scope.go:117] "RemoveContainer" containerID="1da524262e4aee7397c8c666a12f39b54646e8676cd63979429e4b5ab6c666c3" Feb 17 16:22:23 crc kubenswrapper[4672]: I0217 16:22:23.209223 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-htwhx" Feb 17 16:22:23 crc kubenswrapper[4672]: I0217 16:22:23.236298 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-htwhx"] Feb 17 16:22:23 crc kubenswrapper[4672]: I0217 16:22:23.245190 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-htwhx"] Feb 17 16:22:23 crc kubenswrapper[4672]: I0217 16:22:23.250006 4672 scope.go:117] "RemoveContainer" containerID="5ce9f01ccdcd65f6574eb892925ca8d7dd7e846849f97831c1a5b8f15f6e0c51" Feb 17 16:22:23 crc kubenswrapper[4672]: I0217 16:22:23.253834 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:22:23 crc kubenswrapper[4672]: I0217 16:22:23.971227 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d64c91f-b138-4fa6-bb58-9d31c4c65861" path="/var/lib/kubelet/pods/7d64c91f-b138-4fa6-bb58-9d31c4c65861/volumes" Feb 17 16:22:24 crc kubenswrapper[4672]: I0217 16:22:24.231252 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"91c936b2-eda8-4075-bcec-4c56d31cda1d","Type":"ContainerStarted","Data":"7878d3481d9c3fa0a9c225ceb774555b77236741f29a7872c210269f54dc0976"} Feb 17 16:22:26 crc kubenswrapper[4672]: I0217 16:22:26.255570 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"91c936b2-eda8-4075-bcec-4c56d31cda1d","Type":"ContainerStarted","Data":"88aeecfc100de63c0afca8666260e8d05efc0ffe0897bab4a3f67dcc5fdedb50"} Feb 17 16:22:26 crc kubenswrapper[4672]: I0217 
16:22:26.256815 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Feb 17 16:22:26 crc kubenswrapper[4672]: I0217 16:22:26.258128 4672 generic.go:334] "Generic (PLEG): container finished" podID="59695c47-0c8e-4e97-b04f-3200eb8efc42" containerID="a8c07a149b6a3d3f13e888dd60ad5dfec52c48ccad96ef7cbcf72acb9a93e8a8" exitCode=0 Feb 17 16:22:26 crc kubenswrapper[4672]: I0217 16:22:26.258150 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-zrw2r" event={"ID":"59695c47-0c8e-4e97-b04f-3200eb8efc42","Type":"ContainerDied","Data":"a8c07a149b6a3d3f13e888dd60ad5dfec52c48ccad96ef7cbcf72acb9a93e8a8"} Feb 17 16:22:26 crc kubenswrapper[4672]: I0217 16:22:26.259259 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Feb 17 16:22:26 crc kubenswrapper[4672]: I0217 16:22:26.294220 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=20.166446637 podStartE2EDuration="1m5.294200733s" podCreationTimestamp="2026-02-17 16:21:21 +0000 UTC" firstStartedPulling="2026-02-17 16:21:38.208669699 +0000 UTC m=+1106.962758421" lastFinishedPulling="2026-02-17 16:22:23.336423785 +0000 UTC m=+1152.090512517" observedRunningTime="2026-02-17 16:22:26.286018587 +0000 UTC m=+1155.040107329" watchObservedRunningTime="2026-02-17 16:22:26.294200733 +0000 UTC m=+1155.048289475" Feb 17 16:22:27 crc kubenswrapper[4672]: I0217 16:22:27.199431 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 17 16:22:27 crc kubenswrapper[4672]: I0217 16:22:27.199473 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 17 16:22:27 crc kubenswrapper[4672]: I0217 16:22:27.279033 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/openstack-galera-0" Feb 17 16:22:27 crc kubenswrapper[4672]: I0217 16:22:27.385261 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 17 16:22:27 crc kubenswrapper[4672]: I0217 16:22:27.566200 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:22:27 crc kubenswrapper[4672]: I0217 16:22:27.566476 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:22:27 crc kubenswrapper[4672]: I0217 16:22:27.566533 4672 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" Feb 17 16:22:27 crc kubenswrapper[4672]: I0217 16:22:27.567046 4672 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6e5c44fe403356546654090676cb1aa54373e380600ecb186fac59fca3fb0ed3"} pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:22:27 crc kubenswrapper[4672]: I0217 16:22:27.567095 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" containerID="cri-o://6e5c44fe403356546654090676cb1aa54373e380600ecb186fac59fca3fb0ed3" gracePeriod=600 Feb 17 
16:22:28 crc kubenswrapper[4672]: I0217 16:22:28.282091 4672 generic.go:334] "Generic (PLEG): container finished" podID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerID="6e5c44fe403356546654090676cb1aa54373e380600ecb186fac59fca3fb0ed3" exitCode=0 Feb 17 16:22:28 crc kubenswrapper[4672]: I0217 16:22:28.282133 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerDied","Data":"6e5c44fe403356546654090676cb1aa54373e380600ecb186fac59fca3fb0ed3"} Feb 17 16:22:28 crc kubenswrapper[4672]: I0217 16:22:28.282190 4672 scope.go:117] "RemoveContainer" containerID="15aa63f02ee4cd25df0940b558fcaa7bcd640deeb41ec99378884cac7403f757" Feb 17 16:22:28 crc kubenswrapper[4672]: I0217 16:22:28.388717 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 17 16:22:28 crc kubenswrapper[4672]: I0217 16:22:28.645059 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 17 16:22:28 crc kubenswrapper[4672]: I0217 16:22:28.645130 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 17 16:22:28 crc kubenswrapper[4672]: I0217 16:22:28.685185 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6f82a4ce-8da0-40f1-996a-843302449a12-etc-swift\") pod \"swift-storage-0\" (UID: \"6f82a4ce-8da0-40f1-996a-843302449a12\") " pod="openstack/swift-storage-0" Feb 17 16:22:28 crc kubenswrapper[4672]: I0217 16:22:28.691638 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6f82a4ce-8da0-40f1-996a-843302449a12-etc-swift\") pod \"swift-storage-0\" (UID: \"6f82a4ce-8da0-40f1-996a-843302449a12\") " pod="openstack/swift-storage-0" Feb 17 16:22:28 crc 
kubenswrapper[4672]: I0217 16:22:28.743420 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 17 16:22:28 crc kubenswrapper[4672]: I0217 16:22:28.890956 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 17 16:22:29 crc kubenswrapper[4672]: I0217 16:22:29.202151 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-slqs6"] Feb 17 16:22:29 crc kubenswrapper[4672]: E0217 16:22:29.202906 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d64c91f-b138-4fa6-bb58-9d31c4c65861" containerName="init" Feb 17 16:22:29 crc kubenswrapper[4672]: I0217 16:22:29.202931 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d64c91f-b138-4fa6-bb58-9d31c4c65861" containerName="init" Feb 17 16:22:29 crc kubenswrapper[4672]: E0217 16:22:29.202960 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d64c91f-b138-4fa6-bb58-9d31c4c65861" containerName="dnsmasq-dns" Feb 17 16:22:29 crc kubenswrapper[4672]: I0217 16:22:29.202969 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d64c91f-b138-4fa6-bb58-9d31c4c65861" containerName="dnsmasq-dns" Feb 17 16:22:29 crc kubenswrapper[4672]: E0217 16:22:29.202978 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56b8ee2d-e701-4705-b364-3183b1db7772" containerName="init" Feb 17 16:22:29 crc kubenswrapper[4672]: I0217 16:22:29.202985 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="56b8ee2d-e701-4705-b364-3183b1db7772" containerName="init" Feb 17 16:22:29 crc kubenswrapper[4672]: I0217 16:22:29.203182 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d64c91f-b138-4fa6-bb58-9d31c4c65861" containerName="dnsmasq-dns" Feb 17 16:22:29 crc kubenswrapper[4672]: I0217 16:22:29.203215 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="56b8ee2d-e701-4705-b364-3183b1db7772" 
containerName="init" Feb 17 16:22:29 crc kubenswrapper[4672]: I0217 16:22:29.203993 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-slqs6" Feb 17 16:22:29 crc kubenswrapper[4672]: I0217 16:22:29.212784 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-688a-account-create-update-x7qhx"] Feb 17 16:22:29 crc kubenswrapper[4672]: I0217 16:22:29.214243 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-688a-account-create-update-x7qhx" Feb 17 16:22:29 crc kubenswrapper[4672]: I0217 16:22:29.218924 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 17 16:22:29 crc kubenswrapper[4672]: I0217 16:22:29.224472 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-slqs6"] Feb 17 16:22:29 crc kubenswrapper[4672]: I0217 16:22:29.238412 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-688a-account-create-update-x7qhx"] Feb 17 16:22:29 crc kubenswrapper[4672]: I0217 16:22:29.297721 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52fb83c9-1cc2-42fe-85d3-2cb95ec4d41f-operator-scripts\") pod \"glance-db-create-slqs6\" (UID: \"52fb83c9-1cc2-42fe-85d3-2cb95ec4d41f\") " pod="openstack/glance-db-create-slqs6" Feb 17 16:22:29 crc kubenswrapper[4672]: I0217 16:22:29.297784 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdzjc\" (UniqueName: \"kubernetes.io/projected/52fb83c9-1cc2-42fe-85d3-2cb95ec4d41f-kube-api-access-vdzjc\") pod \"glance-db-create-slqs6\" (UID: \"52fb83c9-1cc2-42fe-85d3-2cb95ec4d41f\") " pod="openstack/glance-db-create-slqs6" Feb 17 16:22:29 crc kubenswrapper[4672]: I0217 16:22:29.399830 4672 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77264531-415d-45b2-8009-2f1106313532-operator-scripts\") pod \"glance-688a-account-create-update-x7qhx\" (UID: \"77264531-415d-45b2-8009-2f1106313532\") " pod="openstack/glance-688a-account-create-update-x7qhx" Feb 17 16:22:29 crc kubenswrapper[4672]: I0217 16:22:29.399890 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52fb83c9-1cc2-42fe-85d3-2cb95ec4d41f-operator-scripts\") pod \"glance-db-create-slqs6\" (UID: \"52fb83c9-1cc2-42fe-85d3-2cb95ec4d41f\") " pod="openstack/glance-db-create-slqs6" Feb 17 16:22:29 crc kubenswrapper[4672]: I0217 16:22:29.399937 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdzjc\" (UniqueName: \"kubernetes.io/projected/52fb83c9-1cc2-42fe-85d3-2cb95ec4d41f-kube-api-access-vdzjc\") pod \"glance-db-create-slqs6\" (UID: \"52fb83c9-1cc2-42fe-85d3-2cb95ec4d41f\") " pod="openstack/glance-db-create-slqs6" Feb 17 16:22:29 crc kubenswrapper[4672]: I0217 16:22:29.399997 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrlnl\" (UniqueName: \"kubernetes.io/projected/77264531-415d-45b2-8009-2f1106313532-kube-api-access-vrlnl\") pod \"glance-688a-account-create-update-x7qhx\" (UID: \"77264531-415d-45b2-8009-2f1106313532\") " pod="openstack/glance-688a-account-create-update-x7qhx" Feb 17 16:22:29 crc kubenswrapper[4672]: I0217 16:22:29.400871 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52fb83c9-1cc2-42fe-85d3-2cb95ec4d41f-operator-scripts\") pod \"glance-db-create-slqs6\" (UID: \"52fb83c9-1cc2-42fe-85d3-2cb95ec4d41f\") " pod="openstack/glance-db-create-slqs6" Feb 17 16:22:29 crc kubenswrapper[4672]: I0217 16:22:29.417113 4672 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdzjc\" (UniqueName: \"kubernetes.io/projected/52fb83c9-1cc2-42fe-85d3-2cb95ec4d41f-kube-api-access-vdzjc\") pod \"glance-db-create-slqs6\" (UID: \"52fb83c9-1cc2-42fe-85d3-2cb95ec4d41f\") " pod="openstack/glance-db-create-slqs6" Feb 17 16:22:29 crc kubenswrapper[4672]: I0217 16:22:29.429624 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 17 16:22:29 crc kubenswrapper[4672]: I0217 16:22:29.501578 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77264531-415d-45b2-8009-2f1106313532-operator-scripts\") pod \"glance-688a-account-create-update-x7qhx\" (UID: \"77264531-415d-45b2-8009-2f1106313532\") " pod="openstack/glance-688a-account-create-update-x7qhx" Feb 17 16:22:29 crc kubenswrapper[4672]: I0217 16:22:29.502545 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77264531-415d-45b2-8009-2f1106313532-operator-scripts\") pod \"glance-688a-account-create-update-x7qhx\" (UID: \"77264531-415d-45b2-8009-2f1106313532\") " pod="openstack/glance-688a-account-create-update-x7qhx" Feb 17 16:22:29 crc kubenswrapper[4672]: I0217 16:22:29.501942 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrlnl\" (UniqueName: \"kubernetes.io/projected/77264531-415d-45b2-8009-2f1106313532-kube-api-access-vrlnl\") pod \"glance-688a-account-create-update-x7qhx\" (UID: \"77264531-415d-45b2-8009-2f1106313532\") " pod="openstack/glance-688a-account-create-update-x7qhx" Feb 17 16:22:29 crc kubenswrapper[4672]: I0217 16:22:29.515843 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrlnl\" (UniqueName: 
\"kubernetes.io/projected/77264531-415d-45b2-8009-2f1106313532-kube-api-access-vrlnl\") pod \"glance-688a-account-create-update-x7qhx\" (UID: \"77264531-415d-45b2-8009-2f1106313532\") " pod="openstack/glance-688a-account-create-update-x7qhx" Feb 17 16:22:29 crc kubenswrapper[4672]: I0217 16:22:29.573193 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-slqs6" Feb 17 16:22:29 crc kubenswrapper[4672]: I0217 16:22:29.582797 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-688a-account-create-update-x7qhx" Feb 17 16:22:29 crc kubenswrapper[4672]: I0217 16:22:29.998668 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-zrw2r" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.042688 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-vcfsn"] Feb 17 16:22:30 crc kubenswrapper[4672]: E0217 16:22:30.043079 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59695c47-0c8e-4e97-b04f-3200eb8efc42" containerName="swift-ring-rebalance" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.043092 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="59695c47-0c8e-4e97-b04f-3200eb8efc42" containerName="swift-ring-rebalance" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.043266 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="59695c47-0c8e-4e97-b04f-3200eb8efc42" containerName="swift-ring-rebalance" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.043876 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-vcfsn" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.049164 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-vcfsn"] Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.117044 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/59695c47-0c8e-4e97-b04f-3200eb8efc42-etc-swift\") pod \"59695c47-0c8e-4e97-b04f-3200eb8efc42\" (UID: \"59695c47-0c8e-4e97-b04f-3200eb8efc42\") " Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.117214 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plvk7\" (UniqueName: \"kubernetes.io/projected/59695c47-0c8e-4e97-b04f-3200eb8efc42-kube-api-access-plvk7\") pod \"59695c47-0c8e-4e97-b04f-3200eb8efc42\" (UID: \"59695c47-0c8e-4e97-b04f-3200eb8efc42\") " Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.117261 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/59695c47-0c8e-4e97-b04f-3200eb8efc42-dispersionconf\") pod \"59695c47-0c8e-4e97-b04f-3200eb8efc42\" (UID: \"59695c47-0c8e-4e97-b04f-3200eb8efc42\") " Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.117338 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/59695c47-0c8e-4e97-b04f-3200eb8efc42-swiftconf\") pod \"59695c47-0c8e-4e97-b04f-3200eb8efc42\" (UID: \"59695c47-0c8e-4e97-b04f-3200eb8efc42\") " Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.117390 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/59695c47-0c8e-4e97-b04f-3200eb8efc42-scripts\") pod \"59695c47-0c8e-4e97-b04f-3200eb8efc42\" (UID: \"59695c47-0c8e-4e97-b04f-3200eb8efc42\") " Feb 17 16:22:30 crc 
kubenswrapper[4672]: I0217 16:22:30.117410 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/59695c47-0c8e-4e97-b04f-3200eb8efc42-ring-data-devices\") pod \"59695c47-0c8e-4e97-b04f-3200eb8efc42\" (UID: \"59695c47-0c8e-4e97-b04f-3200eb8efc42\") " Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.117433 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59695c47-0c8e-4e97-b04f-3200eb8efc42-combined-ca-bundle\") pod \"59695c47-0c8e-4e97-b04f-3200eb8efc42\" (UID: \"59695c47-0c8e-4e97-b04f-3200eb8efc42\") " Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.122216 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59695c47-0c8e-4e97-b04f-3200eb8efc42-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "59695c47-0c8e-4e97-b04f-3200eb8efc42" (UID: "59695c47-0c8e-4e97-b04f-3200eb8efc42"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.122428 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59695c47-0c8e-4e97-b04f-3200eb8efc42-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "59695c47-0c8e-4e97-b04f-3200eb8efc42" (UID: "59695c47-0c8e-4e97-b04f-3200eb8efc42"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.160837 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59695c47-0c8e-4e97-b04f-3200eb8efc42-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "59695c47-0c8e-4e97-b04f-3200eb8efc42" (UID: "59695c47-0c8e-4e97-b04f-3200eb8efc42"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.176062 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-ecb0-account-create-update-tcklb"] Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.177109 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59695c47-0c8e-4e97-b04f-3200eb8efc42-scripts" (OuterVolumeSpecName: "scripts") pod "59695c47-0c8e-4e97-b04f-3200eb8efc42" (UID: "59695c47-0c8e-4e97-b04f-3200eb8efc42"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.177867 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-ecb0-account-create-update-tcklb" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.180812 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.182185 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59695c47-0c8e-4e97-b04f-3200eb8efc42-kube-api-access-plvk7" (OuterVolumeSpecName: "kube-api-access-plvk7") pod "59695c47-0c8e-4e97-b04f-3200eb8efc42" (UID: "59695c47-0c8e-4e97-b04f-3200eb8efc42"). InnerVolumeSpecName "kube-api-access-plvk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.183612 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59695c47-0c8e-4e97-b04f-3200eb8efc42-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59695c47-0c8e-4e97-b04f-3200eb8efc42" (UID: "59695c47-0c8e-4e97-b04f-3200eb8efc42"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.184692 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59695c47-0c8e-4e97-b04f-3200eb8efc42-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "59695c47-0c8e-4e97-b04f-3200eb8efc42" (UID: "59695c47-0c8e-4e97-b04f-3200eb8efc42"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.188715 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-ecb0-account-create-update-tcklb"] Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.263073 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-tkd5l"] Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.264520 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-tkd5l" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.274824 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsjhx\" (UniqueName: \"kubernetes.io/projected/319add3d-105e-415b-88e8-b42b594b72da-kube-api-access-qsjhx\") pod \"keystone-db-create-vcfsn\" (UID: \"319add3d-105e-415b-88e8-b42b594b72da\") " pod="openstack/keystone-db-create-vcfsn" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.274941 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/319add3d-105e-415b-88e8-b42b594b72da-operator-scripts\") pod \"keystone-db-create-vcfsn\" (UID: \"319add3d-105e-415b-88e8-b42b594b72da\") " pod="openstack/keystone-db-create-vcfsn" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.275016 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plvk7\" (UniqueName: 
\"kubernetes.io/projected/59695c47-0c8e-4e97-b04f-3200eb8efc42-kube-api-access-plvk7\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.275031 4672 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/59695c47-0c8e-4e97-b04f-3200eb8efc42-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.275042 4672 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/59695c47-0c8e-4e97-b04f-3200eb8efc42-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.275050 4672 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/59695c47-0c8e-4e97-b04f-3200eb8efc42-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.275059 4672 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/59695c47-0c8e-4e97-b04f-3200eb8efc42-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.275067 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59695c47-0c8e-4e97-b04f-3200eb8efc42-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.275074 4672 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/59695c47-0c8e-4e97-b04f-3200eb8efc42-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.298712 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-tkd5l"] Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.310961 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-ring-rebalance-zrw2r" event={"ID":"59695c47-0c8e-4e97-b04f-3200eb8efc42","Type":"ContainerDied","Data":"0c4d9dc5554c7c6544b6d12b496478cf3c456aa4d797b2efc9c2b930393c5151"} Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.311122 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c4d9dc5554c7c6544b6d12b496478cf3c456aa4d797b2efc9c2b930393c5151" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.311250 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-zrw2r" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.316988 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerStarted","Data":"1722f428334a1de321c821e299e3526dfaf27650f5a791aad97e83a2cd3ceac4"} Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.358445 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-688a-account-create-update-x7qhx"] Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.377136 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/319add3d-105e-415b-88e8-b42b594b72da-operator-scripts\") pod \"keystone-db-create-vcfsn\" (UID: \"319add3d-105e-415b-88e8-b42b594b72da\") " pod="openstack/keystone-db-create-vcfsn" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.377191 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66b89c41-8aa4-4151-81ef-39e5c8d17d32-operator-scripts\") pod \"keystone-ecb0-account-create-update-tcklb\" (UID: \"66b89c41-8aa4-4151-81ef-39e5c8d17d32\") " pod="openstack/keystone-ecb0-account-create-update-tcklb" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.377235 
4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-769p5\" (UniqueName: \"kubernetes.io/projected/66b89c41-8aa4-4151-81ef-39e5c8d17d32-kube-api-access-769p5\") pod \"keystone-ecb0-account-create-update-tcklb\" (UID: \"66b89c41-8aa4-4151-81ef-39e5c8d17d32\") " pod="openstack/keystone-ecb0-account-create-update-tcklb" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.377260 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6psg\" (UniqueName: \"kubernetes.io/projected/69254987-f071-4542-8b6f-dca3a9333b96-kube-api-access-v6psg\") pod \"placement-db-create-tkd5l\" (UID: \"69254987-f071-4542-8b6f-dca3a9333b96\") " pod="openstack/placement-db-create-tkd5l" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.377309 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69254987-f071-4542-8b6f-dca3a9333b96-operator-scripts\") pod \"placement-db-create-tkd5l\" (UID: \"69254987-f071-4542-8b6f-dca3a9333b96\") " pod="openstack/placement-db-create-tkd5l" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.377892 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsjhx\" (UniqueName: \"kubernetes.io/projected/319add3d-105e-415b-88e8-b42b594b72da-kube-api-access-qsjhx\") pod \"keystone-db-create-vcfsn\" (UID: \"319add3d-105e-415b-88e8-b42b594b72da\") " pod="openstack/keystone-db-create-vcfsn" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.378001 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/319add3d-105e-415b-88e8-b42b594b72da-operator-scripts\") pod \"keystone-db-create-vcfsn\" (UID: \"319add3d-105e-415b-88e8-b42b594b72da\") " pod="openstack/keystone-db-create-vcfsn" Feb 17 16:22:30 crc 
kubenswrapper[4672]: I0217 16:22:30.395636 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsjhx\" (UniqueName: \"kubernetes.io/projected/319add3d-105e-415b-88e8-b42b594b72da-kube-api-access-qsjhx\") pod \"keystone-db-create-vcfsn\" (UID: \"319add3d-105e-415b-88e8-b42b594b72da\") " pod="openstack/keystone-db-create-vcfsn" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.440164 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-1287-account-create-update-x77sb"] Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.441676 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1287-account-create-update-x77sb" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.443627 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.452501 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1287-account-create-update-x77sb"] Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.479292 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66b89c41-8aa4-4151-81ef-39e5c8d17d32-operator-scripts\") pod \"keystone-ecb0-account-create-update-tcklb\" (UID: \"66b89c41-8aa4-4151-81ef-39e5c8d17d32\") " pod="openstack/keystone-ecb0-account-create-update-tcklb" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.479364 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-769p5\" (UniqueName: \"kubernetes.io/projected/66b89c41-8aa4-4151-81ef-39e5c8d17d32-kube-api-access-769p5\") pod \"keystone-ecb0-account-create-update-tcklb\" (UID: \"66b89c41-8aa4-4151-81ef-39e5c8d17d32\") " pod="openstack/keystone-ecb0-account-create-update-tcklb" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.479391 4672 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6psg\" (UniqueName: \"kubernetes.io/projected/69254987-f071-4542-8b6f-dca3a9333b96-kube-api-access-v6psg\") pod \"placement-db-create-tkd5l\" (UID: \"69254987-f071-4542-8b6f-dca3a9333b96\") " pod="openstack/placement-db-create-tkd5l" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.479448 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69254987-f071-4542-8b6f-dca3a9333b96-operator-scripts\") pod \"placement-db-create-tkd5l\" (UID: \"69254987-f071-4542-8b6f-dca3a9333b96\") " pod="openstack/placement-db-create-tkd5l" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.480115 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66b89c41-8aa4-4151-81ef-39e5c8d17d32-operator-scripts\") pod \"keystone-ecb0-account-create-update-tcklb\" (UID: \"66b89c41-8aa4-4151-81ef-39e5c8d17d32\") " pod="openstack/keystone-ecb0-account-create-update-tcklb" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.502012 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69254987-f071-4542-8b6f-dca3a9333b96-operator-scripts\") pod \"placement-db-create-tkd5l\" (UID: \"69254987-f071-4542-8b6f-dca3a9333b96\") " pod="openstack/placement-db-create-tkd5l" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.504024 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6psg\" (UniqueName: \"kubernetes.io/projected/69254987-f071-4542-8b6f-dca3a9333b96-kube-api-access-v6psg\") pod \"placement-db-create-tkd5l\" (UID: \"69254987-f071-4542-8b6f-dca3a9333b96\") " pod="openstack/placement-db-create-tkd5l" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.504224 4672 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-769p5\" (UniqueName: \"kubernetes.io/projected/66b89c41-8aa4-4151-81ef-39e5c8d17d32-kube-api-access-769p5\") pod \"keystone-ecb0-account-create-update-tcklb\" (UID: \"66b89c41-8aa4-4151-81ef-39e5c8d17d32\") " pod="openstack/keystone-ecb0-account-create-update-tcklb" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.506147 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-ecb0-account-create-update-tcklb" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.555959 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-slqs6"] Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.581027 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrm4x\" (UniqueName: \"kubernetes.io/projected/6f74d64d-8fed-4be4-8d98-174760a351c0-kube-api-access-jrm4x\") pod \"placement-1287-account-create-update-x77sb\" (UID: \"6f74d64d-8fed-4be4-8d98-174760a351c0\") " pod="openstack/placement-1287-account-create-update-x77sb" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.581090 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f74d64d-8fed-4be4-8d98-174760a351c0-operator-scripts\") pod \"placement-1287-account-create-update-x77sb\" (UID: \"6f74d64d-8fed-4be4-8d98-174760a351c0\") " pod="openstack/placement-1287-account-create-update-x77sb" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.589677 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-tkd5l" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.660338 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-vcfsn" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.682780 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrm4x\" (UniqueName: \"kubernetes.io/projected/6f74d64d-8fed-4be4-8d98-174760a351c0-kube-api-access-jrm4x\") pod \"placement-1287-account-create-update-x77sb\" (UID: \"6f74d64d-8fed-4be4-8d98-174760a351c0\") " pod="openstack/placement-1287-account-create-update-x77sb" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.682864 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f74d64d-8fed-4be4-8d98-174760a351c0-operator-scripts\") pod \"placement-1287-account-create-update-x77sb\" (UID: \"6f74d64d-8fed-4be4-8d98-174760a351c0\") " pod="openstack/placement-1287-account-create-update-x77sb" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.683608 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f74d64d-8fed-4be4-8d98-174760a351c0-operator-scripts\") pod \"placement-1287-account-create-update-x77sb\" (UID: \"6f74d64d-8fed-4be4-8d98-174760a351c0\") " pod="openstack/placement-1287-account-create-update-x77sb" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.698124 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrm4x\" (UniqueName: \"kubernetes.io/projected/6f74d64d-8fed-4be4-8d98-174760a351c0-kube-api-access-jrm4x\") pod \"placement-1287-account-create-update-x77sb\" (UID: \"6f74d64d-8fed-4be4-8d98-174760a351c0\") " pod="openstack/placement-1287-account-create-update-x77sb" Feb 17 16:22:30 crc kubenswrapper[4672]: I0217 16:22:30.780654 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-1287-account-create-update-x77sb" Feb 17 16:22:31 crc kubenswrapper[4672]: I0217 16:22:31.249940 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 17 16:22:31 crc kubenswrapper[4672]: W0217 16:22:31.252874 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f82a4ce_8da0_40f1_996a_843302449a12.slice/crio-bb261f5ce62bb773919e9762b6376281a6fd1594ea277fb4a711683ec58c3825 WatchSource:0}: Error finding container bb261f5ce62bb773919e9762b6376281a6fd1594ea277fb4a711683ec58c3825: Status 404 returned error can't find the container with id bb261f5ce62bb773919e9762b6376281a6fd1594ea277fb4a711683ec58c3825 Feb 17 16:22:31 crc kubenswrapper[4672]: I0217 16:22:31.326130 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6f82a4ce-8da0-40f1-996a-843302449a12","Type":"ContainerStarted","Data":"bb261f5ce62bb773919e9762b6376281a6fd1594ea277fb4a711683ec58c3825"} Feb 17 16:22:31 crc kubenswrapper[4672]: I0217 16:22:31.333577 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-slqs6" event={"ID":"52fb83c9-1cc2-42fe-85d3-2cb95ec4d41f","Type":"ContainerStarted","Data":"a88fc86b813c4c380e2b19aee8793e5742986cb50927a434e1c6689a25893796"} Feb 17 16:22:31 crc kubenswrapper[4672]: I0217 16:22:31.337864 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-688a-account-create-update-x7qhx" event={"ID":"77264531-415d-45b2-8009-2f1106313532","Type":"ContainerStarted","Data":"f18c3a59d082468338fc89cdd399ece4f29972d5b3c0c9d528f34dbf0e8b9d60"} Feb 17 16:22:31 crc kubenswrapper[4672]: I0217 16:22:31.668812 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-ecb0-account-create-update-tcklb"] Feb 17 16:22:31 crc kubenswrapper[4672]: W0217 16:22:31.696311 4672 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66b89c41_8aa4_4151_81ef_39e5c8d17d32.slice/crio-63aa00003118338601dbde0ac2c9505b1a6373c995feb5a4279e875893855efe WatchSource:0}: Error finding container 63aa00003118338601dbde0ac2c9505b1a6373c995feb5a4279e875893855efe: Status 404 returned error can't find the container with id 63aa00003118338601dbde0ac2c9505b1a6373c995feb5a4279e875893855efe Feb 17 16:22:31 crc kubenswrapper[4672]: I0217 16:22:31.822831 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-gwjj7" Feb 17 16:22:31 crc kubenswrapper[4672]: I0217 16:22:31.824416 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1287-account-create-update-x77sb"] Feb 17 16:22:31 crc kubenswrapper[4672]: W0217 16:22:31.829958 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f74d64d_8fed_4be4_8d98_174760a351c0.slice/crio-2bacffe7b5bc85d05ffccead4c7720a7088f2c8dd239dc84edcf3a4c97b4d951 WatchSource:0}: Error finding container 2bacffe7b5bc85d05ffccead4c7720a7088f2c8dd239dc84edcf3a4c97b4d951: Status 404 returned error can't find the container with id 2bacffe7b5bc85d05ffccead4c7720a7088f2c8dd239dc84edcf3a4c97b4d951 Feb 17 16:22:31 crc kubenswrapper[4672]: I0217 16:22:31.878337 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-vcfsn"] Feb 17 16:22:31 crc kubenswrapper[4672]: I0217 16:22:31.902169 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-tkd5l"] Feb 17 16:22:32 crc kubenswrapper[4672]: I0217 16:22:32.023035 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-r97r4" Feb 17 16:22:32 crc kubenswrapper[4672]: I0217 16:22:32.088499 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr" Feb 17 16:22:32 crc kubenswrapper[4672]: I0217 16:22:32.347575 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-tkd5l" event={"ID":"69254987-f071-4542-8b6f-dca3a9333b96","Type":"ContainerStarted","Data":"76e9bd1049165f6bd7eb93befcb324ba1380c8a63d6328b44eecf19d9624fe89"} Feb 17 16:22:32 crc kubenswrapper[4672]: I0217 16:22:32.347863 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-tkd5l" event={"ID":"69254987-f071-4542-8b6f-dca3a9333b96","Type":"ContainerStarted","Data":"d2b39fe8b487b2fb6d18a431d6e0fde2a5df20df588fbbf79d7961a652a263d1"} Feb 17 16:22:32 crc kubenswrapper[4672]: I0217 16:22:32.353896 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ecb0-account-create-update-tcklb" event={"ID":"66b89c41-8aa4-4151-81ef-39e5c8d17d32","Type":"ContainerStarted","Data":"9ab2ef9cd2506f35d0d68c30fb6de002767aae3e73bd37b4b1784ed72a3083d5"} Feb 17 16:22:32 crc kubenswrapper[4672]: I0217 16:22:32.353949 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ecb0-account-create-update-tcklb" event={"ID":"66b89c41-8aa4-4151-81ef-39e5c8d17d32","Type":"ContainerStarted","Data":"63aa00003118338601dbde0ac2c9505b1a6373c995feb5a4279e875893855efe"} Feb 17 16:22:32 crc kubenswrapper[4672]: I0217 16:22:32.355403 4672 generic.go:334] "Generic (PLEG): container finished" podID="52fb83c9-1cc2-42fe-85d3-2cb95ec4d41f" containerID="c099632ec5c7fc7fc3d1d8ca66146aac9abea92c9d5ece61e05dbc7fae093aca" exitCode=0 Feb 17 16:22:32 crc kubenswrapper[4672]: I0217 16:22:32.355470 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-slqs6" event={"ID":"52fb83c9-1cc2-42fe-85d3-2cb95ec4d41f","Type":"ContainerDied","Data":"c099632ec5c7fc7fc3d1d8ca66146aac9abea92c9d5ece61e05dbc7fae093aca"} Feb 17 16:22:32 crc kubenswrapper[4672]: I0217 16:22:32.362356 4672 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"878cc257-0a03-44ea-ae70-356195dc5427","Type":"ContainerStarted","Data":"81b9ff72fcf90aafc0a053c5d22382d36d35290cdf95ef286a4d0cc62e6eaff3"} Feb 17 16:22:32 crc kubenswrapper[4672]: I0217 16:22:32.364306 4672 generic.go:334] "Generic (PLEG): container finished" podID="77264531-415d-45b2-8009-2f1106313532" containerID="593b1f1b31e25ce3dc314d8e597d0e806bd6c00dfdb06dd511cff7c5d02c8c9c" exitCode=0 Feb 17 16:22:32 crc kubenswrapper[4672]: I0217 16:22:32.364424 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-688a-account-create-update-x7qhx" event={"ID":"77264531-415d-45b2-8009-2f1106313532","Type":"ContainerDied","Data":"593b1f1b31e25ce3dc314d8e597d0e806bd6c00dfdb06dd511cff7c5d02c8c9c"} Feb 17 16:22:32 crc kubenswrapper[4672]: I0217 16:22:32.366181 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1287-account-create-update-x77sb" event={"ID":"6f74d64d-8fed-4be4-8d98-174760a351c0","Type":"ContainerStarted","Data":"25ea71516ba9b87acf013fa346695c5e8bb69dd55df4cbc6f877c9ebcd43a9ba"} Feb 17 16:22:32 crc kubenswrapper[4672]: I0217 16:22:32.366214 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1287-account-create-update-x77sb" event={"ID":"6f74d64d-8fed-4be4-8d98-174760a351c0","Type":"ContainerStarted","Data":"2bacffe7b5bc85d05ffccead4c7720a7088f2c8dd239dc84edcf3a4c97b4d951"} Feb 17 16:22:32 crc kubenswrapper[4672]: I0217 16:22:32.368002 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-tkd5l" podStartSLOduration=2.367984807 podStartE2EDuration="2.367984807s" podCreationTimestamp="2026-02-17 16:22:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:22:32.360724855 +0000 UTC m=+1161.114813587" 
watchObservedRunningTime="2026-02-17 16:22:32.367984807 +0000 UTC m=+1161.122073539" Feb 17 16:22:32 crc kubenswrapper[4672]: I0217 16:22:32.368618 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-vcfsn" event={"ID":"319add3d-105e-415b-88e8-b42b594b72da","Type":"ContainerStarted","Data":"360da191df09ca4264321fc7f5a648d7f7469215a92141e9fa0a06e4322f7eaf"} Feb 17 16:22:32 crc kubenswrapper[4672]: I0217 16:22:32.368645 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-vcfsn" event={"ID":"319add3d-105e-415b-88e8-b42b594b72da","Type":"ContainerStarted","Data":"23d4f9ac7f25b4bf8749ea520e65731825e13ccbcb88633a25526af14235b4eb"} Feb 17 16:22:32 crc kubenswrapper[4672]: I0217 16:22:32.397980 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-ecb0-account-create-update-tcklb" podStartSLOduration=2.397960588 podStartE2EDuration="2.397960588s" podCreationTimestamp="2026-02-17 16:22:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:22:32.393072439 +0000 UTC m=+1161.147161171" watchObservedRunningTime="2026-02-17 16:22:32.397960588 +0000 UTC m=+1161.152049320" Feb 17 16:22:32 crc kubenswrapper[4672]: I0217 16:22:32.429486 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-vcfsn" podStartSLOduration=2.42946771 podStartE2EDuration="2.42946771s" podCreationTimestamp="2026-02-17 16:22:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:22:32.424093438 +0000 UTC m=+1161.178182170" watchObservedRunningTime="2026-02-17 16:22:32.42946771 +0000 UTC m=+1161.183556442" Feb 17 16:22:32 crc kubenswrapper[4672]: I0217 16:22:32.442781 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/placement-1287-account-create-update-x77sb" podStartSLOduration=2.442767781 podStartE2EDuration="2.442767781s" podCreationTimestamp="2026-02-17 16:22:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:22:32.439008252 +0000 UTC m=+1161.193096984" watchObservedRunningTime="2026-02-17 16:22:32.442767781 +0000 UTC m=+1161.196856513" Feb 17 16:22:32 crc kubenswrapper[4672]: I0217 16:22:32.966981 4672 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="3acacae4-cbf8-43e1-a2af-3e1bf95be39b" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 17 16:22:33 crc kubenswrapper[4672]: I0217 16:22:33.047966 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:22:33 crc kubenswrapper[4672]: I0217 16:22:33.376533 4672 generic.go:334] "Generic (PLEG): container finished" podID="6f74d64d-8fed-4be4-8d98-174760a351c0" containerID="25ea71516ba9b87acf013fa346695c5e8bb69dd55df4cbc6f877c9ebcd43a9ba" exitCode=0 Feb 17 16:22:33 crc kubenswrapper[4672]: I0217 16:22:33.376579 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1287-account-create-update-x77sb" event={"ID":"6f74d64d-8fed-4be4-8d98-174760a351c0","Type":"ContainerDied","Data":"25ea71516ba9b87acf013fa346695c5e8bb69dd55df4cbc6f877c9ebcd43a9ba"} Feb 17 16:22:33 crc kubenswrapper[4672]: I0217 16:22:33.382413 4672 generic.go:334] "Generic (PLEG): container finished" podID="319add3d-105e-415b-88e8-b42b594b72da" containerID="360da191df09ca4264321fc7f5a648d7f7469215a92141e9fa0a06e4322f7eaf" exitCode=0 Feb 17 16:22:33 crc kubenswrapper[4672]: I0217 16:22:33.382500 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-vcfsn" 
event={"ID":"319add3d-105e-415b-88e8-b42b594b72da","Type":"ContainerDied","Data":"360da191df09ca4264321fc7f5a648d7f7469215a92141e9fa0a06e4322f7eaf"} Feb 17 16:22:33 crc kubenswrapper[4672]: I0217 16:22:33.384283 4672 generic.go:334] "Generic (PLEG): container finished" podID="69254987-f071-4542-8b6f-dca3a9333b96" containerID="76e9bd1049165f6bd7eb93befcb324ba1380c8a63d6328b44eecf19d9624fe89" exitCode=0 Feb 17 16:22:33 crc kubenswrapper[4672]: I0217 16:22:33.384342 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-tkd5l" event={"ID":"69254987-f071-4542-8b6f-dca3a9333b96","Type":"ContainerDied","Data":"76e9bd1049165f6bd7eb93befcb324ba1380c8a63d6328b44eecf19d9624fe89"} Feb 17 16:22:33 crc kubenswrapper[4672]: I0217 16:22:33.386011 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6f82a4ce-8da0-40f1-996a-843302449a12","Type":"ContainerStarted","Data":"60aada542bf945f4c82d0fd41aafb5b6f81c95415f6fa3db37fd2c7b54d32d4b"} Feb 17 16:22:33 crc kubenswrapper[4672]: I0217 16:22:33.396766 4672 generic.go:334] "Generic (PLEG): container finished" podID="66b89c41-8aa4-4151-81ef-39e5c8d17d32" containerID="9ab2ef9cd2506f35d0d68c30fb6de002767aae3e73bd37b4b1784ed72a3083d5" exitCode=0 Feb 17 16:22:33 crc kubenswrapper[4672]: I0217 16:22:33.396848 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ecb0-account-create-update-tcklb" event={"ID":"66b89c41-8aa4-4151-81ef-39e5c8d17d32","Type":"ContainerDied","Data":"9ab2ef9cd2506f35d0d68c30fb6de002767aae3e73bd37b4b1784ed72a3083d5"} Feb 17 16:22:34 crc kubenswrapper[4672]: I0217 16:22:34.218254 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-688a-account-create-update-x7qhx" Feb 17 16:22:34 crc kubenswrapper[4672]: I0217 16:22:34.224052 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-slqs6" Feb 17 16:22:34 crc kubenswrapper[4672]: I0217 16:22:34.385134 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdzjc\" (UniqueName: \"kubernetes.io/projected/52fb83c9-1cc2-42fe-85d3-2cb95ec4d41f-kube-api-access-vdzjc\") pod \"52fb83c9-1cc2-42fe-85d3-2cb95ec4d41f\" (UID: \"52fb83c9-1cc2-42fe-85d3-2cb95ec4d41f\") " Feb 17 16:22:34 crc kubenswrapper[4672]: I0217 16:22:34.386335 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52fb83c9-1cc2-42fe-85d3-2cb95ec4d41f-operator-scripts\") pod \"52fb83c9-1cc2-42fe-85d3-2cb95ec4d41f\" (UID: \"52fb83c9-1cc2-42fe-85d3-2cb95ec4d41f\") " Feb 17 16:22:34 crc kubenswrapper[4672]: I0217 16:22:34.386365 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrlnl\" (UniqueName: \"kubernetes.io/projected/77264531-415d-45b2-8009-2f1106313532-kube-api-access-vrlnl\") pod \"77264531-415d-45b2-8009-2f1106313532\" (UID: \"77264531-415d-45b2-8009-2f1106313532\") " Feb 17 16:22:34 crc kubenswrapper[4672]: I0217 16:22:34.386415 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77264531-415d-45b2-8009-2f1106313532-operator-scripts\") pod \"77264531-415d-45b2-8009-2f1106313532\" (UID: \"77264531-415d-45b2-8009-2f1106313532\") " Feb 17 16:22:34 crc kubenswrapper[4672]: I0217 16:22:34.387230 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77264531-415d-45b2-8009-2f1106313532-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "77264531-415d-45b2-8009-2f1106313532" (UID: "77264531-415d-45b2-8009-2f1106313532"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:34 crc kubenswrapper[4672]: I0217 16:22:34.387431 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52fb83c9-1cc2-42fe-85d3-2cb95ec4d41f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "52fb83c9-1cc2-42fe-85d3-2cb95ec4d41f" (UID: "52fb83c9-1cc2-42fe-85d3-2cb95ec4d41f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:34 crc kubenswrapper[4672]: I0217 16:22:34.388451 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52fb83c9-1cc2-42fe-85d3-2cb95ec4d41f-kube-api-access-vdzjc" (OuterVolumeSpecName: "kube-api-access-vdzjc") pod "52fb83c9-1cc2-42fe-85d3-2cb95ec4d41f" (UID: "52fb83c9-1cc2-42fe-85d3-2cb95ec4d41f"). InnerVolumeSpecName "kube-api-access-vdzjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:22:34 crc kubenswrapper[4672]: I0217 16:22:34.390594 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77264531-415d-45b2-8009-2f1106313532-kube-api-access-vrlnl" (OuterVolumeSpecName: "kube-api-access-vrlnl") pod "77264531-415d-45b2-8009-2f1106313532" (UID: "77264531-415d-45b2-8009-2f1106313532"). InnerVolumeSpecName "kube-api-access-vrlnl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:22:34 crc kubenswrapper[4672]: I0217 16:22:34.408842 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6f82a4ce-8da0-40f1-996a-843302449a12","Type":"ContainerStarted","Data":"51c0eab9b52c740a2974d23ac3e03f34c516b1943d580b5ae07480a7241e501c"} Feb 17 16:22:34 crc kubenswrapper[4672]: I0217 16:22:34.408895 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6f82a4ce-8da0-40f1-996a-843302449a12","Type":"ContainerStarted","Data":"14484bbd7f071c51b24ae446900c0615ca446872128da356eca704f0c17c9e0d"} Feb 17 16:22:34 crc kubenswrapper[4672]: I0217 16:22:34.408910 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6f82a4ce-8da0-40f1-996a-843302449a12","Type":"ContainerStarted","Data":"5b28e09a0f77df0c25650d1ab7ca349dbed3d671ac6e1673c9ec893e0bb70050"} Feb 17 16:22:34 crc kubenswrapper[4672]: I0217 16:22:34.411888 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-slqs6" Feb 17 16:22:34 crc kubenswrapper[4672]: I0217 16:22:34.411905 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-slqs6" event={"ID":"52fb83c9-1cc2-42fe-85d3-2cb95ec4d41f","Type":"ContainerDied","Data":"a88fc86b813c4c380e2b19aee8793e5742986cb50927a434e1c6689a25893796"} Feb 17 16:22:34 crc kubenswrapper[4672]: I0217 16:22:34.411977 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a88fc86b813c4c380e2b19aee8793e5742986cb50927a434e1c6689a25893796" Feb 17 16:22:34 crc kubenswrapper[4672]: I0217 16:22:34.415920 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"878cc257-0a03-44ea-ae70-356195dc5427","Type":"ContainerStarted","Data":"d211f7c0a6efc1d39c5a671b547c6d127b5eec7be3d091240b64d2494b576ee4"} Feb 17 16:22:34 crc kubenswrapper[4672]: I0217 16:22:34.417783 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-688a-account-create-update-x7qhx" Feb 17 16:22:34 crc kubenswrapper[4672]: I0217 16:22:34.417825 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-688a-account-create-update-x7qhx" event={"ID":"77264531-415d-45b2-8009-2f1106313532","Type":"ContainerDied","Data":"f18c3a59d082468338fc89cdd399ece4f29972d5b3c0c9d528f34dbf0e8b9d60"} Feb 17 16:22:34 crc kubenswrapper[4672]: I0217 16:22:34.417841 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f18c3a59d082468338fc89cdd399ece4f29972d5b3c0c9d528f34dbf0e8b9d60" Feb 17 16:22:34 crc kubenswrapper[4672]: I0217 16:22:34.488438 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdzjc\" (UniqueName: \"kubernetes.io/projected/52fb83c9-1cc2-42fe-85d3-2cb95ec4d41f-kube-api-access-vdzjc\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:34 crc kubenswrapper[4672]: I0217 16:22:34.488473 4672 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52fb83c9-1cc2-42fe-85d3-2cb95ec4d41f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:34 crc kubenswrapper[4672]: I0217 16:22:34.488487 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrlnl\" (UniqueName: \"kubernetes.io/projected/77264531-415d-45b2-8009-2f1106313532-kube-api-access-vrlnl\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:34 crc kubenswrapper[4672]: I0217 16:22:34.488499 4672 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77264531-415d-45b2-8009-2f1106313532-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:34 crc kubenswrapper[4672]: I0217 16:22:34.723944 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-8nlcn" Feb 17 16:22:34 crc kubenswrapper[4672]: I0217 16:22:34.751312 4672 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-8nlcn" Feb 17 16:22:34 crc kubenswrapper[4672]: I0217 16:22:34.767745 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1287-account-create-update-x77sb" Feb 17 16:22:34 crc kubenswrapper[4672]: I0217 16:22:34.895197 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f74d64d-8fed-4be4-8d98-174760a351c0-operator-scripts\") pod \"6f74d64d-8fed-4be4-8d98-174760a351c0\" (UID: \"6f74d64d-8fed-4be4-8d98-174760a351c0\") " Feb 17 16:22:34 crc kubenswrapper[4672]: I0217 16:22:34.895383 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrm4x\" (UniqueName: \"kubernetes.io/projected/6f74d64d-8fed-4be4-8d98-174760a351c0-kube-api-access-jrm4x\") pod \"6f74d64d-8fed-4be4-8d98-174760a351c0\" (UID: \"6f74d64d-8fed-4be4-8d98-174760a351c0\") " Feb 17 16:22:34 crc kubenswrapper[4672]: I0217 16:22:34.897847 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f74d64d-8fed-4be4-8d98-174760a351c0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6f74d64d-8fed-4be4-8d98-174760a351c0" (UID: "6f74d64d-8fed-4be4-8d98-174760a351c0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:34 crc kubenswrapper[4672]: I0217 16:22:34.904409 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f74d64d-8fed-4be4-8d98-174760a351c0-kube-api-access-jrm4x" (OuterVolumeSpecName: "kube-api-access-jrm4x") pod "6f74d64d-8fed-4be4-8d98-174760a351c0" (UID: "6f74d64d-8fed-4be4-8d98-174760a351c0"). InnerVolumeSpecName "kube-api-access-jrm4x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:22:34 crc kubenswrapper[4672]: I0217 16:22:34.972280 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-q9cd6-config-rs6sh"] Feb 17 16:22:34 crc kubenswrapper[4672]: E0217 16:22:34.972729 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52fb83c9-1cc2-42fe-85d3-2cb95ec4d41f" containerName="mariadb-database-create" Feb 17 16:22:34 crc kubenswrapper[4672]: I0217 16:22:34.972746 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="52fb83c9-1cc2-42fe-85d3-2cb95ec4d41f" containerName="mariadb-database-create" Feb 17 16:22:34 crc kubenswrapper[4672]: E0217 16:22:34.972762 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f74d64d-8fed-4be4-8d98-174760a351c0" containerName="mariadb-account-create-update" Feb 17 16:22:34 crc kubenswrapper[4672]: I0217 16:22:34.972770 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f74d64d-8fed-4be4-8d98-174760a351c0" containerName="mariadb-account-create-update" Feb 17 16:22:34 crc kubenswrapper[4672]: E0217 16:22:34.972804 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77264531-415d-45b2-8009-2f1106313532" containerName="mariadb-account-create-update" Feb 17 16:22:34 crc kubenswrapper[4672]: I0217 16:22:34.972814 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="77264531-415d-45b2-8009-2f1106313532" containerName="mariadb-account-create-update" Feb 17 16:22:34 crc kubenswrapper[4672]: I0217 16:22:34.973024 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="52fb83c9-1cc2-42fe-85d3-2cb95ec4d41f" containerName="mariadb-database-create" Feb 17 16:22:34 crc kubenswrapper[4672]: I0217 16:22:34.973043 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f74d64d-8fed-4be4-8d98-174760a351c0" containerName="mariadb-account-create-update" Feb 17 16:22:34 crc kubenswrapper[4672]: I0217 16:22:34.973058 4672 
memory_manager.go:354] "RemoveStaleState removing state" podUID="77264531-415d-45b2-8009-2f1106313532" containerName="mariadb-account-create-update" Feb 17 16:22:34 crc kubenswrapper[4672]: I0217 16:22:34.974947 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-q9cd6-config-rs6sh" Feb 17 16:22:34 crc kubenswrapper[4672]: I0217 16:22:34.981245 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 17 16:22:34 crc kubenswrapper[4672]: I0217 16:22:34.988114 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-q9cd6-config-rs6sh"] Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.000547 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrm4x\" (UniqueName: \"kubernetes.io/projected/6f74d64d-8fed-4be4-8d98-174760a351c0-kube-api-access-jrm4x\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.000579 4672 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f74d64d-8fed-4be4-8d98-174760a351c0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.040275 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-ecb0-account-create-update-tcklb" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.049559 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-tkd5l" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.071224 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-vcfsn" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.102083 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c45a3ebd-df31-4e4d-9a99-44f7c9f3c252-var-log-ovn\") pod \"ovn-controller-q9cd6-config-rs6sh\" (UID: \"c45a3ebd-df31-4e4d-9a99-44f7c9f3c252\") " pod="openstack/ovn-controller-q9cd6-config-rs6sh" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.102145 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c45a3ebd-df31-4e4d-9a99-44f7c9f3c252-scripts\") pod \"ovn-controller-q9cd6-config-rs6sh\" (UID: \"c45a3ebd-df31-4e4d-9a99-44f7c9f3c252\") " pod="openstack/ovn-controller-q9cd6-config-rs6sh" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.102209 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5tmq\" (UniqueName: \"kubernetes.io/projected/c45a3ebd-df31-4e4d-9a99-44f7c9f3c252-kube-api-access-m5tmq\") pod \"ovn-controller-q9cd6-config-rs6sh\" (UID: \"c45a3ebd-df31-4e4d-9a99-44f7c9f3c252\") " pod="openstack/ovn-controller-q9cd6-config-rs6sh" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.102263 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c45a3ebd-df31-4e4d-9a99-44f7c9f3c252-additional-scripts\") pod \"ovn-controller-q9cd6-config-rs6sh\" (UID: \"c45a3ebd-df31-4e4d-9a99-44f7c9f3c252\") " pod="openstack/ovn-controller-q9cd6-config-rs6sh" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.102283 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/c45a3ebd-df31-4e4d-9a99-44f7c9f3c252-var-run-ovn\") pod \"ovn-controller-q9cd6-config-rs6sh\" (UID: \"c45a3ebd-df31-4e4d-9a99-44f7c9f3c252\") " pod="openstack/ovn-controller-q9cd6-config-rs6sh" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.102338 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c45a3ebd-df31-4e4d-9a99-44f7c9f3c252-var-run\") pod \"ovn-controller-q9cd6-config-rs6sh\" (UID: \"c45a3ebd-df31-4e4d-9a99-44f7c9f3c252\") " pod="openstack/ovn-controller-q9cd6-config-rs6sh" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.204040 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-769p5\" (UniqueName: \"kubernetes.io/projected/66b89c41-8aa4-4151-81ef-39e5c8d17d32-kube-api-access-769p5\") pod \"66b89c41-8aa4-4151-81ef-39e5c8d17d32\" (UID: \"66b89c41-8aa4-4151-81ef-39e5c8d17d32\") " Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.204190 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/319add3d-105e-415b-88e8-b42b594b72da-operator-scripts\") pod \"319add3d-105e-415b-88e8-b42b594b72da\" (UID: \"319add3d-105e-415b-88e8-b42b594b72da\") " Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.204253 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsjhx\" (UniqueName: \"kubernetes.io/projected/319add3d-105e-415b-88e8-b42b594b72da-kube-api-access-qsjhx\") pod \"319add3d-105e-415b-88e8-b42b594b72da\" (UID: \"319add3d-105e-415b-88e8-b42b594b72da\") " Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.204282 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69254987-f071-4542-8b6f-dca3a9333b96-operator-scripts\") pod 
\"69254987-f071-4542-8b6f-dca3a9333b96\" (UID: \"69254987-f071-4542-8b6f-dca3a9333b96\") " Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.204308 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6psg\" (UniqueName: \"kubernetes.io/projected/69254987-f071-4542-8b6f-dca3a9333b96-kube-api-access-v6psg\") pod \"69254987-f071-4542-8b6f-dca3a9333b96\" (UID: \"69254987-f071-4542-8b6f-dca3a9333b96\") " Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.204413 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66b89c41-8aa4-4151-81ef-39e5c8d17d32-operator-scripts\") pod \"66b89c41-8aa4-4151-81ef-39e5c8d17d32\" (UID: \"66b89c41-8aa4-4151-81ef-39e5c8d17d32\") " Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.204729 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c45a3ebd-df31-4e4d-9a99-44f7c9f3c252-var-log-ovn\") pod \"ovn-controller-q9cd6-config-rs6sh\" (UID: \"c45a3ebd-df31-4e4d-9a99-44f7c9f3c252\") " pod="openstack/ovn-controller-q9cd6-config-rs6sh" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.204771 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c45a3ebd-df31-4e4d-9a99-44f7c9f3c252-scripts\") pod \"ovn-controller-q9cd6-config-rs6sh\" (UID: \"c45a3ebd-df31-4e4d-9a99-44f7c9f3c252\") " pod="openstack/ovn-controller-q9cd6-config-rs6sh" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.204849 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5tmq\" (UniqueName: \"kubernetes.io/projected/c45a3ebd-df31-4e4d-9a99-44f7c9f3c252-kube-api-access-m5tmq\") pod \"ovn-controller-q9cd6-config-rs6sh\" (UID: \"c45a3ebd-df31-4e4d-9a99-44f7c9f3c252\") " 
pod="openstack/ovn-controller-q9cd6-config-rs6sh" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.204925 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c45a3ebd-df31-4e4d-9a99-44f7c9f3c252-additional-scripts\") pod \"ovn-controller-q9cd6-config-rs6sh\" (UID: \"c45a3ebd-df31-4e4d-9a99-44f7c9f3c252\") " pod="openstack/ovn-controller-q9cd6-config-rs6sh" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.204961 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c45a3ebd-df31-4e4d-9a99-44f7c9f3c252-var-run-ovn\") pod \"ovn-controller-q9cd6-config-rs6sh\" (UID: \"c45a3ebd-df31-4e4d-9a99-44f7c9f3c252\") " pod="openstack/ovn-controller-q9cd6-config-rs6sh" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.205028 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c45a3ebd-df31-4e4d-9a99-44f7c9f3c252-var-run\") pod \"ovn-controller-q9cd6-config-rs6sh\" (UID: \"c45a3ebd-df31-4e4d-9a99-44f7c9f3c252\") " pod="openstack/ovn-controller-q9cd6-config-rs6sh" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.205331 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c45a3ebd-df31-4e4d-9a99-44f7c9f3c252-var-log-ovn\") pod \"ovn-controller-q9cd6-config-rs6sh\" (UID: \"c45a3ebd-df31-4e4d-9a99-44f7c9f3c252\") " pod="openstack/ovn-controller-q9cd6-config-rs6sh" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.205355 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66b89c41-8aa4-4151-81ef-39e5c8d17d32-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "66b89c41-8aa4-4151-81ef-39e5c8d17d32" (UID: "66b89c41-8aa4-4151-81ef-39e5c8d17d32"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.205349 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c45a3ebd-df31-4e4d-9a99-44f7c9f3c252-var-run\") pod \"ovn-controller-q9cd6-config-rs6sh\" (UID: \"c45a3ebd-df31-4e4d-9a99-44f7c9f3c252\") " pod="openstack/ovn-controller-q9cd6-config-rs6sh" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.205408 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c45a3ebd-df31-4e4d-9a99-44f7c9f3c252-var-run-ovn\") pod \"ovn-controller-q9cd6-config-rs6sh\" (UID: \"c45a3ebd-df31-4e4d-9a99-44f7c9f3c252\") " pod="openstack/ovn-controller-q9cd6-config-rs6sh" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.205794 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/319add3d-105e-415b-88e8-b42b594b72da-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "319add3d-105e-415b-88e8-b42b594b72da" (UID: "319add3d-105e-415b-88e8-b42b594b72da"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.205869 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c45a3ebd-df31-4e4d-9a99-44f7c9f3c252-additional-scripts\") pod \"ovn-controller-q9cd6-config-rs6sh\" (UID: \"c45a3ebd-df31-4e4d-9a99-44f7c9f3c252\") " pod="openstack/ovn-controller-q9cd6-config-rs6sh" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.205882 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69254987-f071-4542-8b6f-dca3a9333b96-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "69254987-f071-4542-8b6f-dca3a9333b96" (UID: "69254987-f071-4542-8b6f-dca3a9333b96"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.207417 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c45a3ebd-df31-4e4d-9a99-44f7c9f3c252-scripts\") pod \"ovn-controller-q9cd6-config-rs6sh\" (UID: \"c45a3ebd-df31-4e4d-9a99-44f7c9f3c252\") " pod="openstack/ovn-controller-q9cd6-config-rs6sh" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.222442 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/319add3d-105e-415b-88e8-b42b594b72da-kube-api-access-qsjhx" (OuterVolumeSpecName: "kube-api-access-qsjhx") pod "319add3d-105e-415b-88e8-b42b594b72da" (UID: "319add3d-105e-415b-88e8-b42b594b72da"). InnerVolumeSpecName "kube-api-access-qsjhx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.222797 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66b89c41-8aa4-4151-81ef-39e5c8d17d32-kube-api-access-769p5" (OuterVolumeSpecName: "kube-api-access-769p5") pod "66b89c41-8aa4-4151-81ef-39e5c8d17d32" (UID: "66b89c41-8aa4-4151-81ef-39e5c8d17d32"). InnerVolumeSpecName "kube-api-access-769p5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.224068 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69254987-f071-4542-8b6f-dca3a9333b96-kube-api-access-v6psg" (OuterVolumeSpecName: "kube-api-access-v6psg") pod "69254987-f071-4542-8b6f-dca3a9333b96" (UID: "69254987-f071-4542-8b6f-dca3a9333b96"). InnerVolumeSpecName "kube-api-access-v6psg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.226489 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5tmq\" (UniqueName: \"kubernetes.io/projected/c45a3ebd-df31-4e4d-9a99-44f7c9f3c252-kube-api-access-m5tmq\") pod \"ovn-controller-q9cd6-config-rs6sh\" (UID: \"c45a3ebd-df31-4e4d-9a99-44f7c9f3c252\") " pod="openstack/ovn-controller-q9cd6-config-rs6sh" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.306839 4672 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66b89c41-8aa4-4151-81ef-39e5c8d17d32-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.306872 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-769p5\" (UniqueName: \"kubernetes.io/projected/66b89c41-8aa4-4151-81ef-39e5c8d17d32-kube-api-access-769p5\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 
16:22:35.306885 4672 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/319add3d-105e-415b-88e8-b42b594b72da-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.306893 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsjhx\" (UniqueName: \"kubernetes.io/projected/319add3d-105e-415b-88e8-b42b594b72da-kube-api-access-qsjhx\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.306902 4672 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69254987-f071-4542-8b6f-dca3a9333b96-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.306913 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6psg\" (UniqueName: \"kubernetes.io/projected/69254987-f071-4542-8b6f-dca3a9333b96-kube-api-access-v6psg\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.368273 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-q9cd6-config-rs6sh" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.450363 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-ecb0-account-create-update-tcklb" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.450407 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ecb0-account-create-update-tcklb" event={"ID":"66b89c41-8aa4-4151-81ef-39e5c8d17d32","Type":"ContainerDied","Data":"63aa00003118338601dbde0ac2c9505b1a6373c995feb5a4279e875893855efe"} Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.450452 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63aa00003118338601dbde0ac2c9505b1a6373c995feb5a4279e875893855efe" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.453258 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1287-account-create-update-x77sb" event={"ID":"6f74d64d-8fed-4be4-8d98-174760a351c0","Type":"ContainerDied","Data":"2bacffe7b5bc85d05ffccead4c7720a7088f2c8dd239dc84edcf3a4c97b4d951"} Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.453467 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bacffe7b5bc85d05ffccead4c7720a7088f2c8dd239dc84edcf3a4c97b4d951" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.453300 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-1287-account-create-update-x77sb" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.455127 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-vcfsn" event={"ID":"319add3d-105e-415b-88e8-b42b594b72da","Type":"ContainerDied","Data":"23d4f9ac7f25b4bf8749ea520e65731825e13ccbcb88633a25526af14235b4eb"} Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.455158 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23d4f9ac7f25b4bf8749ea520e65731825e13ccbcb88633a25526af14235b4eb" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.455207 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-vcfsn" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.456689 4672 generic.go:334] "Generic (PLEG): container finished" podID="5467b054-ae2f-4852-8d68-f9ba7cd2bdab" containerID="87ffd86f4e0157f11b7f529a9b0619f55b4daab09253878dc6cec5c5e545a0d2" exitCode=0 Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.456741 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5467b054-ae2f-4852-8d68-f9ba7cd2bdab","Type":"ContainerDied","Data":"87ffd86f4e0157f11b7f529a9b0619f55b4daab09253878dc6cec5c5e545a0d2"} Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.463758 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-tkd5l" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.463780 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-tkd5l" event={"ID":"69254987-f071-4542-8b6f-dca3a9333b96","Type":"ContainerDied","Data":"d2b39fe8b487b2fb6d18a431d6e0fde2a5df20df588fbbf79d7961a652a263d1"} Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.463814 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2b39fe8b487b2fb6d18a431d6e0fde2a5df20df588fbbf79d7961a652a263d1" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.864165 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-hv8dn"] Feb 17 16:22:35 crc kubenswrapper[4672]: E0217 16:22:35.864717 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="319add3d-105e-415b-88e8-b42b594b72da" containerName="mariadb-database-create" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.864733 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="319add3d-105e-415b-88e8-b42b594b72da" containerName="mariadb-database-create" Feb 17 16:22:35 crc kubenswrapper[4672]: E0217 16:22:35.864748 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69254987-f071-4542-8b6f-dca3a9333b96" containerName="mariadb-database-create" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.864755 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="69254987-f071-4542-8b6f-dca3a9333b96" containerName="mariadb-database-create" Feb 17 16:22:35 crc kubenswrapper[4672]: E0217 16:22:35.864780 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66b89c41-8aa4-4151-81ef-39e5c8d17d32" containerName="mariadb-account-create-update" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.864786 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="66b89c41-8aa4-4151-81ef-39e5c8d17d32" 
containerName="mariadb-account-create-update" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.864928 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="319add3d-105e-415b-88e8-b42b594b72da" containerName="mariadb-database-create" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.864948 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="69254987-f071-4542-8b6f-dca3a9333b96" containerName="mariadb-database-create" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.864957 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="66b89c41-8aa4-4151-81ef-39e5c8d17d32" containerName="mariadb-account-create-update" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.865496 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hv8dn" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.873277 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.886983 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-q9cd6-config-rs6sh"] Feb 17 16:22:35 crc kubenswrapper[4672]: I0217 16:22:35.898378 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hv8dn"] Feb 17 16:22:35 crc kubenswrapper[4672]: W0217 16:22:35.948322 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc45a3ebd_df31_4e4d_9a99_44f7c9f3c252.slice/crio-cb7aeece963859901ee5437670a78fe281e46b9b626eb817881e2e4ff5051b1d WatchSource:0}: Error finding container cb7aeece963859901ee5437670a78fe281e46b9b626eb817881e2e4ff5051b1d: Status 404 returned error can't find the container with id cb7aeece963859901ee5437670a78fe281e46b9b626eb817881e2e4ff5051b1d Feb 17 16:22:36 crc kubenswrapper[4672]: I0217 16:22:36.022170 4672 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d81e8815-c66e-4498-a714-6a1176f7bf1a-operator-scripts\") pod \"root-account-create-update-hv8dn\" (UID: \"d81e8815-c66e-4498-a714-6a1176f7bf1a\") " pod="openstack/root-account-create-update-hv8dn" Feb 17 16:22:36 crc kubenswrapper[4672]: I0217 16:22:36.022661 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djv8x\" (UniqueName: \"kubernetes.io/projected/d81e8815-c66e-4498-a714-6a1176f7bf1a-kube-api-access-djv8x\") pod \"root-account-create-update-hv8dn\" (UID: \"d81e8815-c66e-4498-a714-6a1176f7bf1a\") " pod="openstack/root-account-create-update-hv8dn" Feb 17 16:22:36 crc kubenswrapper[4672]: I0217 16:22:36.124074 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d81e8815-c66e-4498-a714-6a1176f7bf1a-operator-scripts\") pod \"root-account-create-update-hv8dn\" (UID: \"d81e8815-c66e-4498-a714-6a1176f7bf1a\") " pod="openstack/root-account-create-update-hv8dn" Feb 17 16:22:36 crc kubenswrapper[4672]: I0217 16:22:36.124117 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djv8x\" (UniqueName: \"kubernetes.io/projected/d81e8815-c66e-4498-a714-6a1176f7bf1a-kube-api-access-djv8x\") pod \"root-account-create-update-hv8dn\" (UID: \"d81e8815-c66e-4498-a714-6a1176f7bf1a\") " pod="openstack/root-account-create-update-hv8dn" Feb 17 16:22:36 crc kubenswrapper[4672]: I0217 16:22:36.125192 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d81e8815-c66e-4498-a714-6a1176f7bf1a-operator-scripts\") pod \"root-account-create-update-hv8dn\" (UID: \"d81e8815-c66e-4498-a714-6a1176f7bf1a\") " pod="openstack/root-account-create-update-hv8dn" Feb 17 
16:22:36 crc kubenswrapper[4672]: I0217 16:22:36.144099 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djv8x\" (UniqueName: \"kubernetes.io/projected/d81e8815-c66e-4498-a714-6a1176f7bf1a-kube-api-access-djv8x\") pod \"root-account-create-update-hv8dn\" (UID: \"d81e8815-c66e-4498-a714-6a1176f7bf1a\") " pod="openstack/root-account-create-update-hv8dn" Feb 17 16:22:36 crc kubenswrapper[4672]: I0217 16:22:36.182175 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hv8dn" Feb 17 16:22:36 crc kubenswrapper[4672]: I0217 16:22:36.494051 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6f82a4ce-8da0-40f1-996a-843302449a12","Type":"ContainerStarted","Data":"bcfe70cbc61a91269f85de545ba7006b44d177f8c032393a22ab21f53c94da0a"} Feb 17 16:22:36 crc kubenswrapper[4672]: I0217 16:22:36.494282 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6f82a4ce-8da0-40f1-996a-843302449a12","Type":"ContainerStarted","Data":"6c923614737d3992e13b706b0759522ac2ef5e41d8030832535a907942493315"} Feb 17 16:22:36 crc kubenswrapper[4672]: I0217 16:22:36.495961 4672 generic.go:334] "Generic (PLEG): container finished" podID="3068e639-1b58-4971-bf3e-c321ff88289b" containerID="c6fb63d9f2a376c50007c407a43b299fc08c9519b4a5c7f6c3e24d766cae0726" exitCode=0 Feb 17 16:22:36 crc kubenswrapper[4672]: I0217 16:22:36.495997 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3068e639-1b58-4971-bf3e-c321ff88289b","Type":"ContainerDied","Data":"c6fb63d9f2a376c50007c407a43b299fc08c9519b4a5c7f6c3e24d766cae0726"} Feb 17 16:22:36 crc kubenswrapper[4672]: I0217 16:22:36.500879 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"5467b054-ae2f-4852-8d68-f9ba7cd2bdab","Type":"ContainerStarted","Data":"0ba50ac14fde088faf589887170dbfe9e183afe39627f702197f9e8cc3fe394d"} Feb 17 16:22:36 crc kubenswrapper[4672]: I0217 16:22:36.501143 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:22:36 crc kubenswrapper[4672]: I0217 16:22:36.512344 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q9cd6-config-rs6sh" event={"ID":"c45a3ebd-df31-4e4d-9a99-44f7c9f3c252","Type":"ContainerStarted","Data":"0426f5faa9be35ab2713513540515886f13b5bd7bc01b26accedadb2ccde784f"} Feb 17 16:22:36 crc kubenswrapper[4672]: I0217 16:22:36.512388 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q9cd6-config-rs6sh" event={"ID":"c45a3ebd-df31-4e4d-9a99-44f7c9f3c252","Type":"ContainerStarted","Data":"cb7aeece963859901ee5437670a78fe281e46b9b626eb817881e2e4ff5051b1d"} Feb 17 16:22:36 crc kubenswrapper[4672]: I0217 16:22:36.549075 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-q9cd6-config-rs6sh" podStartSLOduration=2.549054071 podStartE2EDuration="2.549054071s" podCreationTimestamp="2026-02-17 16:22:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:22:36.543021392 +0000 UTC m=+1165.297110124" watchObservedRunningTime="2026-02-17 16:22:36.549054071 +0000 UTC m=+1165.303142803" Feb 17 16:22:36 crc kubenswrapper[4672]: I0217 16:22:36.570803 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=59.169846077 podStartE2EDuration="1m22.570786485s" podCreationTimestamp="2026-02-17 16:21:14 +0000 UTC" firstStartedPulling="2026-02-17 16:21:36.960388652 +0000 UTC m=+1105.714477384" lastFinishedPulling="2026-02-17 16:22:00.36132906 +0000 UTC m=+1129.115417792" 
observedRunningTime="2026-02-17 16:22:36.566980364 +0000 UTC m=+1165.321069096" watchObservedRunningTime="2026-02-17 16:22:36.570786485 +0000 UTC m=+1165.324875217" Feb 17 16:22:36 crc kubenswrapper[4672]: I0217 16:22:36.636959 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hv8dn"] Feb 17 16:22:36 crc kubenswrapper[4672]: W0217 16:22:36.652643 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd81e8815_c66e_4498_a714_6a1176f7bf1a.slice/crio-c59f39b48e49d6cf35a3c217dee05c4d015419141f1e6317a50c9c706512b7d3 WatchSource:0}: Error finding container c59f39b48e49d6cf35a3c217dee05c4d015419141f1e6317a50c9c706512b7d3: Status 404 returned error can't find the container with id c59f39b48e49d6cf35a3c217dee05c4d015419141f1e6317a50c9c706512b7d3 Feb 17 16:22:37 crc kubenswrapper[4672]: I0217 16:22:37.533313 4672 generic.go:334] "Generic (PLEG): container finished" podID="c45a3ebd-df31-4e4d-9a99-44f7c9f3c252" containerID="0426f5faa9be35ab2713513540515886f13b5bd7bc01b26accedadb2ccde784f" exitCode=0 Feb 17 16:22:37 crc kubenswrapper[4672]: I0217 16:22:37.533408 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q9cd6-config-rs6sh" event={"ID":"c45a3ebd-df31-4e4d-9a99-44f7c9f3c252","Type":"ContainerDied","Data":"0426f5faa9be35ab2713513540515886f13b5bd7bc01b26accedadb2ccde784f"} Feb 17 16:22:37 crc kubenswrapper[4672]: I0217 16:22:37.538191 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6f82a4ce-8da0-40f1-996a-843302449a12","Type":"ContainerStarted","Data":"e6e5d5b20965e3ca2e221f46e50ed39bc85c57efadf165e3ff6d66ea24870d98"} Feb 17 16:22:37 crc kubenswrapper[4672]: I0217 16:22:37.538233 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"6f82a4ce-8da0-40f1-996a-843302449a12","Type":"ContainerStarted","Data":"bf8b1241c58bd4f2ec3611d9e271df73ca9ada57e6be60fd319245aa02e5526b"} Feb 17 16:22:37 crc kubenswrapper[4672]: I0217 16:22:37.539943 4672 generic.go:334] "Generic (PLEG): container finished" podID="d81e8815-c66e-4498-a714-6a1176f7bf1a" containerID="4e239253615386b74a32a02d370df1d52edd468fd4cc3937b61a87ae1b60e2fa" exitCode=0 Feb 17 16:22:37 crc kubenswrapper[4672]: I0217 16:22:37.540043 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hv8dn" event={"ID":"d81e8815-c66e-4498-a714-6a1176f7bf1a","Type":"ContainerDied","Data":"4e239253615386b74a32a02d370df1d52edd468fd4cc3937b61a87ae1b60e2fa"} Feb 17 16:22:37 crc kubenswrapper[4672]: I0217 16:22:37.540061 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hv8dn" event={"ID":"d81e8815-c66e-4498-a714-6a1176f7bf1a","Type":"ContainerStarted","Data":"c59f39b48e49d6cf35a3c217dee05c4d015419141f1e6317a50c9c706512b7d3"} Feb 17 16:22:37 crc kubenswrapper[4672]: I0217 16:22:37.543666 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3068e639-1b58-4971-bf3e-c321ff88289b","Type":"ContainerStarted","Data":"a1e6f4fc864ae2ff390bb89ecd4ecc97ef1e8e578421ecf2fb2f1557f6a73ff6"} Feb 17 16:22:37 crc kubenswrapper[4672]: I0217 16:22:37.543948 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 17 16:22:37 crc kubenswrapper[4672]: I0217 16:22:37.579128 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=61.46341981 podStartE2EDuration="1m23.579113315s" podCreationTimestamp="2026-02-17 16:21:14 +0000 UTC" firstStartedPulling="2026-02-17 16:21:38.427062855 +0000 UTC m=+1107.181151587" lastFinishedPulling="2026-02-17 16:22:00.54275636 +0000 UTC m=+1129.296845092" observedRunningTime="2026-02-17 
16:22:37.571397911 +0000 UTC m=+1166.325486643" watchObservedRunningTime="2026-02-17 16:22:37.579113315 +0000 UTC m=+1166.333202047" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.285031 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hv8dn" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.321261 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-q9cd6-config-rs6sh" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.378744 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c45a3ebd-df31-4e4d-9a99-44f7c9f3c252-additional-scripts\") pod \"c45a3ebd-df31-4e4d-9a99-44f7c9f3c252\" (UID: \"c45a3ebd-df31-4e4d-9a99-44f7c9f3c252\") " Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.379443 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c45a3ebd-df31-4e4d-9a99-44f7c9f3c252-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "c45a3ebd-df31-4e4d-9a99-44f7c9f3c252" (UID: "c45a3ebd-df31-4e4d-9a99-44f7c9f3c252"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.379544 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djv8x\" (UniqueName: \"kubernetes.io/projected/d81e8815-c66e-4498-a714-6a1176f7bf1a-kube-api-access-djv8x\") pod \"d81e8815-c66e-4498-a714-6a1176f7bf1a\" (UID: \"d81e8815-c66e-4498-a714-6a1176f7bf1a\") " Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.379621 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c45a3ebd-df31-4e4d-9a99-44f7c9f3c252-var-log-ovn\") pod \"c45a3ebd-df31-4e4d-9a99-44f7c9f3c252\" (UID: \"c45a3ebd-df31-4e4d-9a99-44f7c9f3c252\") " Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.379687 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c45a3ebd-df31-4e4d-9a99-44f7c9f3c252-scripts\") pod \"c45a3ebd-df31-4e4d-9a99-44f7c9f3c252\" (UID: \"c45a3ebd-df31-4e4d-9a99-44f7c9f3c252\") " Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.379717 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c45a3ebd-df31-4e4d-9a99-44f7c9f3c252-var-run-ovn\") pod \"c45a3ebd-df31-4e4d-9a99-44f7c9f3c252\" (UID: \"c45a3ebd-df31-4e4d-9a99-44f7c9f3c252\") " Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.379863 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d81e8815-c66e-4498-a714-6a1176f7bf1a-operator-scripts\") pod \"d81e8815-c66e-4498-a714-6a1176f7bf1a\" (UID: \"d81e8815-c66e-4498-a714-6a1176f7bf1a\") " Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.379899 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/c45a3ebd-df31-4e4d-9a99-44f7c9f3c252-var-run\") pod \"c45a3ebd-df31-4e4d-9a99-44f7c9f3c252\" (UID: \"c45a3ebd-df31-4e4d-9a99-44f7c9f3c252\") " Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.379914 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c45a3ebd-df31-4e4d-9a99-44f7c9f3c252-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "c45a3ebd-df31-4e4d-9a99-44f7c9f3c252" (UID: "c45a3ebd-df31-4e4d-9a99-44f7c9f3c252"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.379927 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c45a3ebd-df31-4e4d-9a99-44f7c9f3c252-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "c45a3ebd-df31-4e4d-9a99-44f7c9f3c252" (UID: "c45a3ebd-df31-4e4d-9a99-44f7c9f3c252"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.379976 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5tmq\" (UniqueName: \"kubernetes.io/projected/c45a3ebd-df31-4e4d-9a99-44f7c9f3c252-kube-api-access-m5tmq\") pod \"c45a3ebd-df31-4e4d-9a99-44f7c9f3c252\" (UID: \"c45a3ebd-df31-4e4d-9a99-44f7c9f3c252\") " Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.380025 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c45a3ebd-df31-4e4d-9a99-44f7c9f3c252-var-run" (OuterVolumeSpecName: "var-run") pod "c45a3ebd-df31-4e4d-9a99-44f7c9f3c252" (UID: "c45a3ebd-df31-4e4d-9a99-44f7c9f3c252"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.380642 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c45a3ebd-df31-4e4d-9a99-44f7c9f3c252-scripts" (OuterVolumeSpecName: "scripts") pod "c45a3ebd-df31-4e4d-9a99-44f7c9f3c252" (UID: "c45a3ebd-df31-4e4d-9a99-44f7c9f3c252"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.380755 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d81e8815-c66e-4498-a714-6a1176f7bf1a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d81e8815-c66e-4498-a714-6a1176f7bf1a" (UID: "d81e8815-c66e-4498-a714-6a1176f7bf1a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.380916 4672 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c45a3ebd-df31-4e4d-9a99-44f7c9f3c252-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.380948 4672 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c45a3ebd-df31-4e4d-9a99-44f7c9f3c252-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.380995 4672 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c45a3ebd-df31-4e4d-9a99-44f7c9f3c252-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.381011 4672 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c45a3ebd-df31-4e4d-9a99-44f7c9f3c252-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:39 crc 
kubenswrapper[4672]: I0217 16:22:39.381029 4672 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d81e8815-c66e-4498-a714-6a1176f7bf1a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.381048 4672 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c45a3ebd-df31-4e4d-9a99-44f7c9f3c252-var-run\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.392311 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d81e8815-c66e-4498-a714-6a1176f7bf1a-kube-api-access-djv8x" (OuterVolumeSpecName: "kube-api-access-djv8x") pod "d81e8815-c66e-4498-a714-6a1176f7bf1a" (UID: "d81e8815-c66e-4498-a714-6a1176f7bf1a"). InnerVolumeSpecName "kube-api-access-djv8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.414744 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c45a3ebd-df31-4e4d-9a99-44f7c9f3c252-kube-api-access-m5tmq" (OuterVolumeSpecName: "kube-api-access-m5tmq") pod "c45a3ebd-df31-4e4d-9a99-44f7c9f3c252" (UID: "c45a3ebd-df31-4e4d-9a99-44f7c9f3c252"). InnerVolumeSpecName "kube-api-access-m5tmq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.482498 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djv8x\" (UniqueName: \"kubernetes.io/projected/d81e8815-c66e-4498-a714-6a1176f7bf1a-kube-api-access-djv8x\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.482561 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5tmq\" (UniqueName: \"kubernetes.io/projected/c45a3ebd-df31-4e4d-9a99-44f7c9f3c252-kube-api-access-m5tmq\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.533692 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-p284t"] Feb 17 16:22:39 crc kubenswrapper[4672]: E0217 16:22:39.534045 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c45a3ebd-df31-4e4d-9a99-44f7c9f3c252" containerName="ovn-config" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.534061 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="c45a3ebd-df31-4e4d-9a99-44f7c9f3c252" containerName="ovn-config" Feb 17 16:22:39 crc kubenswrapper[4672]: E0217 16:22:39.534076 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d81e8815-c66e-4498-a714-6a1176f7bf1a" containerName="mariadb-account-create-update" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.534083 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="d81e8815-c66e-4498-a714-6a1176f7bf1a" containerName="mariadb-account-create-update" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.534226 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="d81e8815-c66e-4498-a714-6a1176f7bf1a" containerName="mariadb-account-create-update" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.534256 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="c45a3ebd-df31-4e4d-9a99-44f7c9f3c252" containerName="ovn-config" 
Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.534903 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-p284t" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.537762 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qkr6q" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.538276 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.547869 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-p284t"] Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.568558 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q9cd6-config-rs6sh" event={"ID":"c45a3ebd-df31-4e4d-9a99-44f7c9f3c252","Type":"ContainerDied","Data":"cb7aeece963859901ee5437670a78fe281e46b9b626eb817881e2e4ff5051b1d"} Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.568605 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb7aeece963859901ee5437670a78fe281e46b9b626eb817881e2e4ff5051b1d" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.568665 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-q9cd6-config-rs6sh" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.580961 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6f82a4ce-8da0-40f1-996a-843302449a12","Type":"ContainerStarted","Data":"d8542943942ca032398c3b6bab9f524fb1574aa62cf69d5d222477c599a4269c"} Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.582326 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hv8dn" event={"ID":"d81e8815-c66e-4498-a714-6a1176f7bf1a","Type":"ContainerDied","Data":"c59f39b48e49d6cf35a3c217dee05c4d015419141f1e6317a50c9c706512b7d3"} Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.582348 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c59f39b48e49d6cf35a3c217dee05c4d015419141f1e6317a50c9c706512b7d3" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.582394 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hv8dn" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.586496 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bltzw\" (UniqueName: \"kubernetes.io/projected/53f2f5a7-17a6-4145-8b1b-f15d7a5309ac-kube-api-access-bltzw\") pod \"glance-db-sync-p284t\" (UID: \"53f2f5a7-17a6-4145-8b1b-f15d7a5309ac\") " pod="openstack/glance-db-sync-p284t" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.586599 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f2f5a7-17a6-4145-8b1b-f15d7a5309ac-combined-ca-bundle\") pod \"glance-db-sync-p284t\" (UID: \"53f2f5a7-17a6-4145-8b1b-f15d7a5309ac\") " pod="openstack/glance-db-sync-p284t" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.586623 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/53f2f5a7-17a6-4145-8b1b-f15d7a5309ac-db-sync-config-data\") pod \"glance-db-sync-p284t\" (UID: \"53f2f5a7-17a6-4145-8b1b-f15d7a5309ac\") " pod="openstack/glance-db-sync-p284t" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.586680 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53f2f5a7-17a6-4145-8b1b-f15d7a5309ac-config-data\") pod \"glance-db-sync-p284t\" (UID: \"53f2f5a7-17a6-4145-8b1b-f15d7a5309ac\") " pod="openstack/glance-db-sync-p284t" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.596122 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"878cc257-0a03-44ea-ae70-356195dc5427","Type":"ContainerStarted","Data":"ded6c95f43c9bd5001fe183b6701f3002c5ee4fc88d8a915bbc729bc99f91823"} Feb 17 16:22:39 crc 
kubenswrapper[4672]: I0217 16:22:39.649901 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=17.434678166 podStartE2EDuration="1m18.649881436s" podCreationTimestamp="2026-02-17 16:21:21 +0000 UTC" firstStartedPulling="2026-02-17 16:21:37.95494202 +0000 UTC m=+1106.709030862" lastFinishedPulling="2026-02-17 16:22:39.17014538 +0000 UTC m=+1167.924234132" observedRunningTime="2026-02-17 16:22:39.628701586 +0000 UTC m=+1168.382790328" watchObservedRunningTime="2026-02-17 16:22:39.649881436 +0000 UTC m=+1168.403970168" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.687909 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53f2f5a7-17a6-4145-8b1b-f15d7a5309ac-config-data\") pod \"glance-db-sync-p284t\" (UID: \"53f2f5a7-17a6-4145-8b1b-f15d7a5309ac\") " pod="openstack/glance-db-sync-p284t" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.688406 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bltzw\" (UniqueName: \"kubernetes.io/projected/53f2f5a7-17a6-4145-8b1b-f15d7a5309ac-kube-api-access-bltzw\") pod \"glance-db-sync-p284t\" (UID: \"53f2f5a7-17a6-4145-8b1b-f15d7a5309ac\") " pod="openstack/glance-db-sync-p284t" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.688601 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f2f5a7-17a6-4145-8b1b-f15d7a5309ac-combined-ca-bundle\") pod \"glance-db-sync-p284t\" (UID: \"53f2f5a7-17a6-4145-8b1b-f15d7a5309ac\") " pod="openstack/glance-db-sync-p284t" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.688686 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/53f2f5a7-17a6-4145-8b1b-f15d7a5309ac-db-sync-config-data\") 
pod \"glance-db-sync-p284t\" (UID: \"53f2f5a7-17a6-4145-8b1b-f15d7a5309ac\") " pod="openstack/glance-db-sync-p284t" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.694049 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f2f5a7-17a6-4145-8b1b-f15d7a5309ac-combined-ca-bundle\") pod \"glance-db-sync-p284t\" (UID: \"53f2f5a7-17a6-4145-8b1b-f15d7a5309ac\") " pod="openstack/glance-db-sync-p284t" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.694952 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/53f2f5a7-17a6-4145-8b1b-f15d7a5309ac-db-sync-config-data\") pod \"glance-db-sync-p284t\" (UID: \"53f2f5a7-17a6-4145-8b1b-f15d7a5309ac\") " pod="openstack/glance-db-sync-p284t" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.695986 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53f2f5a7-17a6-4145-8b1b-f15d7a5309ac-config-data\") pod \"glance-db-sync-p284t\" (UID: \"53f2f5a7-17a6-4145-8b1b-f15d7a5309ac\") " pod="openstack/glance-db-sync-p284t" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.722234 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bltzw\" (UniqueName: \"kubernetes.io/projected/53f2f5a7-17a6-4145-8b1b-f15d7a5309ac-kube-api-access-bltzw\") pod \"glance-db-sync-p284t\" (UID: \"53f2f5a7-17a6-4145-8b1b-f15d7a5309ac\") " pod="openstack/glance-db-sync-p284t" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.724566 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-q9cd6-config-rs6sh"] Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.741395 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-q9cd6-config-rs6sh"] Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.849256 4672 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-p284t" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.875915 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-q9cd6-config-pgp49"] Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.877052 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-q9cd6-config-pgp49" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.879087 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.915140 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-q9cd6-config-pgp49"] Feb 17 16:22:39 crc kubenswrapper[4672]: I0217 16:22:39.973589 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c45a3ebd-df31-4e4d-9a99-44f7c9f3c252" path="/var/lib/kubelet/pods/c45a3ebd-df31-4e4d-9a99-44f7c9f3c252/volumes" Feb 17 16:22:40 crc kubenswrapper[4672]: I0217 16:22:40.007482 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ceeff567-d677-46ae-879b-4dd9f65170a5-var-run\") pod \"ovn-controller-q9cd6-config-pgp49\" (UID: \"ceeff567-d677-46ae-879b-4dd9f65170a5\") " pod="openstack/ovn-controller-q9cd6-config-pgp49" Feb 17 16:22:40 crc kubenswrapper[4672]: I0217 16:22:40.007542 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj882\" (UniqueName: \"kubernetes.io/projected/ceeff567-d677-46ae-879b-4dd9f65170a5-kube-api-access-vj882\") pod \"ovn-controller-q9cd6-config-pgp49\" (UID: \"ceeff567-d677-46ae-879b-4dd9f65170a5\") " pod="openstack/ovn-controller-q9cd6-config-pgp49" Feb 17 16:22:40 crc kubenswrapper[4672]: I0217 16:22:40.007637 4672 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ceeff567-d677-46ae-879b-4dd9f65170a5-additional-scripts\") pod \"ovn-controller-q9cd6-config-pgp49\" (UID: \"ceeff567-d677-46ae-879b-4dd9f65170a5\") " pod="openstack/ovn-controller-q9cd6-config-pgp49" Feb 17 16:22:40 crc kubenswrapper[4672]: I0217 16:22:40.007665 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ceeff567-d677-46ae-879b-4dd9f65170a5-var-log-ovn\") pod \"ovn-controller-q9cd6-config-pgp49\" (UID: \"ceeff567-d677-46ae-879b-4dd9f65170a5\") " pod="openstack/ovn-controller-q9cd6-config-pgp49" Feb 17 16:22:40 crc kubenswrapper[4672]: I0217 16:22:40.007687 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ceeff567-d677-46ae-879b-4dd9f65170a5-var-run-ovn\") pod \"ovn-controller-q9cd6-config-pgp49\" (UID: \"ceeff567-d677-46ae-879b-4dd9f65170a5\") " pod="openstack/ovn-controller-q9cd6-config-pgp49" Feb 17 16:22:40 crc kubenswrapper[4672]: I0217 16:22:40.007710 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ceeff567-d677-46ae-879b-4dd9f65170a5-scripts\") pod \"ovn-controller-q9cd6-config-pgp49\" (UID: \"ceeff567-d677-46ae-879b-4dd9f65170a5\") " pod="openstack/ovn-controller-q9cd6-config-pgp49" Feb 17 16:22:40 crc kubenswrapper[4672]: I0217 16:22:40.110564 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ceeff567-d677-46ae-879b-4dd9f65170a5-var-run-ovn\") pod \"ovn-controller-q9cd6-config-pgp49\" (UID: \"ceeff567-d677-46ae-879b-4dd9f65170a5\") " pod="openstack/ovn-controller-q9cd6-config-pgp49" Feb 17 16:22:40 crc kubenswrapper[4672]: I0217 
16:22:40.110926 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ceeff567-d677-46ae-879b-4dd9f65170a5-scripts\") pod \"ovn-controller-q9cd6-config-pgp49\" (UID: \"ceeff567-d677-46ae-879b-4dd9f65170a5\") " pod="openstack/ovn-controller-q9cd6-config-pgp49"
Feb 17 16:22:40 crc kubenswrapper[4672]: I0217 16:22:40.111021 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ceeff567-d677-46ae-879b-4dd9f65170a5-var-run\") pod \"ovn-controller-q9cd6-config-pgp49\" (UID: \"ceeff567-d677-46ae-879b-4dd9f65170a5\") " pod="openstack/ovn-controller-q9cd6-config-pgp49"
Feb 17 16:22:40 crc kubenswrapper[4672]: I0217 16:22:40.111038 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj882\" (UniqueName: \"kubernetes.io/projected/ceeff567-d677-46ae-879b-4dd9f65170a5-kube-api-access-vj882\") pod \"ovn-controller-q9cd6-config-pgp49\" (UID: \"ceeff567-d677-46ae-879b-4dd9f65170a5\") " pod="openstack/ovn-controller-q9cd6-config-pgp49"
Feb 17 16:22:40 crc kubenswrapper[4672]: I0217 16:22:40.111115 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ceeff567-d677-46ae-879b-4dd9f65170a5-additional-scripts\") pod \"ovn-controller-q9cd6-config-pgp49\" (UID: \"ceeff567-d677-46ae-879b-4dd9f65170a5\") " pod="openstack/ovn-controller-q9cd6-config-pgp49"
Feb 17 16:22:40 crc kubenswrapper[4672]: I0217 16:22:40.111140 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ceeff567-d677-46ae-879b-4dd9f65170a5-var-log-ovn\") pod \"ovn-controller-q9cd6-config-pgp49\" (UID: \"ceeff567-d677-46ae-879b-4dd9f65170a5\") " pod="openstack/ovn-controller-q9cd6-config-pgp49"
Feb 17 16:22:40 crc kubenswrapper[4672]: I0217 16:22:40.111386 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ceeff567-d677-46ae-879b-4dd9f65170a5-var-log-ovn\") pod \"ovn-controller-q9cd6-config-pgp49\" (UID: \"ceeff567-d677-46ae-879b-4dd9f65170a5\") " pod="openstack/ovn-controller-q9cd6-config-pgp49"
Feb 17 16:22:40 crc kubenswrapper[4672]: I0217 16:22:40.111434 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ceeff567-d677-46ae-879b-4dd9f65170a5-var-run\") pod \"ovn-controller-q9cd6-config-pgp49\" (UID: \"ceeff567-d677-46ae-879b-4dd9f65170a5\") " pod="openstack/ovn-controller-q9cd6-config-pgp49"
Feb 17 16:22:40 crc kubenswrapper[4672]: I0217 16:22:40.111690 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ceeff567-d677-46ae-879b-4dd9f65170a5-var-run-ovn\") pod \"ovn-controller-q9cd6-config-pgp49\" (UID: \"ceeff567-d677-46ae-879b-4dd9f65170a5\") " pod="openstack/ovn-controller-q9cd6-config-pgp49"
Feb 17 16:22:40 crc kubenswrapper[4672]: I0217 16:22:40.113465 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ceeff567-d677-46ae-879b-4dd9f65170a5-scripts\") pod \"ovn-controller-q9cd6-config-pgp49\" (UID: \"ceeff567-d677-46ae-879b-4dd9f65170a5\") " pod="openstack/ovn-controller-q9cd6-config-pgp49"
Feb 17 16:22:40 crc kubenswrapper[4672]: I0217 16:22:40.113528 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ceeff567-d677-46ae-879b-4dd9f65170a5-additional-scripts\") pod \"ovn-controller-q9cd6-config-pgp49\" (UID: \"ceeff567-d677-46ae-879b-4dd9f65170a5\") " pod="openstack/ovn-controller-q9cd6-config-pgp49"
Feb 17 16:22:40 crc kubenswrapper[4672]: I0217 16:22:40.154489 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj882\" (UniqueName: \"kubernetes.io/projected/ceeff567-d677-46ae-879b-4dd9f65170a5-kube-api-access-vj882\") pod \"ovn-controller-q9cd6-config-pgp49\" (UID: \"ceeff567-d677-46ae-879b-4dd9f65170a5\") " pod="openstack/ovn-controller-q9cd6-config-pgp49"
Feb 17 16:22:40 crc kubenswrapper[4672]: I0217 16:22:40.236909 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-q9cd6-config-pgp49"
Feb 17 16:22:40 crc kubenswrapper[4672]: I0217 16:22:40.599058 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-p284t"]
Feb 17 16:22:40 crc kubenswrapper[4672]: I0217 16:22:40.647616 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6f82a4ce-8da0-40f1-996a-843302449a12","Type":"ContainerStarted","Data":"fb822dcdbb1a574a6def92a5de243c074526903c73bf41042d426929de8a21a5"}
Feb 17 16:22:40 crc kubenswrapper[4672]: I0217 16:22:40.647678 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6f82a4ce-8da0-40f1-996a-843302449a12","Type":"ContainerStarted","Data":"60a7926f8f4f25cec03b48e5e2d38ea9afeadc727b6cc9516b12098a56dd33b5"}
Feb 17 16:22:40 crc kubenswrapper[4672]: I0217 16:22:40.647695 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6f82a4ce-8da0-40f1-996a-843302449a12","Type":"ContainerStarted","Data":"4e07137687447600e0d917b2ea64bf29b5edc3682997bfd7627a4963a01e4ccf"}
Feb 17 16:22:40 crc kubenswrapper[4672]: I0217 16:22:40.793824 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-q9cd6-config-pgp49"]
Feb 17 16:22:40 crc kubenswrapper[4672]: W0217 16:22:40.795878 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podceeff567_d677_46ae_879b_4dd9f65170a5.slice/crio-1d7d8e0301f611f30e3a107e920279a84b16963156fad9bfe6b286857662e25a WatchSource:0}: Error finding container 1d7d8e0301f611f30e3a107e920279a84b16963156fad9bfe6b286857662e25a: Status 404 returned error can't find the container with id 1d7d8e0301f611f30e3a107e920279a84b16963156fad9bfe6b286857662e25a
Feb 17 16:22:41 crc kubenswrapper[4672]: I0217 16:22:41.656790 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-p284t" event={"ID":"53f2f5a7-17a6-4145-8b1b-f15d7a5309ac","Type":"ContainerStarted","Data":"2b56796a92ebf963bc897e4aa91891fee0a562aff1ca6180e797b4ac37af6bff"}
Feb 17 16:22:41 crc kubenswrapper[4672]: I0217 16:22:41.665831 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6f82a4ce-8da0-40f1-996a-843302449a12","Type":"ContainerStarted","Data":"443dfafcab97ec4378c87d381ecc1612b8a87f472e8435726f2a1a83de84c6c6"}
Feb 17 16:22:41 crc kubenswrapper[4672]: I0217 16:22:41.665878 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6f82a4ce-8da0-40f1-996a-843302449a12","Type":"ContainerStarted","Data":"e6be798acf5e513dcf2c665375f806927839eb57b86b887e3cedc46525c90a8f"}
Feb 17 16:22:41 crc kubenswrapper[4672]: I0217 16:22:41.665889 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6f82a4ce-8da0-40f1-996a-843302449a12","Type":"ContainerStarted","Data":"d59fb36f352dc69e5acb8ad544184ebe7996abe4ea3866ccf84e3fd5d39164c5"}
Feb 17 16:22:41 crc kubenswrapper[4672]: I0217 16:22:41.667500 4672 generic.go:334] "Generic (PLEG): container finished" podID="ceeff567-d677-46ae-879b-4dd9f65170a5" containerID="f311e93ba52e53e04bc016546e66c38491b82ef7abef5d50d6c540b5885e99a7" exitCode=0
Feb 17 16:22:41 crc kubenswrapper[4672]: I0217 16:22:41.667564 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q9cd6-config-pgp49" event={"ID":"ceeff567-d677-46ae-879b-4dd9f65170a5","Type":"ContainerDied","Data":"f311e93ba52e53e04bc016546e66c38491b82ef7abef5d50d6c540b5885e99a7"}
Feb 17 16:22:41 crc kubenswrapper[4672]: I0217 16:22:41.667591 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q9cd6-config-pgp49" event={"ID":"ceeff567-d677-46ae-879b-4dd9f65170a5","Type":"ContainerStarted","Data":"1d7d8e0301f611f30e3a107e920279a84b16963156fad9bfe6b286857662e25a"}
Feb 17 16:22:41 crc kubenswrapper[4672]: I0217 16:22:41.703967 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=22.899936092 podStartE2EDuration="30.703951025s" podCreationTimestamp="2026-02-17 16:22:11 +0000 UTC" firstStartedPulling="2026-02-17 16:22:31.267732769 +0000 UTC m=+1160.021821501" lastFinishedPulling="2026-02-17 16:22:39.071747692 +0000 UTC m=+1167.825836434" observedRunningTime="2026-02-17 16:22:41.697681909 +0000 UTC m=+1170.451770661" watchObservedRunningTime="2026-02-17 16:22:41.703951025 +0000 UTC m=+1170.458039757"
Feb 17 16:22:41 crc kubenswrapper[4672]: I0217 16:22:41.984353 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-bg42z"]
Feb 17 16:22:41 crc kubenswrapper[4672]: I0217 16:22:41.986438 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-bg42z"
Feb 17 16:22:41 crc kubenswrapper[4672]: I0217 16:22:41.988491 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Feb 17 16:22:42 crc kubenswrapper[4672]: I0217 16:22:42.009382 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-bg42z"]
Feb 17 16:22:42 crc kubenswrapper[4672]: I0217 16:22:42.050976 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1fa5ede2-7956-4541-8edf-8a5937a9f85d-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-bg42z\" (UID: \"1fa5ede2-7956-4541-8edf-8a5937a9f85d\") " pod="openstack/dnsmasq-dns-77585f5f8c-bg42z"
Feb 17 16:22:42 crc kubenswrapper[4672]: I0217 16:22:42.051073 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fa5ede2-7956-4541-8edf-8a5937a9f85d-config\") pod \"dnsmasq-dns-77585f5f8c-bg42z\" (UID: \"1fa5ede2-7956-4541-8edf-8a5937a9f85d\") " pod="openstack/dnsmasq-dns-77585f5f8c-bg42z"
Feb 17 16:22:42 crc kubenswrapper[4672]: I0217 16:22:42.051289 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1fa5ede2-7956-4541-8edf-8a5937a9f85d-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-bg42z\" (UID: \"1fa5ede2-7956-4541-8edf-8a5937a9f85d\") " pod="openstack/dnsmasq-dns-77585f5f8c-bg42z"
Feb 17 16:22:42 crc kubenswrapper[4672]: I0217 16:22:42.051346 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1fa5ede2-7956-4541-8edf-8a5937a9f85d-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-bg42z\" (UID: \"1fa5ede2-7956-4541-8edf-8a5937a9f85d\") " pod="openstack/dnsmasq-dns-77585f5f8c-bg42z"
Feb 17 16:22:42 crc kubenswrapper[4672]: I0217 16:22:42.051429 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fa5ede2-7956-4541-8edf-8a5937a9f85d-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-bg42z\" (UID: \"1fa5ede2-7956-4541-8edf-8a5937a9f85d\") " pod="openstack/dnsmasq-dns-77585f5f8c-bg42z"
Feb 17 16:22:42 crc kubenswrapper[4672]: I0217 16:22:42.051569 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rz64\" (UniqueName: \"kubernetes.io/projected/1fa5ede2-7956-4541-8edf-8a5937a9f85d-kube-api-access-9rz64\") pod \"dnsmasq-dns-77585f5f8c-bg42z\" (UID: \"1fa5ede2-7956-4541-8edf-8a5937a9f85d\") " pod="openstack/dnsmasq-dns-77585f5f8c-bg42z"
Feb 17 16:22:42 crc kubenswrapper[4672]: I0217 16:22:42.154804 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rz64\" (UniqueName: \"kubernetes.io/projected/1fa5ede2-7956-4541-8edf-8a5937a9f85d-kube-api-access-9rz64\") pod \"dnsmasq-dns-77585f5f8c-bg42z\" (UID: \"1fa5ede2-7956-4541-8edf-8a5937a9f85d\") " pod="openstack/dnsmasq-dns-77585f5f8c-bg42z"
Feb 17 16:22:42 crc kubenswrapper[4672]: I0217 16:22:42.154872 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1fa5ede2-7956-4541-8edf-8a5937a9f85d-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-bg42z\" (UID: \"1fa5ede2-7956-4541-8edf-8a5937a9f85d\") " pod="openstack/dnsmasq-dns-77585f5f8c-bg42z"
Feb 17 16:22:42 crc kubenswrapper[4672]: I0217 16:22:42.154934 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fa5ede2-7956-4541-8edf-8a5937a9f85d-config\") pod \"dnsmasq-dns-77585f5f8c-bg42z\" (UID: \"1fa5ede2-7956-4541-8edf-8a5937a9f85d\") " pod="openstack/dnsmasq-dns-77585f5f8c-bg42z"
Feb 17 16:22:42 crc kubenswrapper[4672]: I0217 16:22:42.154994 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1fa5ede2-7956-4541-8edf-8a5937a9f85d-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-bg42z\" (UID: \"1fa5ede2-7956-4541-8edf-8a5937a9f85d\") " pod="openstack/dnsmasq-dns-77585f5f8c-bg42z"
Feb 17 16:22:42 crc kubenswrapper[4672]: I0217 16:22:42.155016 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1fa5ede2-7956-4541-8edf-8a5937a9f85d-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-bg42z\" (UID: \"1fa5ede2-7956-4541-8edf-8a5937a9f85d\") " pod="openstack/dnsmasq-dns-77585f5f8c-bg42z"
Feb 17 16:22:42 crc kubenswrapper[4672]: I0217 16:22:42.155041 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fa5ede2-7956-4541-8edf-8a5937a9f85d-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-bg42z\" (UID: \"1fa5ede2-7956-4541-8edf-8a5937a9f85d\") " pod="openstack/dnsmasq-dns-77585f5f8c-bg42z"
Feb 17 16:22:42 crc kubenswrapper[4672]: I0217 16:22:42.156580 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fa5ede2-7956-4541-8edf-8a5937a9f85d-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-bg42z\" (UID: \"1fa5ede2-7956-4541-8edf-8a5937a9f85d\") " pod="openstack/dnsmasq-dns-77585f5f8c-bg42z"
Feb 17 16:22:42 crc kubenswrapper[4672]: I0217 16:22:42.157367 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fa5ede2-7956-4541-8edf-8a5937a9f85d-config\") pod \"dnsmasq-dns-77585f5f8c-bg42z\" (UID: \"1fa5ede2-7956-4541-8edf-8a5937a9f85d\") " pod="openstack/dnsmasq-dns-77585f5f8c-bg42z"
Feb 17 16:22:42 crc kubenswrapper[4672]: I0217 16:22:42.157523 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1fa5ede2-7956-4541-8edf-8a5937a9f85d-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-bg42z\" (UID: \"1fa5ede2-7956-4541-8edf-8a5937a9f85d\") " pod="openstack/dnsmasq-dns-77585f5f8c-bg42z"
Feb 17 16:22:42 crc kubenswrapper[4672]: I0217 16:22:42.157957 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1fa5ede2-7956-4541-8edf-8a5937a9f85d-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-bg42z\" (UID: \"1fa5ede2-7956-4541-8edf-8a5937a9f85d\") " pod="openstack/dnsmasq-dns-77585f5f8c-bg42z"
Feb 17 16:22:42 crc kubenswrapper[4672]: I0217 16:22:42.158101 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1fa5ede2-7956-4541-8edf-8a5937a9f85d-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-bg42z\" (UID: \"1fa5ede2-7956-4541-8edf-8a5937a9f85d\") " pod="openstack/dnsmasq-dns-77585f5f8c-bg42z"
Feb 17 16:22:42 crc kubenswrapper[4672]: I0217 16:22:42.192219 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rz64\" (UniqueName: \"kubernetes.io/projected/1fa5ede2-7956-4541-8edf-8a5937a9f85d-kube-api-access-9rz64\") pod \"dnsmasq-dns-77585f5f8c-bg42z\" (UID: \"1fa5ede2-7956-4541-8edf-8a5937a9f85d\") " pod="openstack/dnsmasq-dns-77585f5f8c-bg42z"
Feb 17 16:22:42 crc kubenswrapper[4672]: I0217 16:22:42.307010 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-hv8dn"]
Feb 17 16:22:42 crc kubenswrapper[4672]: I0217 16:22:42.313994 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-hv8dn"]
Feb 17 16:22:42 crc kubenswrapper[4672]: I0217 16:22:42.355467 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-bg42z"
Feb 17 16:22:42 crc kubenswrapper[4672]: I0217 16:22:42.780371 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Feb 17 16:22:42 crc kubenswrapper[4672]: I0217 16:22:42.868422 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-bg42z"]
Feb 17 16:22:42 crc kubenswrapper[4672]: I0217 16:22:42.965643 4672 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="3acacae4-cbf8-43e1-a2af-3e1bf95be39b" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 17 16:22:43 crc kubenswrapper[4672]: I0217 16:22:43.066915 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-q9cd6-config-pgp49"
Feb 17 16:22:43 crc kubenswrapper[4672]: I0217 16:22:43.073170 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ceeff567-d677-46ae-879b-4dd9f65170a5-var-log-ovn\") pod \"ceeff567-d677-46ae-879b-4dd9f65170a5\" (UID: \"ceeff567-d677-46ae-879b-4dd9f65170a5\") "
Feb 17 16:22:43 crc kubenswrapper[4672]: I0217 16:22:43.073280 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ceeff567-d677-46ae-879b-4dd9f65170a5-scripts\") pod \"ceeff567-d677-46ae-879b-4dd9f65170a5\" (UID: \"ceeff567-d677-46ae-879b-4dd9f65170a5\") "
Feb 17 16:22:43 crc kubenswrapper[4672]: I0217 16:22:43.073426 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ceeff567-d677-46ae-879b-4dd9f65170a5-var-run\") pod \"ceeff567-d677-46ae-879b-4dd9f65170a5\" (UID: \"ceeff567-d677-46ae-879b-4dd9f65170a5\") "
Feb 17 16:22:43 crc kubenswrapper[4672]: I0217 16:22:43.073470 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vj882\" (UniqueName: \"kubernetes.io/projected/ceeff567-d677-46ae-879b-4dd9f65170a5-kube-api-access-vj882\") pod \"ceeff567-d677-46ae-879b-4dd9f65170a5\" (UID: \"ceeff567-d677-46ae-879b-4dd9f65170a5\") "
Feb 17 16:22:43 crc kubenswrapper[4672]: I0217 16:22:43.073505 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ceeff567-d677-46ae-879b-4dd9f65170a5-var-run-ovn\") pod \"ceeff567-d677-46ae-879b-4dd9f65170a5\" (UID: \"ceeff567-d677-46ae-879b-4dd9f65170a5\") "
Feb 17 16:22:43 crc kubenswrapper[4672]: I0217 16:22:43.073538 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ceeff567-d677-46ae-879b-4dd9f65170a5-additional-scripts\") pod \"ceeff567-d677-46ae-879b-4dd9f65170a5\" (UID: \"ceeff567-d677-46ae-879b-4dd9f65170a5\") "
Feb 17 16:22:43 crc kubenswrapper[4672]: I0217 16:22:43.073632 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ceeff567-d677-46ae-879b-4dd9f65170a5-var-run" (OuterVolumeSpecName: "var-run") pod "ceeff567-d677-46ae-879b-4dd9f65170a5" (UID: "ceeff567-d677-46ae-879b-4dd9f65170a5"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 16:22:43 crc kubenswrapper[4672]: I0217 16:22:43.073685 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ceeff567-d677-46ae-879b-4dd9f65170a5-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "ceeff567-d677-46ae-879b-4dd9f65170a5" (UID: "ceeff567-d677-46ae-879b-4dd9f65170a5"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 16:22:43 crc kubenswrapper[4672]: I0217 16:22:43.073931 4672 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ceeff567-d677-46ae-879b-4dd9f65170a5-var-run\") on node \"crc\" DevicePath \"\""
Feb 17 16:22:43 crc kubenswrapper[4672]: I0217 16:22:43.073944 4672 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ceeff567-d677-46ae-879b-4dd9f65170a5-var-log-ovn\") on node \"crc\" DevicePath \"\""
Feb 17 16:22:43 crc kubenswrapper[4672]: I0217 16:22:43.074492 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceeff567-d677-46ae-879b-4dd9f65170a5-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "ceeff567-d677-46ae-879b-4dd9f65170a5" (UID: "ceeff567-d677-46ae-879b-4dd9f65170a5"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:22:43 crc kubenswrapper[4672]: I0217 16:22:43.074572 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ceeff567-d677-46ae-879b-4dd9f65170a5-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "ceeff567-d677-46ae-879b-4dd9f65170a5" (UID: "ceeff567-d677-46ae-879b-4dd9f65170a5"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 16:22:43 crc kubenswrapper[4672]: I0217 16:22:43.074696 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceeff567-d677-46ae-879b-4dd9f65170a5-scripts" (OuterVolumeSpecName: "scripts") pod "ceeff567-d677-46ae-879b-4dd9f65170a5" (UID: "ceeff567-d677-46ae-879b-4dd9f65170a5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:22:43 crc kubenswrapper[4672]: I0217 16:22:43.080232 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceeff567-d677-46ae-879b-4dd9f65170a5-kube-api-access-vj882" (OuterVolumeSpecName: "kube-api-access-vj882") pod "ceeff567-d677-46ae-879b-4dd9f65170a5" (UID: "ceeff567-d677-46ae-879b-4dd9f65170a5"). InnerVolumeSpecName "kube-api-access-vj882". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:22:43 crc kubenswrapper[4672]: I0217 16:22:43.176565 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vj882\" (UniqueName: \"kubernetes.io/projected/ceeff567-d677-46ae-879b-4dd9f65170a5-kube-api-access-vj882\") on node \"crc\" DevicePath \"\""
Feb 17 16:22:43 crc kubenswrapper[4672]: I0217 16:22:43.176595 4672 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ceeff567-d677-46ae-879b-4dd9f65170a5-var-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 17 16:22:43 crc kubenswrapper[4672]: I0217 16:22:43.176605 4672 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ceeff567-d677-46ae-879b-4dd9f65170a5-additional-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:22:43 crc kubenswrapper[4672]: I0217 16:22:43.176614 4672 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ceeff567-d677-46ae-879b-4dd9f65170a5-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:22:43 crc kubenswrapper[4672]: I0217 16:22:43.697107 4672 generic.go:334] "Generic (PLEG): container finished" podID="1fa5ede2-7956-4541-8edf-8a5937a9f85d" containerID="54c66b4ed13aaab76473befc15bbe8f6fa7e1a3d19ec2c3f9d49f2c0ccf90fea" exitCode=0
Feb 17 16:22:43 crc kubenswrapper[4672]: I0217 16:22:43.697166 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-bg42z" event={"ID":"1fa5ede2-7956-4541-8edf-8a5937a9f85d","Type":"ContainerDied","Data":"54c66b4ed13aaab76473befc15bbe8f6fa7e1a3d19ec2c3f9d49f2c0ccf90fea"}
Feb 17 16:22:43 crc kubenswrapper[4672]: I0217 16:22:43.697190 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-bg42z" event={"ID":"1fa5ede2-7956-4541-8edf-8a5937a9f85d","Type":"ContainerStarted","Data":"69ae77bc84eccd51dce2bb1706574a47833007000898225471ef079f7379fd8a"}
Feb 17 16:22:43 crc kubenswrapper[4672]: I0217 16:22:43.702896 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-q9cd6-config-pgp49" event={"ID":"ceeff567-d677-46ae-879b-4dd9f65170a5","Type":"ContainerDied","Data":"1d7d8e0301f611f30e3a107e920279a84b16963156fad9bfe6b286857662e25a"}
Feb 17 16:22:43 crc kubenswrapper[4672]: I0217 16:22:43.702955 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d7d8e0301f611f30e3a107e920279a84b16963156fad9bfe6b286857662e25a"
Feb 17 16:22:43 crc kubenswrapper[4672]: I0217 16:22:43.703037 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-q9cd6-config-pgp49"
Feb 17 16:22:43 crc kubenswrapper[4672]: I0217 16:22:43.973920 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d81e8815-c66e-4498-a714-6a1176f7bf1a" path="/var/lib/kubelet/pods/d81e8815-c66e-4498-a714-6a1176f7bf1a/volumes"
Feb 17 16:22:44 crc kubenswrapper[4672]: I0217 16:22:44.135176 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-q9cd6-config-pgp49"]
Feb 17 16:22:44 crc kubenswrapper[4672]: I0217 16:22:44.145774 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-q9cd6-config-pgp49"]
Feb 17 16:22:44 crc kubenswrapper[4672]: I0217 16:22:44.699648 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-q9cd6"
Feb 17 16:22:44 crc kubenswrapper[4672]: I0217 16:22:44.724383 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-bg42z" event={"ID":"1fa5ede2-7956-4541-8edf-8a5937a9f85d","Type":"ContainerStarted","Data":"815fba74963cb8511025b006d9ec173b0c5ae8f7fc16d19bf0ca00ce401d5f1e"}
Feb 17 16:22:44 crc kubenswrapper[4672]: I0217 16:22:44.724769 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77585f5f8c-bg42z"
Feb 17 16:22:44 crc kubenswrapper[4672]: I0217 16:22:44.754130 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77585f5f8c-bg42z" podStartSLOduration=3.754110292 podStartE2EDuration="3.754110292s" podCreationTimestamp="2026-02-17 16:22:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:22:44.747533348 +0000 UTC m=+1173.501622080" watchObservedRunningTime="2026-02-17 16:22:44.754110292 +0000 UTC m=+1173.508199024"
Feb 17 16:22:45 crc kubenswrapper[4672]: I0217 16:22:45.958631 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ceeff567-d677-46ae-879b-4dd9f65170a5" path="/var/lib/kubelet/pods/ceeff567-d677-46ae-879b-4dd9f65170a5/volumes"
Feb 17 16:22:46 crc kubenswrapper[4672]: I0217 16:22:46.082842 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:22:47 crc kubenswrapper[4672]: I0217 16:22:47.327897 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-bjfpp"]
Feb 17 16:22:47 crc kubenswrapper[4672]: E0217 16:22:47.328852 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceeff567-d677-46ae-879b-4dd9f65170a5" containerName="ovn-config"
Feb 17 16:22:47 crc kubenswrapper[4672]: I0217 16:22:47.328875 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceeff567-d677-46ae-879b-4dd9f65170a5" containerName="ovn-config"
Feb 17 16:22:47 crc kubenswrapper[4672]: I0217 16:22:47.329113 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceeff567-d677-46ae-879b-4dd9f65170a5" containerName="ovn-config"
Feb 17 16:22:47 crc kubenswrapper[4672]: I0217 16:22:47.329890 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-bjfpp"
Feb 17 16:22:47 crc kubenswrapper[4672]: I0217 16:22:47.332180 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Feb 17 16:22:47 crc kubenswrapper[4672]: I0217 16:22:47.356289 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-bjfpp"]
Feb 17 16:22:47 crc kubenswrapper[4672]: I0217 16:22:47.371825 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7508897a-e56b-444c-87c2-9d1cbc41170f-operator-scripts\") pod \"root-account-create-update-bjfpp\" (UID: \"7508897a-e56b-444c-87c2-9d1cbc41170f\") " pod="openstack/root-account-create-update-bjfpp"
Feb 17 16:22:47 crc kubenswrapper[4672]: I0217 16:22:47.371892 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7vd5\" (UniqueName: \"kubernetes.io/projected/7508897a-e56b-444c-87c2-9d1cbc41170f-kube-api-access-z7vd5\") pod \"root-account-create-update-bjfpp\" (UID: \"7508897a-e56b-444c-87c2-9d1cbc41170f\") " pod="openstack/root-account-create-update-bjfpp"
Feb 17 16:22:47 crc kubenswrapper[4672]: I0217 16:22:47.474098 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7508897a-e56b-444c-87c2-9d1cbc41170f-operator-scripts\") pod \"root-account-create-update-bjfpp\" (UID: \"7508897a-e56b-444c-87c2-9d1cbc41170f\") " pod="openstack/root-account-create-update-bjfpp"
Feb 17 16:22:47 crc kubenswrapper[4672]: I0217 16:22:47.474142 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7vd5\" (UniqueName: \"kubernetes.io/projected/7508897a-e56b-444c-87c2-9d1cbc41170f-kube-api-access-z7vd5\") pod \"root-account-create-update-bjfpp\" (UID: \"7508897a-e56b-444c-87c2-9d1cbc41170f\") " pod="openstack/root-account-create-update-bjfpp"
Feb 17 16:22:47 crc kubenswrapper[4672]: I0217 16:22:47.474943 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7508897a-e56b-444c-87c2-9d1cbc41170f-operator-scripts\") pod \"root-account-create-update-bjfpp\" (UID: \"7508897a-e56b-444c-87c2-9d1cbc41170f\") " pod="openstack/root-account-create-update-bjfpp"
Feb 17 16:22:47 crc kubenswrapper[4672]: I0217 16:22:47.492706 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7vd5\" (UniqueName: \"kubernetes.io/projected/7508897a-e56b-444c-87c2-9d1cbc41170f-kube-api-access-z7vd5\") pod \"root-account-create-update-bjfpp\" (UID: \"7508897a-e56b-444c-87c2-9d1cbc41170f\") " pod="openstack/root-account-create-update-bjfpp"
Feb 17 16:22:47 crc kubenswrapper[4672]: I0217 16:22:47.656908 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-bjfpp"
Feb 17 16:22:52 crc kubenswrapper[4672]: I0217 16:22:52.356648 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77585f5f8c-bg42z"
Feb 17 16:22:52 crc kubenswrapper[4672]: I0217 16:22:52.427578 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-fm49z"]
Feb 17 16:22:52 crc kubenswrapper[4672]: I0217 16:22:52.427806 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-fm49z" podUID="4275129d-2c75-4df7-9c23-0f883ec0733d" containerName="dnsmasq-dns" containerID="cri-o://50c1827027995515911b7f4972a027a7056d2a7ca9ad9533f04cae9f334a0ba6" gracePeriod=10
Feb 17 16:22:52 crc kubenswrapper[4672]: I0217 16:22:52.780999 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Feb 17 16:22:52 crc kubenswrapper[4672]: I0217 16:22:52.789293 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Feb 17 16:22:52 crc kubenswrapper[4672]: I0217 16:22:52.835263 4672 generic.go:334] "Generic (PLEG): container finished" podID="4275129d-2c75-4df7-9c23-0f883ec0733d" containerID="50c1827027995515911b7f4972a027a7056d2a7ca9ad9533f04cae9f334a0ba6" exitCode=0
Feb 17 16:22:52 crc kubenswrapper[4672]: I0217 16:22:52.836434 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-fm49z" event={"ID":"4275129d-2c75-4df7-9c23-0f883ec0733d","Type":"ContainerDied","Data":"50c1827027995515911b7f4972a027a7056d2a7ca9ad9533f04cae9f334a0ba6"}
Feb 17 16:22:52 crc kubenswrapper[4672]: I0217 16:22:52.838932 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Feb 17 16:22:52 crc kubenswrapper[4672]: I0217 16:22:52.963932 4672 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="3acacae4-cbf8-43e1-a2af-3e1bf95be39b" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 17 16:22:55 crc kubenswrapper[4672]: I0217 16:22:55.318930 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 17 16:22:55 crc kubenswrapper[4672]: I0217 16:22:55.319809 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="878cc257-0a03-44ea-ae70-356195dc5427" containerName="thanos-sidecar" containerID="cri-o://ded6c95f43c9bd5001fe183b6701f3002c5ee4fc88d8a915bbc729bc99f91823" gracePeriod=600
Feb 17 16:22:55 crc kubenswrapper[4672]: I0217 16:22:55.319816 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="878cc257-0a03-44ea-ae70-356195dc5427" containerName="config-reloader" containerID="cri-o://d211f7c0a6efc1d39c5a671b547c6d127b5eec7be3d091240b64d2494b576ee4" gracePeriod=600
Feb 17 16:22:55 crc kubenswrapper[4672]: I0217 16:22:55.320055 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="878cc257-0a03-44ea-ae70-356195dc5427" containerName="prometheus" containerID="cri-o://81b9ff72fcf90aafc0a053c5d22382d36d35290cdf95ef286a4d0cc62e6eaff3" gracePeriod=600
Feb 17 16:22:55 crc kubenswrapper[4672]: I0217 16:22:55.841124 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-bjfpp"]
Feb 17 16:22:55 crc kubenswrapper[4672]: I0217 16:22:55.844678 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-fm49z"
Feb 17 16:22:55 crc kubenswrapper[4672]: W0217 16:22:55.851350 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7508897a_e56b_444c_87c2_9d1cbc41170f.slice/crio-b84f2d2164f5fc0acee385c6d5c9b293464f28acd4dfaaaef33f081ed6fe77db WatchSource:0}: Error finding container b84f2d2164f5fc0acee385c6d5c9b293464f28acd4dfaaaef33f081ed6fe77db: Status 404 returned error can't find the container with id b84f2d2164f5fc0acee385c6d5c9b293464f28acd4dfaaaef33f081ed6fe77db
Feb 17 16:22:55 crc kubenswrapper[4672]: I0217 16:22:55.912533 4672 generic.go:334] "Generic (PLEG): container finished" podID="878cc257-0a03-44ea-ae70-356195dc5427" containerID="ded6c95f43c9bd5001fe183b6701f3002c5ee4fc88d8a915bbc729bc99f91823" exitCode=0
Feb 17 16:22:55 crc kubenswrapper[4672]: I0217 16:22:55.912820 4672 generic.go:334] "Generic (PLEG): container finished" podID="878cc257-0a03-44ea-ae70-356195dc5427" containerID="d211f7c0a6efc1d39c5a671b547c6d127b5eec7be3d091240b64d2494b576ee4" exitCode=0
Feb 17 16:22:55 crc kubenswrapper[4672]: I0217 16:22:55.912832 4672 generic.go:334] "Generic (PLEG): container finished" podID="878cc257-0a03-44ea-ae70-356195dc5427" containerID="81b9ff72fcf90aafc0a053c5d22382d36d35290cdf95ef286a4d0cc62e6eaff3" exitCode=0
Feb 17 16:22:55 crc kubenswrapper[4672]: I0217 16:22:55.912592 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"878cc257-0a03-44ea-ae70-356195dc5427","Type":"ContainerDied","Data":"ded6c95f43c9bd5001fe183b6701f3002c5ee4fc88d8a915bbc729bc99f91823"}
Feb 17 16:22:55 crc kubenswrapper[4672]: I0217 16:22:55.912909 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0"
event={"ID":"878cc257-0a03-44ea-ae70-356195dc5427","Type":"ContainerDied","Data":"d211f7c0a6efc1d39c5a671b547c6d127b5eec7be3d091240b64d2494b576ee4"} Feb 17 16:22:55 crc kubenswrapper[4672]: I0217 16:22:55.912927 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"878cc257-0a03-44ea-ae70-356195dc5427","Type":"ContainerDied","Data":"81b9ff72fcf90aafc0a053c5d22382d36d35290cdf95ef286a4d0cc62e6eaff3"} Feb 17 16:22:55 crc kubenswrapper[4672]: I0217 16:22:55.917879 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-p284t" event={"ID":"53f2f5a7-17a6-4145-8b1b-f15d7a5309ac","Type":"ContainerStarted","Data":"6829a2817ffaca776f8c96dbc0fbf6f639a72fe83e59f4e17909f920b30ced24"} Feb 17 16:22:55 crc kubenswrapper[4672]: I0217 16:22:55.921791 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-fm49z" event={"ID":"4275129d-2c75-4df7-9c23-0f883ec0733d","Type":"ContainerDied","Data":"570be24cf13175e7cafb45b9ae7a46920e2c5583568e0288692683822b53f026"} Feb 17 16:22:55 crc kubenswrapper[4672]: I0217 16:22:55.921839 4672 scope.go:117] "RemoveContainer" containerID="50c1827027995515911b7f4972a027a7056d2a7ca9ad9533f04cae9f334a0ba6" Feb 17 16:22:55 crc kubenswrapper[4672]: I0217 16:22:55.921939 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-fm49z" Feb 17 16:22:55 crc kubenswrapper[4672]: I0217 16:22:55.924733 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bjfpp" event={"ID":"7508897a-e56b-444c-87c2-9d1cbc41170f","Type":"ContainerStarted","Data":"b84f2d2164f5fc0acee385c6d5c9b293464f28acd4dfaaaef33f081ed6fe77db"} Feb 17 16:22:55 crc kubenswrapper[4672]: I0217 16:22:55.958898 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-p284t" podStartSLOduration=2.772250668 podStartE2EDuration="16.958875598s" podCreationTimestamp="2026-02-17 16:22:39 +0000 UTC" firstStartedPulling="2026-02-17 16:22:40.610975349 +0000 UTC m=+1169.365064081" lastFinishedPulling="2026-02-17 16:22:54.797600289 +0000 UTC m=+1183.551689011" observedRunningTime="2026-02-17 16:22:55.940856723 +0000 UTC m=+1184.694945455" watchObservedRunningTime="2026-02-17 16:22:55.958875598 +0000 UTC m=+1184.712964330" Feb 17 16:22:55 crc kubenswrapper[4672]: I0217 16:22:55.974103 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4275129d-2c75-4df7-9c23-0f883ec0733d-ovsdbserver-nb\") pod \"4275129d-2c75-4df7-9c23-0f883ec0733d\" (UID: \"4275129d-2c75-4df7-9c23-0f883ec0733d\") " Feb 17 16:22:55 crc kubenswrapper[4672]: I0217 16:22:55.974173 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4275129d-2c75-4df7-9c23-0f883ec0733d-dns-svc\") pod \"4275129d-2c75-4df7-9c23-0f883ec0733d\" (UID: \"4275129d-2c75-4df7-9c23-0f883ec0733d\") " Feb 17 16:22:55 crc kubenswrapper[4672]: I0217 16:22:55.974275 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4275129d-2c75-4df7-9c23-0f883ec0733d-ovsdbserver-sb\") pod 
\"4275129d-2c75-4df7-9c23-0f883ec0733d\" (UID: \"4275129d-2c75-4df7-9c23-0f883ec0733d\") " Feb 17 16:22:55 crc kubenswrapper[4672]: I0217 16:22:55.974345 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqdpq\" (UniqueName: \"kubernetes.io/projected/4275129d-2c75-4df7-9c23-0f883ec0733d-kube-api-access-fqdpq\") pod \"4275129d-2c75-4df7-9c23-0f883ec0733d\" (UID: \"4275129d-2c75-4df7-9c23-0f883ec0733d\") " Feb 17 16:22:55 crc kubenswrapper[4672]: I0217 16:22:55.974372 4672 scope.go:117] "RemoveContainer" containerID="13822a9dd6628ab3db382a9e899e92c6fa6c05d9c3562b7f3bb2b6f5b34b85bf" Feb 17 16:22:55 crc kubenswrapper[4672]: I0217 16:22:55.974483 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4275129d-2c75-4df7-9c23-0f883ec0733d-config\") pod \"4275129d-2c75-4df7-9c23-0f883ec0733d\" (UID: \"4275129d-2c75-4df7-9c23-0f883ec0733d\") " Feb 17 16:22:55 crc kubenswrapper[4672]: I0217 16:22:55.987028 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4275129d-2c75-4df7-9c23-0f883ec0733d-kube-api-access-fqdpq" (OuterVolumeSpecName: "kube-api-access-fqdpq") pod "4275129d-2c75-4df7-9c23-0f883ec0733d" (UID: "4275129d-2c75-4df7-9c23-0f883ec0733d"). InnerVolumeSpecName "kube-api-access-fqdpq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.004704 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.080088 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqdpq\" (UniqueName: \"kubernetes.io/projected/4275129d-2c75-4df7-9c23-0f883ec0733d-kube-api-access-fqdpq\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.156313 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4275129d-2c75-4df7-9c23-0f883ec0733d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4275129d-2c75-4df7-9c23-0f883ec0733d" (UID: "4275129d-2c75-4df7-9c23-0f883ec0733d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.177331 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4275129d-2c75-4df7-9c23-0f883ec0733d-config" (OuterVolumeSpecName: "config") pod "4275129d-2c75-4df7-9c23-0f883ec0733d" (UID: "4275129d-2c75-4df7-9c23-0f883ec0733d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.188119 4672 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4275129d-2c75-4df7-9c23-0f883ec0733d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.188148 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4275129d-2c75-4df7-9c23-0f883ec0733d-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.191186 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4275129d-2c75-4df7-9c23-0f883ec0733d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4275129d-2c75-4df7-9c23-0f883ec0733d" (UID: "4275129d-2c75-4df7-9c23-0f883ec0733d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.300047 4672 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4275129d-2c75-4df7-9c23-0f883ec0733d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.303172 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4275129d-2c75-4df7-9c23-0f883ec0733d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4275129d-2c75-4df7-9c23-0f883ec0733d" (UID: "4275129d-2c75-4df7-9c23-0f883ec0733d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.365497 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-dgcpz"] Feb 17 16:22:56 crc kubenswrapper[4672]: E0217 16:22:56.365858 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4275129d-2c75-4df7-9c23-0f883ec0733d" containerName="init" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.365876 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="4275129d-2c75-4df7-9c23-0f883ec0733d" containerName="init" Feb 17 16:22:56 crc kubenswrapper[4672]: E0217 16:22:56.365904 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4275129d-2c75-4df7-9c23-0f883ec0733d" containerName="dnsmasq-dns" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.365911 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="4275129d-2c75-4df7-9c23-0f883ec0733d" containerName="dnsmasq-dns" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.366080 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="4275129d-2c75-4df7-9c23-0f883ec0733d" containerName="dnsmasq-dns" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.372033 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-dgcpz" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.389892 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-dgcpz"] Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.395782 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.403270 4672 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4275129d-2c75-4df7-9c23-0f883ec0733d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.459462 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-create-rxdz9"] Feb 17 16:22:56 crc kubenswrapper[4672]: E0217 16:22:56.473931 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="878cc257-0a03-44ea-ae70-356195dc5427" containerName="thanos-sidecar" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.473967 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="878cc257-0a03-44ea-ae70-356195dc5427" containerName="thanos-sidecar" Feb 17 16:22:56 crc kubenswrapper[4672]: E0217 16:22:56.473984 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="878cc257-0a03-44ea-ae70-356195dc5427" containerName="init-config-reloader" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.473990 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="878cc257-0a03-44ea-ae70-356195dc5427" containerName="init-config-reloader" Feb 17 16:22:56 crc kubenswrapper[4672]: E0217 16:22:56.473999 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="878cc257-0a03-44ea-ae70-356195dc5427" containerName="prometheus" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.474005 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="878cc257-0a03-44ea-ae70-356195dc5427" containerName="prometheus" Feb 17 16:22:56 crc kubenswrapper[4672]: E0217 16:22:56.474020 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="878cc257-0a03-44ea-ae70-356195dc5427" containerName="config-reloader" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.474026 4672 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="878cc257-0a03-44ea-ae70-356195dc5427" containerName="config-reloader" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.474200 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="878cc257-0a03-44ea-ae70-356195dc5427" containerName="prometheus" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.474209 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="878cc257-0a03-44ea-ae70-356195dc5427" containerName="thanos-sidecar" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.474218 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="878cc257-0a03-44ea-ae70-356195dc5427" containerName="config-reloader" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.474835 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-rxdz9" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.495080 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-ba30-account-create-update-mkpmg"] Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.496400 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ba30-account-create-update-mkpmg" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.497973 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.506100 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgzb2\" (UniqueName: \"kubernetes.io/projected/878cc257-0a03-44ea-ae70-356195dc5427-kube-api-access-kgzb2\") pod \"878cc257-0a03-44ea-ae70-356195dc5427\" (UID: \"878cc257-0a03-44ea-ae70-356195dc5427\") " Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.506748 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/878cc257-0a03-44ea-ae70-356195dc5427-prometheus-metric-storage-rulefiles-1\") pod \"878cc257-0a03-44ea-ae70-356195dc5427\" (UID: \"878cc257-0a03-44ea-ae70-356195dc5427\") " Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.506942 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/878cc257-0a03-44ea-ae70-356195dc5427-web-config\") pod \"878cc257-0a03-44ea-ae70-356195dc5427\" (UID: \"878cc257-0a03-44ea-ae70-356195dc5427\") " Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.507039 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/878cc257-0a03-44ea-ae70-356195dc5427-prometheus-metric-storage-rulefiles-0\") pod \"878cc257-0a03-44ea-ae70-356195dc5427\" (UID: \"878cc257-0a03-44ea-ae70-356195dc5427\") " Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.507136 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/878cc257-0a03-44ea-ae70-356195dc5427-prometheus-metric-storage-rulefiles-2\") pod \"878cc257-0a03-44ea-ae70-356195dc5427\" (UID: \"878cc257-0a03-44ea-ae70-356195dc5427\") " Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.507217 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/878cc257-0a03-44ea-ae70-356195dc5427-thanos-prometheus-http-client-file\") pod \"878cc257-0a03-44ea-ae70-356195dc5427\" (UID: \"878cc257-0a03-44ea-ae70-356195dc5427\") " Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.507299 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/878cc257-0a03-44ea-ae70-356195dc5427-config\") pod \"878cc257-0a03-44ea-ae70-356195dc5427\" (UID: \"878cc257-0a03-44ea-ae70-356195dc5427\") " Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.507408 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/878cc257-0a03-44ea-ae70-356195dc5427-config-out\") pod \"878cc257-0a03-44ea-ae70-356195dc5427\" (UID: \"878cc257-0a03-44ea-ae70-356195dc5427\") " Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.507574 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d40fcb6-031f-4b02-8b6e-a6b32aaabc38\") pod \"878cc257-0a03-44ea-ae70-356195dc5427\" (UID: \"878cc257-0a03-44ea-ae70-356195dc5427\") " Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.507680 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/878cc257-0a03-44ea-ae70-356195dc5427-tls-assets\") pod \"878cc257-0a03-44ea-ae70-356195dc5427\" (UID: \"878cc257-0a03-44ea-ae70-356195dc5427\") 
" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.508044 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06f2f111-8f19-433e-bb63-b57167c82e19-operator-scripts\") pod \"cinder-db-create-dgcpz\" (UID: \"06f2f111-8f19-433e-bb63-b57167c82e19\") " pod="openstack/cinder-db-create-dgcpz" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.508174 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxx7w\" (UniqueName: \"kubernetes.io/projected/06f2f111-8f19-433e-bb63-b57167c82e19-kube-api-access-wxx7w\") pod \"cinder-db-create-dgcpz\" (UID: \"06f2f111-8f19-433e-bb63-b57167c82e19\") " pod="openstack/cinder-db-create-dgcpz" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.510293 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/878cc257-0a03-44ea-ae70-356195dc5427-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "878cc257-0a03-44ea-ae70-356195dc5427" (UID: "878cc257-0a03-44ea-ae70-356195dc5427"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.510705 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/878cc257-0a03-44ea-ae70-356195dc5427-kube-api-access-kgzb2" (OuterVolumeSpecName: "kube-api-access-kgzb2") pod "878cc257-0a03-44ea-ae70-356195dc5427" (UID: "878cc257-0a03-44ea-ae70-356195dc5427"). InnerVolumeSpecName "kube-api-access-kgzb2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.511259 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/878cc257-0a03-44ea-ae70-356195dc5427-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "878cc257-0a03-44ea-ae70-356195dc5427" (UID: "878cc257-0a03-44ea-ae70-356195dc5427"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.511592 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/878cc257-0a03-44ea-ae70-356195dc5427-config-out" (OuterVolumeSpecName: "config-out") pod "878cc257-0a03-44ea-ae70-356195dc5427" (UID: "878cc257-0a03-44ea-ae70-356195dc5427"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.511888 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/878cc257-0a03-44ea-ae70-356195dc5427-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "878cc257-0a03-44ea-ae70-356195dc5427" (UID: "878cc257-0a03-44ea-ae70-356195dc5427"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.512338 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-rxdz9"] Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.513819 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/878cc257-0a03-44ea-ae70-356195dc5427-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "878cc257-0a03-44ea-ae70-356195dc5427" (UID: "878cc257-0a03-44ea-ae70-356195dc5427"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.515519 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/878cc257-0a03-44ea-ae70-356195dc5427-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "878cc257-0a03-44ea-ae70-356195dc5427" (UID: "878cc257-0a03-44ea-ae70-356195dc5427"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.516638 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/878cc257-0a03-44ea-ae70-356195dc5427-config" (OuterVolumeSpecName: "config") pod "878cc257-0a03-44ea-ae70-356195dc5427" (UID: "878cc257-0a03-44ea-ae70-356195dc5427"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.549102 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ba30-account-create-update-mkpmg"] Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.562652 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/878cc257-0a03-44ea-ae70-356195dc5427-web-config" (OuterVolumeSpecName: "web-config") pod "878cc257-0a03-44ea-ae70-356195dc5427" (UID: "878cc257-0a03-44ea-ae70-356195dc5427"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.591008 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-1ccd-account-create-update-s59r2"] Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.592289 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-1ccd-account-create-update-s59r2" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.620151 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d40fcb6-031f-4b02-8b6e-a6b32aaabc38" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "878cc257-0a03-44ea-ae70-356195dc5427" (UID: "878cc257-0a03-44ea-ae70-356195dc5427"). InnerVolumeSpecName "pvc-2d40fcb6-031f-4b02-8b6e-a6b32aaabc38". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.633216 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxx7w\" (UniqueName: \"kubernetes.io/projected/06f2f111-8f19-433e-bb63-b57167c82e19-kube-api-access-wxx7w\") pod \"cinder-db-create-dgcpz\" (UID: \"06f2f111-8f19-433e-bb63-b57167c82e19\") " pod="openstack/cinder-db-create-dgcpz" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.633325 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab366b61-1428-4608-8ac3-2bb8063e88f2-operator-scripts\") pod \"cinder-ba30-account-create-update-mkpmg\" (UID: \"ab366b61-1428-4608-8ac3-2bb8063e88f2\") " pod="openstack/cinder-ba30-account-create-update-mkpmg" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.633399 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aaf29db5-1c89-453b-8632-65a429e68374-operator-scripts\") pod \"cloudkitty-db-create-rxdz9\" (UID: \"aaf29db5-1c89-453b-8632-65a429e68374\") " pod="openstack/cloudkitty-db-create-rxdz9" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.633433 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbmms\" (UniqueName: \"kubernetes.io/projected/ab366b61-1428-4608-8ac3-2bb8063e88f2-kube-api-access-gbmms\") pod \"cinder-ba30-account-create-update-mkpmg\" (UID: \"ab366b61-1428-4608-8ac3-2bb8063e88f2\") " pod="openstack/cinder-ba30-account-create-update-mkpmg" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.633455 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r572\" (UniqueName: 
\"kubernetes.io/projected/aaf29db5-1c89-453b-8632-65a429e68374-kube-api-access-9r572\") pod \"cloudkitty-db-create-rxdz9\" (UID: \"aaf29db5-1c89-453b-8632-65a429e68374\") " pod="openstack/cloudkitty-db-create-rxdz9" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.633564 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06f2f111-8f19-433e-bb63-b57167c82e19-operator-scripts\") pod \"cinder-db-create-dgcpz\" (UID: \"06f2f111-8f19-433e-bb63-b57167c82e19\") " pod="openstack/cinder-db-create-dgcpz" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.634234 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06f2f111-8f19-433e-bb63-b57167c82e19-operator-scripts\") pod \"cinder-db-create-dgcpz\" (UID: \"06f2f111-8f19-433e-bb63-b57167c82e19\") " pod="openstack/cinder-db-create-dgcpz" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.634540 4672 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/878cc257-0a03-44ea-ae70-356195dc5427-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.634564 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgzb2\" (UniqueName: \"kubernetes.io/projected/878cc257-0a03-44ea-ae70-356195dc5427-kube-api-access-kgzb2\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.634575 4672 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/878cc257-0a03-44ea-ae70-356195dc5427-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.634586 4672 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/878cc257-0a03-44ea-ae70-356195dc5427-web-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.634595 4672 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/878cc257-0a03-44ea-ae70-356195dc5427-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.634608 4672 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/878cc257-0a03-44ea-ae70-356195dc5427-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.634618 4672 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/878cc257-0a03-44ea-ae70-356195dc5427-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.634627 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/878cc257-0a03-44ea-ae70-356195dc5427-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.634635 4672 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/878cc257-0a03-44ea-ae70-356195dc5427-config-out\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.634658 4672 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2d40fcb6-031f-4b02-8b6e-a6b32aaabc38\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d40fcb6-031f-4b02-8b6e-a6b32aaabc38\") on node \"crc\" " Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.637623 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-1ccd-account-create-update-s59r2"] Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.662037 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.683423 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxx7w\" (UniqueName: \"kubernetes.io/projected/06f2f111-8f19-433e-bb63-b57167c82e19-kube-api-access-wxx7w\") pod \"cinder-db-create-dgcpz\" (UID: \"06f2f111-8f19-433e-bb63-b57167c82e19\") " pod="openstack/cinder-db-create-dgcpz" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.705642 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-fm49z"] Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.715117 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-fm49z"] Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.721925 4672 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.722159 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-dgcpz" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.722255 4672 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2d40fcb6-031f-4b02-8b6e-a6b32aaabc38" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d40fcb6-031f-4b02-8b6e-a6b32aaabc38") on node "crc" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.734074 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-xv2td"] Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.735462 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-xv2td" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.735845 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab366b61-1428-4608-8ac3-2bb8063e88f2-operator-scripts\") pod \"cinder-ba30-account-create-update-mkpmg\" (UID: \"ab366b61-1428-4608-8ac3-2bb8063e88f2\") " pod="openstack/cinder-ba30-account-create-update-mkpmg" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.735915 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aad1e993-49a2-4984-9bf0-11c2a4190fd3-operator-scripts\") pod \"barbican-1ccd-account-create-update-s59r2\" (UID: \"aad1e993-49a2-4984-9bf0-11c2a4190fd3\") " pod="openstack/barbican-1ccd-account-create-update-s59r2" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.735941 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aaf29db5-1c89-453b-8632-65a429e68374-operator-scripts\") pod \"cloudkitty-db-create-rxdz9\" (UID: \"aaf29db5-1c89-453b-8632-65a429e68374\") " pod="openstack/cloudkitty-db-create-rxdz9" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.735965 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbmms\" (UniqueName: \"kubernetes.io/projected/ab366b61-1428-4608-8ac3-2bb8063e88f2-kube-api-access-gbmms\") pod \"cinder-ba30-account-create-update-mkpmg\" (UID: \"ab366b61-1428-4608-8ac3-2bb8063e88f2\") " pod="openstack/cinder-ba30-account-create-update-mkpmg" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.735981 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r572\" (UniqueName: 
\"kubernetes.io/projected/aaf29db5-1c89-453b-8632-65a429e68374-kube-api-access-9r572\") pod \"cloudkitty-db-create-rxdz9\" (UID: \"aaf29db5-1c89-453b-8632-65a429e68374\") " pod="openstack/cloudkitty-db-create-rxdz9" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.736020 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-488tb\" (UniqueName: \"kubernetes.io/projected/aad1e993-49a2-4984-9bf0-11c2a4190fd3-kube-api-access-488tb\") pod \"barbican-1ccd-account-create-update-s59r2\" (UID: \"aad1e993-49a2-4984-9bf0-11c2a4190fd3\") " pod="openstack/barbican-1ccd-account-create-update-s59r2" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.736115 4672 reconciler_common.go:293] "Volume detached for volume \"pvc-2d40fcb6-031f-4b02-8b6e-a6b32aaabc38\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d40fcb6-031f-4b02-8b6e-a6b32aaabc38\") on node \"crc\" DevicePath \"\"" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.736688 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aaf29db5-1c89-453b-8632-65a429e68374-operator-scripts\") pod \"cloudkitty-db-create-rxdz9\" (UID: \"aaf29db5-1c89-453b-8632-65a429e68374\") " pod="openstack/cloudkitty-db-create-rxdz9" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.736931 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab366b61-1428-4608-8ac3-2bb8063e88f2-operator-scripts\") pod \"cinder-ba30-account-create-update-mkpmg\" (UID: \"ab366b61-1428-4608-8ac3-2bb8063e88f2\") " pod="openstack/cinder-ba30-account-create-update-mkpmg" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.740778 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-xv2td"] Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.770407 4672 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r572\" (UniqueName: \"kubernetes.io/projected/aaf29db5-1c89-453b-8632-65a429e68374-kube-api-access-9r572\") pod \"cloudkitty-db-create-rxdz9\" (UID: \"aaf29db5-1c89-453b-8632-65a429e68374\") " pod="openstack/cloudkitty-db-create-rxdz9" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.799115 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-rxdz9" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.823446 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-7fmd6"] Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.824813 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-7fmd6" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.826451 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbmms\" (UniqueName: \"kubernetes.io/projected/ab366b61-1428-4608-8ac3-2bb8063e88f2-kube-api-access-gbmms\") pod \"cinder-ba30-account-create-update-mkpmg\" (UID: \"ab366b61-1428-4608-8ac3-2bb8063e88f2\") " pod="openstack/cinder-ba30-account-create-update-mkpmg" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.827289 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2hf9g" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.827438 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.827614 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.831649 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-7fmd6"] Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.837085 4672 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb24c242-f54a-40ba-8c88-f3dbed463abd-operator-scripts\") pod \"barbican-db-create-xv2td\" (UID: \"fb24c242-f54a-40ba-8c88-f3dbed463abd\") " pod="openstack/barbican-db-create-xv2td" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.837158 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.837186 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aad1e993-49a2-4984-9bf0-11c2a4190fd3-operator-scripts\") pod \"barbican-1ccd-account-create-update-s59r2\" (UID: \"aad1e993-49a2-4984-9bf0-11c2a4190fd3\") " pod="openstack/barbican-1ccd-account-create-update-s59r2" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.837235 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbfh6\" (UniqueName: \"kubernetes.io/projected/fb24c242-f54a-40ba-8c88-f3dbed463abd-kube-api-access-jbfh6\") pod \"barbican-db-create-xv2td\" (UID: \"fb24c242-f54a-40ba-8c88-f3dbed463abd\") " pod="openstack/barbican-db-create-xv2td" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.837268 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-488tb\" (UniqueName: \"kubernetes.io/projected/aad1e993-49a2-4984-9bf0-11c2a4190fd3-kube-api-access-488tb\") pod \"barbican-1ccd-account-create-update-s59r2\" (UID: \"aad1e993-49a2-4984-9bf0-11c2a4190fd3\") " pod="openstack/barbican-1ccd-account-create-update-s59r2" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.838810 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/aad1e993-49a2-4984-9bf0-11c2a4190fd3-operator-scripts\") pod \"barbican-1ccd-account-create-update-s59r2\" (UID: \"aad1e993-49a2-4984-9bf0-11c2a4190fd3\") " pod="openstack/barbican-1ccd-account-create-update-s59r2" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.846689 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-hrfjv"] Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.847851 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-hrfjv" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.878615 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-670a-account-create-update-7268z"] Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.884877 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-hrfjv"] Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.884975 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-670a-account-create-update-7268z" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.889829 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.906644 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-670a-account-create-update-7268z"] Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.921689 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-488tb\" (UniqueName: \"kubernetes.io/projected/aad1e993-49a2-4984-9bf0-11c2a4190fd3-kube-api-access-488tb\") pod \"barbican-1ccd-account-create-update-s59r2\" (UID: \"aad1e993-49a2-4984-9bf0-11c2a4190fd3\") " pod="openstack/barbican-1ccd-account-create-update-s59r2" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.938470 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xn2w\" (UniqueName: \"kubernetes.io/projected/6a856bff-885d-46ef-8ce3-300c89cfae1f-kube-api-access-9xn2w\") pod \"keystone-db-sync-7fmd6\" (UID: \"6a856bff-885d-46ef-8ce3-300c89cfae1f\") " pod="openstack/keystone-db-sync-7fmd6" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.938569 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a856bff-885d-46ef-8ce3-300c89cfae1f-config-data\") pod \"keystone-db-sync-7fmd6\" (UID: \"6a856bff-885d-46ef-8ce3-300c89cfae1f\") " pod="openstack/keystone-db-sync-7fmd6" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.938603 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbfh6\" (UniqueName: \"kubernetes.io/projected/fb24c242-f54a-40ba-8c88-f3dbed463abd-kube-api-access-jbfh6\") pod \"barbican-db-create-xv2td\" (UID: \"fb24c242-f54a-40ba-8c88-f3dbed463abd\") " 
pod="openstack/barbican-db-create-xv2td" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.938702 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb24c242-f54a-40ba-8c88-f3dbed463abd-operator-scripts\") pod \"barbican-db-create-xv2td\" (UID: \"fb24c242-f54a-40ba-8c88-f3dbed463abd\") " pod="openstack/barbican-db-create-xv2td" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.938719 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a856bff-885d-46ef-8ce3-300c89cfae1f-combined-ca-bundle\") pod \"keystone-db-sync-7fmd6\" (UID: \"6a856bff-885d-46ef-8ce3-300c89cfae1f\") " pod="openstack/keystone-db-sync-7fmd6" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.938739 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23ac1021-6630-413d-8f59-2ee8de8b22f6-operator-scripts\") pod \"neutron-db-create-hrfjv\" (UID: \"23ac1021-6630-413d-8f59-2ee8de8b22f6\") " pod="openstack/neutron-db-create-hrfjv" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.938760 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxnzn\" (UniqueName: \"kubernetes.io/projected/23ac1021-6630-413d-8f59-2ee8de8b22f6-kube-api-access-jxnzn\") pod \"neutron-db-create-hrfjv\" (UID: \"23ac1021-6630-413d-8f59-2ee8de8b22f6\") " pod="openstack/neutron-db-create-hrfjv" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.939425 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb24c242-f54a-40ba-8c88-f3dbed463abd-operator-scripts\") pod \"barbican-db-create-xv2td\" (UID: \"fb24c242-f54a-40ba-8c88-f3dbed463abd\") " 
pod="openstack/barbican-db-create-xv2td" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.955683 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bjfpp" event={"ID":"7508897a-e56b-444c-87c2-9d1cbc41170f","Type":"ContainerStarted","Data":"7dee2536b419de0589ef67fccc8660203bca4f9361bd99d65d873831aa236b7c"} Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.971541 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.975881 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"878cc257-0a03-44ea-ae70-356195dc5427","Type":"ContainerDied","Data":"7b5909351e992c394722dc301303b2f74942536825b506aed07a09cfe0864708"} Feb 17 16:22:56 crc kubenswrapper[4672]: I0217 16:22:56.975932 4672 scope.go:117] "RemoveContainer" containerID="ded6c95f43c9bd5001fe183b6701f3002c5ee4fc88d8a915bbc729bc99f91823" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.038041 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbfh6\" (UniqueName: \"kubernetes.io/projected/fb24c242-f54a-40ba-8c88-f3dbed463abd-kube-api-access-jbfh6\") pod \"barbican-db-create-xv2td\" (UID: \"fb24c242-f54a-40ba-8c88-f3dbed463abd\") " pod="openstack/barbican-db-create-xv2td" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.043977 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a856bff-885d-46ef-8ce3-300c89cfae1f-config-data\") pod \"keystone-db-sync-7fmd6\" (UID: \"6a856bff-885d-46ef-8ce3-300c89cfae1f\") " pod="openstack/keystone-db-sync-7fmd6" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.044083 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/76e79bb4-99d3-4b9b-b496-f50ec996f5d4-operator-scripts\") pod \"neutron-670a-account-create-update-7268z\" (UID: \"76e79bb4-99d3-4b9b-b496-f50ec996f5d4\") " pod="openstack/neutron-670a-account-create-update-7268z" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.044111 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twrtx\" (UniqueName: \"kubernetes.io/projected/76e79bb4-99d3-4b9b-b496-f50ec996f5d4-kube-api-access-twrtx\") pod \"neutron-670a-account-create-update-7268z\" (UID: \"76e79bb4-99d3-4b9b-b496-f50ec996f5d4\") " pod="openstack/neutron-670a-account-create-update-7268z" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.044184 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a856bff-885d-46ef-8ce3-300c89cfae1f-combined-ca-bundle\") pod \"keystone-db-sync-7fmd6\" (UID: \"6a856bff-885d-46ef-8ce3-300c89cfae1f\") " pod="openstack/keystone-db-sync-7fmd6" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.044203 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23ac1021-6630-413d-8f59-2ee8de8b22f6-operator-scripts\") pod \"neutron-db-create-hrfjv\" (UID: \"23ac1021-6630-413d-8f59-2ee8de8b22f6\") " pod="openstack/neutron-db-create-hrfjv" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.044224 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxnzn\" (UniqueName: \"kubernetes.io/projected/23ac1021-6630-413d-8f59-2ee8de8b22f6-kube-api-access-jxnzn\") pod \"neutron-db-create-hrfjv\" (UID: \"23ac1021-6630-413d-8f59-2ee8de8b22f6\") " pod="openstack/neutron-db-create-hrfjv" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.044311 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9xn2w\" (UniqueName: \"kubernetes.io/projected/6a856bff-885d-46ef-8ce3-300c89cfae1f-kube-api-access-9xn2w\") pod \"keystone-db-sync-7fmd6\" (UID: \"6a856bff-885d-46ef-8ce3-300c89cfae1f\") " pod="openstack/keystone-db-sync-7fmd6" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.056629 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a856bff-885d-46ef-8ce3-300c89cfae1f-combined-ca-bundle\") pod \"keystone-db-sync-7fmd6\" (UID: \"6a856bff-885d-46ef-8ce3-300c89cfae1f\") " pod="openstack/keystone-db-sync-7fmd6" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.057412 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23ac1021-6630-413d-8f59-2ee8de8b22f6-operator-scripts\") pod \"neutron-db-create-hrfjv\" (UID: \"23ac1021-6630-413d-8f59-2ee8de8b22f6\") " pod="openstack/neutron-db-create-hrfjv" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.058487 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a856bff-885d-46ef-8ce3-300c89cfae1f-config-data\") pod \"keystone-db-sync-7fmd6\" (UID: \"6a856bff-885d-46ef-8ce3-300c89cfae1f\") " pod="openstack/keystone-db-sync-7fmd6" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.069537 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-8bfa-account-create-update-v499m"] Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.075192 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-8bfa-account-create-update-v499m" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.082177 4672 scope.go:117] "RemoveContainer" containerID="d211f7c0a6efc1d39c5a671b547c6d127b5eec7be3d091240b64d2494b576ee4" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.085663 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.086031 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-db-secret" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.098311 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-8bfa-account-create-update-v499m"] Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.103057 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.114365 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xn2w\" (UniqueName: \"kubernetes.io/projected/6a856bff-885d-46ef-8ce3-300c89cfae1f-kube-api-access-9xn2w\") pod \"keystone-db-sync-7fmd6\" (UID: \"6a856bff-885d-46ef-8ce3-300c89cfae1f\") " pod="openstack/keystone-db-sync-7fmd6" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.117665 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ba30-account-create-update-mkpmg" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.120187 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxnzn\" (UniqueName: \"kubernetes.io/projected/23ac1021-6630-413d-8f59-2ee8de8b22f6-kube-api-access-jxnzn\") pod \"neutron-db-create-hrfjv\" (UID: \"23ac1021-6630-413d-8f59-2ee8de8b22f6\") " pod="openstack/neutron-db-create-hrfjv" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.151958 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rvcz\" (UniqueName: \"kubernetes.io/projected/dfcd0d26-153d-463d-b38b-35b9fdbe6a53-kube-api-access-2rvcz\") pod \"cloudkitty-8bfa-account-create-update-v499m\" (UID: \"dfcd0d26-153d-463d-b38b-35b9fdbe6a53\") " pod="openstack/cloudkitty-8bfa-account-create-update-v499m" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.152065 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfcd0d26-153d-463d-b38b-35b9fdbe6a53-operator-scripts\") pod \"cloudkitty-8bfa-account-create-update-v499m\" (UID: \"dfcd0d26-153d-463d-b38b-35b9fdbe6a53\") " pod="openstack/cloudkitty-8bfa-account-create-update-v499m" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.152146 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76e79bb4-99d3-4b9b-b496-f50ec996f5d4-operator-scripts\") pod \"neutron-670a-account-create-update-7268z\" (UID: \"76e79bb4-99d3-4b9b-b496-f50ec996f5d4\") " pod="openstack/neutron-670a-account-create-update-7268z" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.152172 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twrtx\" (UniqueName: 
\"kubernetes.io/projected/76e79bb4-99d3-4b9b-b496-f50ec996f5d4-kube-api-access-twrtx\") pod \"neutron-670a-account-create-update-7268z\" (UID: \"76e79bb4-99d3-4b9b-b496-f50ec996f5d4\") " pod="openstack/neutron-670a-account-create-update-7268z" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.153166 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.153658 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76e79bb4-99d3-4b9b-b496-f50ec996f5d4-operator-scripts\") pod \"neutron-670a-account-create-update-7268z\" (UID: \"76e79bb4-99d3-4b9b-b496-f50ec996f5d4\") " pod="openstack/neutron-670a-account-create-update-7268z" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.155312 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.171312 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-bjfpp" podStartSLOduration=10.171288907 podStartE2EDuration="10.171288907s" podCreationTimestamp="2026-02-17 16:22:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:22:57.142824346 +0000 UTC m=+1185.896913068" watchObservedRunningTime="2026-02-17 16:22:57.171288907 +0000 UTC m=+1185.925377649" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.194731 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.194903 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-49mpb" Feb 17 16:22:57 crc 
kubenswrapper[4672]: I0217 16:22:57.195014 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.195120 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.195260 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.195460 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.195682 4672 scope.go:117] "RemoveContainer" containerID="81b9ff72fcf90aafc0a053c5d22382d36d35290cdf95ef286a4d0cc62e6eaff3" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.211772 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.211957 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.212056 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.217710 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.221157 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-1ccd-account-create-update-s59r2" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.248031 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twrtx\" (UniqueName: \"kubernetes.io/projected/76e79bb4-99d3-4b9b-b496-f50ec996f5d4-kube-api-access-twrtx\") pod \"neutron-670a-account-create-update-7268z\" (UID: \"76e79bb4-99d3-4b9b-b496-f50ec996f5d4\") " pod="openstack/neutron-670a-account-create-update-7268z" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.250574 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-xv2td" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.253413 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rvcz\" (UniqueName: \"kubernetes.io/projected/dfcd0d26-153d-463d-b38b-35b9fdbe6a53-kube-api-access-2rvcz\") pod \"cloudkitty-8bfa-account-create-update-v499m\" (UID: \"dfcd0d26-153d-463d-b38b-35b9fdbe6a53\") " pod="openstack/cloudkitty-8bfa-account-create-update-v499m" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.253447 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfcd0d26-153d-463d-b38b-35b9fdbe6a53-operator-scripts\") pod \"cloudkitty-8bfa-account-create-update-v499m\" (UID: \"dfcd0d26-153d-463d-b38b-35b9fdbe6a53\") " pod="openstack/cloudkitty-8bfa-account-create-update-v499m" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.253470 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1107a46-b916-4fe7-b4cc-a6576f242ec0-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"e1107a46-b916-4fe7-b4cc-a6576f242ec0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc 
kubenswrapper[4672]: I0217 16:22:57.253492 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e1107a46-b916-4fe7-b4cc-a6576f242ec0-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"e1107a46-b916-4fe7-b4cc-a6576f242ec0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.253528 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e1107a46-b916-4fe7-b4cc-a6576f242ec0-config\") pod \"prometheus-metric-storage-0\" (UID: \"e1107a46-b916-4fe7-b4cc-a6576f242ec0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.253546 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e1107a46-b916-4fe7-b4cc-a6576f242ec0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e1107a46-b916-4fe7-b4cc-a6576f242ec0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.253570 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2d40fcb6-031f-4b02-8b6e-a6b32aaabc38\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d40fcb6-031f-4b02-8b6e-a6b32aaabc38\") pod \"prometheus-metric-storage-0\" (UID: \"e1107a46-b916-4fe7-b4cc-a6576f242ec0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.253592 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e1107a46-b916-4fe7-b4cc-a6576f242ec0-tls-assets\") pod 
\"prometheus-metric-storage-0\" (UID: \"e1107a46-b916-4fe7-b4cc-a6576f242ec0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.253638 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e1107a46-b916-4fe7-b4cc-a6576f242ec0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e1107a46-b916-4fe7-b4cc-a6576f242ec0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.253663 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e1107a46-b916-4fe7-b4cc-a6576f242ec0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e1107a46-b916-4fe7-b4cc-a6576f242ec0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.253689 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-445gd\" (UniqueName: \"kubernetes.io/projected/e1107a46-b916-4fe7-b4cc-a6576f242ec0-kube-api-access-445gd\") pod \"prometheus-metric-storage-0\" (UID: \"e1107a46-b916-4fe7-b4cc-a6576f242ec0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.253727 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e1107a46-b916-4fe7-b4cc-a6576f242ec0-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"e1107a46-b916-4fe7-b4cc-a6576f242ec0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.253765 4672 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e1107a46-b916-4fe7-b4cc-a6576f242ec0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e1107a46-b916-4fe7-b4cc-a6576f242ec0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.253782 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e1107a46-b916-4fe7-b4cc-a6576f242ec0-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"e1107a46-b916-4fe7-b4cc-a6576f242ec0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.253802 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e1107a46-b916-4fe7-b4cc-a6576f242ec0-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"e1107a46-b916-4fe7-b4cc-a6576f242ec0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.255074 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfcd0d26-153d-463d-b38b-35b9fdbe6a53-operator-scripts\") pod \"cloudkitty-8bfa-account-create-update-v499m\" (UID: \"dfcd0d26-153d-463d-b38b-35b9fdbe6a53\") " pod="openstack/cloudkitty-8bfa-account-create-update-v499m" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.277483 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-7fmd6" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.282082 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rvcz\" (UniqueName: \"kubernetes.io/projected/dfcd0d26-153d-463d-b38b-35b9fdbe6a53-kube-api-access-2rvcz\") pod \"cloudkitty-8bfa-account-create-update-v499m\" (UID: \"dfcd0d26-153d-463d-b38b-35b9fdbe6a53\") " pod="openstack/cloudkitty-8bfa-account-create-update-v499m" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.293953 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-hrfjv" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.321928 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-670a-account-create-update-7268z" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.346672 4672 scope.go:117] "RemoveContainer" containerID="8adee34dd3fa75a44a03d90eb2604f154c0a930cabee81bb6de7f08825978692" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.355634 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e1107a46-b916-4fe7-b4cc-a6576f242ec0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e1107a46-b916-4fe7-b4cc-a6576f242ec0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.355688 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e1107a46-b916-4fe7-b4cc-a6576f242ec0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e1107a46-b916-4fe7-b4cc-a6576f242ec0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.355721 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-445gd\" (UniqueName: \"kubernetes.io/projected/e1107a46-b916-4fe7-b4cc-a6576f242ec0-kube-api-access-445gd\") pod \"prometheus-metric-storage-0\" (UID: \"e1107a46-b916-4fe7-b4cc-a6576f242ec0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.355770 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e1107a46-b916-4fe7-b4cc-a6576f242ec0-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"e1107a46-b916-4fe7-b4cc-a6576f242ec0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.355813 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e1107a46-b916-4fe7-b4cc-a6576f242ec0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e1107a46-b916-4fe7-b4cc-a6576f242ec0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.355833 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e1107a46-b916-4fe7-b4cc-a6576f242ec0-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"e1107a46-b916-4fe7-b4cc-a6576f242ec0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.355855 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e1107a46-b916-4fe7-b4cc-a6576f242ec0-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"e1107a46-b916-4fe7-b4cc-a6576f242ec0\") " 
pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.355895 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1107a46-b916-4fe7-b4cc-a6576f242ec0-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"e1107a46-b916-4fe7-b4cc-a6576f242ec0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.355911 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e1107a46-b916-4fe7-b4cc-a6576f242ec0-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"e1107a46-b916-4fe7-b4cc-a6576f242ec0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.355932 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e1107a46-b916-4fe7-b4cc-a6576f242ec0-config\") pod \"prometheus-metric-storage-0\" (UID: \"e1107a46-b916-4fe7-b4cc-a6576f242ec0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.355948 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e1107a46-b916-4fe7-b4cc-a6576f242ec0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e1107a46-b916-4fe7-b4cc-a6576f242ec0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.355973 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2d40fcb6-031f-4b02-8b6e-a6b32aaabc38\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d40fcb6-031f-4b02-8b6e-a6b32aaabc38\") pod 
\"prometheus-metric-storage-0\" (UID: \"e1107a46-b916-4fe7-b4cc-a6576f242ec0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.355994 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e1107a46-b916-4fe7-b4cc-a6576f242ec0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e1107a46-b916-4fe7-b4cc-a6576f242ec0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.361263 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e1107a46-b916-4fe7-b4cc-a6576f242ec0-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"e1107a46-b916-4fe7-b4cc-a6576f242ec0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.361762 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e1107a46-b916-4fe7-b4cc-a6576f242ec0-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"e1107a46-b916-4fe7-b4cc-a6576f242ec0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.365123 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e1107a46-b916-4fe7-b4cc-a6576f242ec0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e1107a46-b916-4fe7-b4cc-a6576f242ec0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.379524 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/e1107a46-b916-4fe7-b4cc-a6576f242ec0-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"e1107a46-b916-4fe7-b4cc-a6576f242ec0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.379934 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e1107a46-b916-4fe7-b4cc-a6576f242ec0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e1107a46-b916-4fe7-b4cc-a6576f242ec0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.384043 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e1107a46-b916-4fe7-b4cc-a6576f242ec0-config\") pod \"prometheus-metric-storage-0\" (UID: \"e1107a46-b916-4fe7-b4cc-a6576f242ec0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.386936 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e1107a46-b916-4fe7-b4cc-a6576f242ec0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e1107a46-b916-4fe7-b4cc-a6576f242ec0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.399360 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e1107a46-b916-4fe7-b4cc-a6576f242ec0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e1107a46-b916-4fe7-b4cc-a6576f242ec0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.399464 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/e1107a46-b916-4fe7-b4cc-a6576f242ec0-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"e1107a46-b916-4fe7-b4cc-a6576f242ec0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.408524 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-445gd\" (UniqueName: \"kubernetes.io/projected/e1107a46-b916-4fe7-b4cc-a6576f242ec0-kube-api-access-445gd\") pod \"prometheus-metric-storage-0\" (UID: \"e1107a46-b916-4fe7-b4cc-a6576f242ec0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.411854 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1107a46-b916-4fe7-b4cc-a6576f242ec0-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"e1107a46-b916-4fe7-b4cc-a6576f242ec0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.424018 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e1107a46-b916-4fe7-b4cc-a6576f242ec0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e1107a46-b916-4fe7-b4cc-a6576f242ec0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.439949 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-8bfa-account-create-update-v499m" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.456328 4672 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.456385 4672 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2d40fcb6-031f-4b02-8b6e-a6b32aaabc38\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d40fcb6-031f-4b02-8b6e-a6b32aaabc38\") pod \"prometheus-metric-storage-0\" (UID: \"e1107a46-b916-4fe7-b4cc-a6576f242ec0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/703a962647886df8f581706a29afc229b08eaf30613cfc7e75745da710408f03/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.519309 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-dgcpz"] Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.580522 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2d40fcb6-031f-4b02-8b6e-a6b32aaabc38\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d40fcb6-031f-4b02-8b6e-a6b32aaabc38\") pod \"prometheus-metric-storage-0\" (UID: \"e1107a46-b916-4fe7-b4cc-a6576f242ec0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.589147 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.824647 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-rxdz9"] Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.967129 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4275129d-2c75-4df7-9c23-0f883ec0733d" path="/var/lib/kubelet/pods/4275129d-2c75-4df7-9c23-0f883ec0733d/volumes" Feb 17 16:22:57 crc kubenswrapper[4672]: I0217 16:22:57.968205 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="878cc257-0a03-44ea-ae70-356195dc5427" path="/var/lib/kubelet/pods/878cc257-0a03-44ea-ae70-356195dc5427/volumes" Feb 17 16:22:58 crc kubenswrapper[4672]: I0217 16:22:58.025837 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-rxdz9" event={"ID":"aaf29db5-1c89-453b-8632-65a429e68374","Type":"ContainerStarted","Data":"0bccc9c1a24c6eff1f149c95089c6edb937f4b0d7df0d7336b7ee3d6905c3820"} Feb 17 16:22:58 crc kubenswrapper[4672]: I0217 16:22:58.045244 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-dgcpz" event={"ID":"06f2f111-8f19-433e-bb63-b57167c82e19","Type":"ContainerStarted","Data":"287424e30f511ad554f52bf6fa7cca97783aebd7260d8e28ab00eca276463a40"} Feb 17 16:22:58 crc kubenswrapper[4672]: I0217 16:22:58.167721 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-hrfjv"] Feb 17 16:22:58 crc kubenswrapper[4672]: I0217 16:22:58.191261 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ba30-account-create-update-mkpmg"] Feb 17 16:22:58 crc kubenswrapper[4672]: I0217 16:22:58.204146 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-1ccd-account-create-update-s59r2"] Feb 17 16:22:58 crc kubenswrapper[4672]: I0217 16:22:58.412079 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-db-create-xv2td"] Feb 17 16:22:58 crc kubenswrapper[4672]: I0217 16:22:58.427635 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-670a-account-create-update-7268z"] Feb 17 16:22:58 crc kubenswrapper[4672]: I0217 16:22:58.524602 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:22:58 crc kubenswrapper[4672]: I0217 16:22:58.533486 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-8bfa-account-create-update-v499m"] Feb 17 16:22:58 crc kubenswrapper[4672]: I0217 16:22:58.716107 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-7fmd6"] Feb 17 16:22:58 crc kubenswrapper[4672]: W0217 16:22:58.725780 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a856bff_885d_46ef_8ce3_300c89cfae1f.slice/crio-fb07326d265dc08b0cef27e1557ded964b16a53a9325eb2ce38b47af73b283b6 WatchSource:0}: Error finding container fb07326d265dc08b0cef27e1557ded964b16a53a9325eb2ce38b47af73b283b6: Status 404 returned error can't find the container with id fb07326d265dc08b0cef27e1557ded964b16a53a9325eb2ce38b47af73b283b6 Feb 17 16:22:59 crc kubenswrapper[4672]: I0217 16:22:59.062113 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-hrfjv" event={"ID":"23ac1021-6630-413d-8f59-2ee8de8b22f6","Type":"ContainerStarted","Data":"86e446ebcf7786cb2860bea5fd7dea3f3b795218dcc2980195f54b2e5d27a3b2"} Feb 17 16:22:59 crc kubenswrapper[4672]: I0217 16:22:59.062156 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-hrfjv" event={"ID":"23ac1021-6630-413d-8f59-2ee8de8b22f6","Type":"ContainerStarted","Data":"ef631ad657f5fb004515fd8811878cce09343527b6c4f9a62385e1ba0dcfaff4"} Feb 17 16:22:59 crc kubenswrapper[4672]: I0217 16:22:59.064461 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-1ccd-account-create-update-s59r2" event={"ID":"aad1e993-49a2-4984-9bf0-11c2a4190fd3","Type":"ContainerStarted","Data":"27c1b61f7898fee06e951aa7fa606d09a4b72ab736d84e8ea4ecd6751612a292"} Feb 17 16:22:59 crc kubenswrapper[4672]: I0217 16:22:59.064520 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1ccd-account-create-update-s59r2" event={"ID":"aad1e993-49a2-4984-9bf0-11c2a4190fd3","Type":"ContainerStarted","Data":"db85c9a95df0ea8fca39d844989639bff7a4d7ba9345b72decfa27f154a644ce"} Feb 17 16:22:59 crc kubenswrapper[4672]: I0217 16:22:59.066357 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-670a-account-create-update-7268z" event={"ID":"76e79bb4-99d3-4b9b-b496-f50ec996f5d4","Type":"ContainerStarted","Data":"65eeb9157bdb3ade0b0de2e7089ead67024776eae335626b4d1b2730358502b3"} Feb 17 16:22:59 crc kubenswrapper[4672]: I0217 16:22:59.066381 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-670a-account-create-update-7268z" event={"ID":"76e79bb4-99d3-4b9b-b496-f50ec996f5d4","Type":"ContainerStarted","Data":"e42489f88d9d27d1f43b1fe6209282c83c4f47f9360e9db0fa297793ccfb1a82"} Feb 17 16:22:59 crc kubenswrapper[4672]: I0217 16:22:59.069415 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xv2td" event={"ID":"fb24c242-f54a-40ba-8c88-f3dbed463abd","Type":"ContainerStarted","Data":"87f21312737fd96118609fa1b74c8bd486cbd35c1b313058f6cd012a089d73c7"} Feb 17 16:22:59 crc kubenswrapper[4672]: I0217 16:22:59.071625 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ba30-account-create-update-mkpmg" event={"ID":"ab366b61-1428-4608-8ac3-2bb8063e88f2","Type":"ContainerStarted","Data":"539ecff9b25c888a8a0c59ec69ed05b6535c9bfc9f6358a2194a7eed93b07f25"} Feb 17 16:22:59 crc kubenswrapper[4672]: I0217 16:22:59.071650 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-ba30-account-create-update-mkpmg" event={"ID":"ab366b61-1428-4608-8ac3-2bb8063e88f2","Type":"ContainerStarted","Data":"751edcc557315ef310d63cce839d2fc6d6908dc8d9d61416a9bb26c94431c139"} Feb 17 16:22:59 crc kubenswrapper[4672]: I0217 16:22:59.075630 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e1107a46-b916-4fe7-b4cc-a6576f242ec0","Type":"ContainerStarted","Data":"ee45bddc968b643db9778bf3ef5eb67dc4edc4f75ed5a3110d6a176d962bd91d"} Feb 17 16:22:59 crc kubenswrapper[4672]: I0217 16:22:59.077149 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-7fmd6" event={"ID":"6a856bff-885d-46ef-8ce3-300c89cfae1f","Type":"ContainerStarted","Data":"fb07326d265dc08b0cef27e1557ded964b16a53a9325eb2ce38b47af73b283b6"} Feb 17 16:22:59 crc kubenswrapper[4672]: I0217 16:22:59.078790 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-8bfa-account-create-update-v499m" event={"ID":"dfcd0d26-153d-463d-b38b-35b9fdbe6a53","Type":"ContainerStarted","Data":"4396e8cb80ac538b2ebb15ebf5dcbf3dd7d714fdbfdb5bb35a0ed0b116530cd2"} Feb 17 16:22:59 crc kubenswrapper[4672]: I0217 16:22:59.078821 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-8bfa-account-create-update-v499m" event={"ID":"dfcd0d26-153d-463d-b38b-35b9fdbe6a53","Type":"ContainerStarted","Data":"6789015dd8a153f7b0d7cbdbd325962f2e67a03e985afd0dc422469cc639b99d"} Feb 17 16:22:59 crc kubenswrapper[4672]: I0217 16:22:59.086167 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-hrfjv" podStartSLOduration=3.086144031 podStartE2EDuration="3.086144031s" podCreationTimestamp="2026-02-17 16:22:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:22:59.078264003 +0000 UTC m=+1187.832352735" 
watchObservedRunningTime="2026-02-17 16:22:59.086144031 +0000 UTC m=+1187.840232763" Feb 17 16:22:59 crc kubenswrapper[4672]: I0217 16:22:59.108196 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-db-create-rxdz9" podStartSLOduration=3.108177123 podStartE2EDuration="3.108177123s" podCreationTimestamp="2026-02-17 16:22:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:22:59.093734651 +0000 UTC m=+1187.847823383" watchObservedRunningTime="2026-02-17 16:22:59.108177123 +0000 UTC m=+1187.862265855" Feb 17 16:22:59 crc kubenswrapper[4672]: I0217 16:22:59.131358 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-670a-account-create-update-7268z" podStartSLOduration=3.131335334 podStartE2EDuration="3.131335334s" podCreationTimestamp="2026-02-17 16:22:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:22:59.106694153 +0000 UTC m=+1187.860782885" watchObservedRunningTime="2026-02-17 16:22:59.131335334 +0000 UTC m=+1187.885424066" Feb 17 16:22:59 crc kubenswrapper[4672]: I0217 16:22:59.158315 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-ba30-account-create-update-mkpmg" podStartSLOduration=3.158300076 podStartE2EDuration="3.158300076s" podCreationTimestamp="2026-02-17 16:22:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:22:59.121966597 +0000 UTC m=+1187.876055329" watchObservedRunningTime="2026-02-17 16:22:59.158300076 +0000 UTC m=+1187.912388808" Feb 17 16:22:59 crc kubenswrapper[4672]: I0217 16:22:59.163652 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/barbican-1ccd-account-create-update-s59r2" podStartSLOduration=3.1636350970000002 podStartE2EDuration="3.163635097s" podCreationTimestamp="2026-02-17 16:22:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:22:59.138566135 +0000 UTC m=+1187.892654877" watchObservedRunningTime="2026-02-17 16:22:59.163635097 +0000 UTC m=+1187.917723829" Feb 17 16:22:59 crc kubenswrapper[4672]: I0217 16:22:59.174430 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-xv2td" podStartSLOduration=3.174415381 podStartE2EDuration="3.174415381s" podCreationTimestamp="2026-02-17 16:22:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:22:59.154892146 +0000 UTC m=+1187.908980878" watchObservedRunningTime="2026-02-17 16:22:59.174415381 +0000 UTC m=+1187.928504113" Feb 17 16:22:59 crc kubenswrapper[4672]: I0217 16:22:59.188759 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-8bfa-account-create-update-v499m" podStartSLOduration=2.18873782 podStartE2EDuration="2.18873782s" podCreationTimestamp="2026-02-17 16:22:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:22:59.172017808 +0000 UTC m=+1187.926106540" watchObservedRunningTime="2026-02-17 16:22:59.18873782 +0000 UTC m=+1187.942826552" Feb 17 16:23:00 crc kubenswrapper[4672]: I0217 16:23:00.095363 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-rxdz9" event={"ID":"aaf29db5-1c89-453b-8632-65a429e68374","Type":"ContainerDied","Data":"a6b68fbc0dea0631fcd85b9189adbf348448827e893476ddaca4d4a50c7fe42a"} Feb 17 16:23:00 crc kubenswrapper[4672]: I0217 16:23:00.095762 4672 generic.go:334] 
"Generic (PLEG): container finished" podID="aaf29db5-1c89-453b-8632-65a429e68374" containerID="a6b68fbc0dea0631fcd85b9189adbf348448827e893476ddaca4d4a50c7fe42a" exitCode=0 Feb 17 16:23:00 crc kubenswrapper[4672]: I0217 16:23:00.102533 4672 generic.go:334] "Generic (PLEG): container finished" podID="aad1e993-49a2-4984-9bf0-11c2a4190fd3" containerID="27c1b61f7898fee06e951aa7fa606d09a4b72ab736d84e8ea4ecd6751612a292" exitCode=0 Feb 17 16:23:00 crc kubenswrapper[4672]: I0217 16:23:00.102621 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1ccd-account-create-update-s59r2" event={"ID":"aad1e993-49a2-4984-9bf0-11c2a4190fd3","Type":"ContainerDied","Data":"27c1b61f7898fee06e951aa7fa606d09a4b72ab736d84e8ea4ecd6751612a292"} Feb 17 16:23:00 crc kubenswrapper[4672]: I0217 16:23:00.106592 4672 generic.go:334] "Generic (PLEG): container finished" podID="76e79bb4-99d3-4b9b-b496-f50ec996f5d4" containerID="65eeb9157bdb3ade0b0de2e7089ead67024776eae335626b4d1b2730358502b3" exitCode=0 Feb 17 16:23:00 crc kubenswrapper[4672]: I0217 16:23:00.106623 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-670a-account-create-update-7268z" event={"ID":"76e79bb4-99d3-4b9b-b496-f50ec996f5d4","Type":"ContainerDied","Data":"65eeb9157bdb3ade0b0de2e7089ead67024776eae335626b4d1b2730358502b3"} Feb 17 16:23:00 crc kubenswrapper[4672]: I0217 16:23:00.111015 4672 generic.go:334] "Generic (PLEG): container finished" podID="7508897a-e56b-444c-87c2-9d1cbc41170f" containerID="7dee2536b419de0589ef67fccc8660203bca4f9361bd99d65d873831aa236b7c" exitCode=0 Feb 17 16:23:00 crc kubenswrapper[4672]: I0217 16:23:00.111125 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bjfpp" event={"ID":"7508897a-e56b-444c-87c2-9d1cbc41170f","Type":"ContainerDied","Data":"7dee2536b419de0589ef67fccc8660203bca4f9361bd99d65d873831aa236b7c"} Feb 17 16:23:00 crc kubenswrapper[4672]: I0217 16:23:00.113031 4672 generic.go:334] "Generic 
(PLEG): container finished" podID="fb24c242-f54a-40ba-8c88-f3dbed463abd" containerID="137f065b32b3c823f9f7f3fbe8b833f28a0d48e3bcaa3f659271b98e9cc20b80" exitCode=0 Feb 17 16:23:00 crc kubenswrapper[4672]: I0217 16:23:00.113120 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xv2td" event={"ID":"fb24c242-f54a-40ba-8c88-f3dbed463abd","Type":"ContainerDied","Data":"137f065b32b3c823f9f7f3fbe8b833f28a0d48e3bcaa3f659271b98e9cc20b80"} Feb 17 16:23:00 crc kubenswrapper[4672]: I0217 16:23:00.114851 4672 generic.go:334] "Generic (PLEG): container finished" podID="ab366b61-1428-4608-8ac3-2bb8063e88f2" containerID="539ecff9b25c888a8a0c59ec69ed05b6535c9bfc9f6358a2194a7eed93b07f25" exitCode=0 Feb 17 16:23:00 crc kubenswrapper[4672]: I0217 16:23:00.114901 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ba30-account-create-update-mkpmg" event={"ID":"ab366b61-1428-4608-8ac3-2bb8063e88f2","Type":"ContainerDied","Data":"539ecff9b25c888a8a0c59ec69ed05b6535c9bfc9f6358a2194a7eed93b07f25"} Feb 17 16:23:00 crc kubenswrapper[4672]: I0217 16:23:00.116799 4672 generic.go:334] "Generic (PLEG): container finished" podID="06f2f111-8f19-433e-bb63-b57167c82e19" containerID="f896f1fefbeeba338e7b53e25c15c071a98a9186350929435ebe360430f88c52" exitCode=0 Feb 17 16:23:00 crc kubenswrapper[4672]: I0217 16:23:00.116850 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-dgcpz" event={"ID":"06f2f111-8f19-433e-bb63-b57167c82e19","Type":"ContainerDied","Data":"f896f1fefbeeba338e7b53e25c15c071a98a9186350929435ebe360430f88c52"} Feb 17 16:23:00 crc kubenswrapper[4672]: I0217 16:23:00.119049 4672 generic.go:334] "Generic (PLEG): container finished" podID="dfcd0d26-153d-463d-b38b-35b9fdbe6a53" containerID="4396e8cb80ac538b2ebb15ebf5dcbf3dd7d714fdbfdb5bb35a0ed0b116530cd2" exitCode=0 Feb 17 16:23:00 crc kubenswrapper[4672]: I0217 16:23:00.119133 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cloudkitty-8bfa-account-create-update-v499m" event={"ID":"dfcd0d26-153d-463d-b38b-35b9fdbe6a53","Type":"ContainerDied","Data":"4396e8cb80ac538b2ebb15ebf5dcbf3dd7d714fdbfdb5bb35a0ed0b116530cd2"} Feb 17 16:23:00 crc kubenswrapper[4672]: I0217 16:23:00.122502 4672 generic.go:334] "Generic (PLEG): container finished" podID="23ac1021-6630-413d-8f59-2ee8de8b22f6" containerID="86e446ebcf7786cb2860bea5fd7dea3f3b795218dcc2980195f54b2e5d27a3b2" exitCode=0 Feb 17 16:23:00 crc kubenswrapper[4672]: I0217 16:23:00.122587 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-hrfjv" event={"ID":"23ac1021-6630-413d-8f59-2ee8de8b22f6","Type":"ContainerDied","Data":"86e446ebcf7786cb2860bea5fd7dea3f3b795218dcc2980195f54b2e5d27a3b2"} Feb 17 16:23:02 crc kubenswrapper[4672]: I0217 16:23:02.141625 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e1107a46-b916-4fe7-b4cc-a6576f242ec0","Type":"ContainerStarted","Data":"03396567c1af726acf7d0604a1d7a4c409400fa7b45c8b13cb609e93edc134a7"} Feb 17 16:23:02 crc kubenswrapper[4672]: I0217 16:23:02.960564 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.159323 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-670a-account-create-update-7268z" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.161290 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-rxdz9" event={"ID":"aaf29db5-1c89-453b-8632-65a429e68374","Type":"ContainerDied","Data":"0bccc9c1a24c6eff1f149c95089c6edb937f4b0d7df0d7336b7ee3d6905c3820"} Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.161343 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bccc9c1a24c6eff1f149c95089c6edb937f4b0d7df0d7336b7ee3d6905c3820" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.162758 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xv2td" event={"ID":"fb24c242-f54a-40ba-8c88-f3dbed463abd","Type":"ContainerDied","Data":"87f21312737fd96118609fa1b74c8bd486cbd35c1b313058f6cd012a089d73c7"} Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.162795 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87f21312737fd96118609fa1b74c8bd486cbd35c1b313058f6cd012a089d73c7" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.164271 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ba30-account-create-update-mkpmg" event={"ID":"ab366b61-1428-4608-8ac3-2bb8063e88f2","Type":"ContainerDied","Data":"751edcc557315ef310d63cce839d2fc6d6908dc8d9d61416a9bb26c94431c139"} Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.164330 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="751edcc557315ef310d63cce839d2fc6d6908dc8d9d61416a9bb26c94431c139" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.165774 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-dgcpz" event={"ID":"06f2f111-8f19-433e-bb63-b57167c82e19","Type":"ContainerDied","Data":"287424e30f511ad554f52bf6fa7cca97783aebd7260d8e28ab00eca276463a40"} Feb 17 16:23:04 
crc kubenswrapper[4672]: I0217 16:23:04.165801 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="287424e30f511ad554f52bf6fa7cca97783aebd7260d8e28ab00eca276463a40" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.168186 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-hrfjv" event={"ID":"23ac1021-6630-413d-8f59-2ee8de8b22f6","Type":"ContainerDied","Data":"ef631ad657f5fb004515fd8811878cce09343527b6c4f9a62385e1ba0dcfaff4"} Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.168337 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef631ad657f5fb004515fd8811878cce09343527b6c4f9a62385e1ba0dcfaff4" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.168860 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-xv2td" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.169591 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bjfpp" event={"ID":"7508897a-e56b-444c-87c2-9d1cbc41170f","Type":"ContainerDied","Data":"b84f2d2164f5fc0acee385c6d5c9b293464f28acd4dfaaaef33f081ed6fe77db"} Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.169665 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b84f2d2164f5fc0acee385c6d5c9b293464f28acd4dfaaaef33f081ed6fe77db" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.170816 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-8bfa-account-create-update-v499m" event={"ID":"dfcd0d26-153d-463d-b38b-35b9fdbe6a53","Type":"ContainerDied","Data":"6789015dd8a153f7b0d7cbdbd325962f2e67a03e985afd0dc422469cc639b99d"} Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.170899 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6789015dd8a153f7b0d7cbdbd325962f2e67a03e985afd0dc422469cc639b99d" Feb 17 
16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.172462 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1ccd-account-create-update-s59r2" event={"ID":"aad1e993-49a2-4984-9bf0-11c2a4190fd3","Type":"ContainerDied","Data":"db85c9a95df0ea8fca39d844989639bff7a4d7ba9345b72decfa27f154a644ce"} Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.172613 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db85c9a95df0ea8fca39d844989639bff7a4d7ba9345b72decfa27f154a644ce" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.173859 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-670a-account-create-update-7268z" event={"ID":"76e79bb4-99d3-4b9b-b496-f50ec996f5d4","Type":"ContainerDied","Data":"e42489f88d9d27d1f43b1fe6209282c83c4f47f9360e9db0fa297793ccfb1a82"} Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.173884 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-670a-account-create-update-7268z" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.173900 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e42489f88d9d27d1f43b1fe6209282c83c4f47f9360e9db0fa297793ccfb1a82" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.177807 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-1ccd-account-create-update-s59r2" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.204529 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-8bfa-account-create-update-v499m" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.225795 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twrtx\" (UniqueName: \"kubernetes.io/projected/76e79bb4-99d3-4b9b-b496-f50ec996f5d4-kube-api-access-twrtx\") pod \"76e79bb4-99d3-4b9b-b496-f50ec996f5d4\" (UID: \"76e79bb4-99d3-4b9b-b496-f50ec996f5d4\") " Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.225924 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76e79bb4-99d3-4b9b-b496-f50ec996f5d4-operator-scripts\") pod \"76e79bb4-99d3-4b9b-b496-f50ec996f5d4\" (UID: \"76e79bb4-99d3-4b9b-b496-f50ec996f5d4\") " Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.227272 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76e79bb4-99d3-4b9b-b496-f50ec996f5d4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "76e79bb4-99d3-4b9b-b496-f50ec996f5d4" (UID: "76e79bb4-99d3-4b9b-b496-f50ec996f5d4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.234181 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76e79bb4-99d3-4b9b-b496-f50ec996f5d4-kube-api-access-twrtx" (OuterVolumeSpecName: "kube-api-access-twrtx") pod "76e79bb4-99d3-4b9b-b496-f50ec996f5d4" (UID: "76e79bb4-99d3-4b9b-b496-f50ec996f5d4"). InnerVolumeSpecName "kube-api-access-twrtx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.237235 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ba30-account-create-update-mkpmg" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.295864 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-hrfjv" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.300685 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-bjfpp" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.318806 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-dgcpz" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.320067 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-rxdz9" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.329311 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb24c242-f54a-40ba-8c88-f3dbed463abd-operator-scripts\") pod \"fb24c242-f54a-40ba-8c88-f3dbed463abd\" (UID: \"fb24c242-f54a-40ba-8c88-f3dbed463abd\") " Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.329390 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-488tb\" (UniqueName: \"kubernetes.io/projected/aad1e993-49a2-4984-9bf0-11c2a4190fd3-kube-api-access-488tb\") pod \"aad1e993-49a2-4984-9bf0-11c2a4190fd3\" (UID: \"aad1e993-49a2-4984-9bf0-11c2a4190fd3\") " Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.329447 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbfh6\" (UniqueName: \"kubernetes.io/projected/fb24c242-f54a-40ba-8c88-f3dbed463abd-kube-api-access-jbfh6\") pod \"fb24c242-f54a-40ba-8c88-f3dbed463abd\" (UID: \"fb24c242-f54a-40ba-8c88-f3dbed463abd\") " Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.329478 4672 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aad1e993-49a2-4984-9bf0-11c2a4190fd3-operator-scripts\") pod \"aad1e993-49a2-4984-9bf0-11c2a4190fd3\" (UID: \"aad1e993-49a2-4984-9bf0-11c2a4190fd3\") " Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.329499 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfcd0d26-153d-463d-b38b-35b9fdbe6a53-operator-scripts\") pod \"dfcd0d26-153d-463d-b38b-35b9fdbe6a53\" (UID: \"dfcd0d26-153d-463d-b38b-35b9fdbe6a53\") " Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.329547 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbmms\" (UniqueName: \"kubernetes.io/projected/ab366b61-1428-4608-8ac3-2bb8063e88f2-kube-api-access-gbmms\") pod \"ab366b61-1428-4608-8ac3-2bb8063e88f2\" (UID: \"ab366b61-1428-4608-8ac3-2bb8063e88f2\") " Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.329612 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rvcz\" (UniqueName: \"kubernetes.io/projected/dfcd0d26-153d-463d-b38b-35b9fdbe6a53-kube-api-access-2rvcz\") pod \"dfcd0d26-153d-463d-b38b-35b9fdbe6a53\" (UID: \"dfcd0d26-153d-463d-b38b-35b9fdbe6a53\") " Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.329669 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab366b61-1428-4608-8ac3-2bb8063e88f2-operator-scripts\") pod \"ab366b61-1428-4608-8ac3-2bb8063e88f2\" (UID: \"ab366b61-1428-4608-8ac3-2bb8063e88f2\") " Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.330021 4672 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76e79bb4-99d3-4b9b-b496-f50ec996f5d4-operator-scripts\") on node \"crc\" 
DevicePath \"\"" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.330035 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twrtx\" (UniqueName: \"kubernetes.io/projected/76e79bb4-99d3-4b9b-b496-f50ec996f5d4-kube-api-access-twrtx\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.330076 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aad1e993-49a2-4984-9bf0-11c2a4190fd3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aad1e993-49a2-4984-9bf0-11c2a4190fd3" (UID: "aad1e993-49a2-4984-9bf0-11c2a4190fd3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.330397 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb24c242-f54a-40ba-8c88-f3dbed463abd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fb24c242-f54a-40ba-8c88-f3dbed463abd" (UID: "fb24c242-f54a-40ba-8c88-f3dbed463abd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.330726 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab366b61-1428-4608-8ac3-2bb8063e88f2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ab366b61-1428-4608-8ac3-2bb8063e88f2" (UID: "ab366b61-1428-4608-8ac3-2bb8063e88f2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.332155 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfcd0d26-153d-463d-b38b-35b9fdbe6a53-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dfcd0d26-153d-463d-b38b-35b9fdbe6a53" (UID: "dfcd0d26-153d-463d-b38b-35b9fdbe6a53"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.336310 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab366b61-1428-4608-8ac3-2bb8063e88f2-kube-api-access-gbmms" (OuterVolumeSpecName: "kube-api-access-gbmms") pod "ab366b61-1428-4608-8ac3-2bb8063e88f2" (UID: "ab366b61-1428-4608-8ac3-2bb8063e88f2"). InnerVolumeSpecName "kube-api-access-gbmms". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.340825 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb24c242-f54a-40ba-8c88-f3dbed463abd-kube-api-access-jbfh6" (OuterVolumeSpecName: "kube-api-access-jbfh6") pod "fb24c242-f54a-40ba-8c88-f3dbed463abd" (UID: "fb24c242-f54a-40ba-8c88-f3dbed463abd"). InnerVolumeSpecName "kube-api-access-jbfh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.341269 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfcd0d26-153d-463d-b38b-35b9fdbe6a53-kube-api-access-2rvcz" (OuterVolumeSpecName: "kube-api-access-2rvcz") pod "dfcd0d26-153d-463d-b38b-35b9fdbe6a53" (UID: "dfcd0d26-153d-463d-b38b-35b9fdbe6a53"). InnerVolumeSpecName "kube-api-access-2rvcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.341326 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aad1e993-49a2-4984-9bf0-11c2a4190fd3-kube-api-access-488tb" (OuterVolumeSpecName: "kube-api-access-488tb") pod "aad1e993-49a2-4984-9bf0-11c2a4190fd3" (UID: "aad1e993-49a2-4984-9bf0-11c2a4190fd3"). InnerVolumeSpecName "kube-api-access-488tb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.430987 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxx7w\" (UniqueName: \"kubernetes.io/projected/06f2f111-8f19-433e-bb63-b57167c82e19-kube-api-access-wxx7w\") pod \"06f2f111-8f19-433e-bb63-b57167c82e19\" (UID: \"06f2f111-8f19-433e-bb63-b57167c82e19\") " Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.431074 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23ac1021-6630-413d-8f59-2ee8de8b22f6-operator-scripts\") pod \"23ac1021-6630-413d-8f59-2ee8de8b22f6\" (UID: \"23ac1021-6630-413d-8f59-2ee8de8b22f6\") " Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.431094 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7508897a-e56b-444c-87c2-9d1cbc41170f-operator-scripts\") pod \"7508897a-e56b-444c-87c2-9d1cbc41170f\" (UID: \"7508897a-e56b-444c-87c2-9d1cbc41170f\") " Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.431155 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9r572\" (UniqueName: \"kubernetes.io/projected/aaf29db5-1c89-453b-8632-65a429e68374-kube-api-access-9r572\") pod \"aaf29db5-1c89-453b-8632-65a429e68374\" (UID: \"aaf29db5-1c89-453b-8632-65a429e68374\") " Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.431305 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aaf29db5-1c89-453b-8632-65a429e68374-operator-scripts\") pod \"aaf29db5-1c89-453b-8632-65a429e68374\" (UID: \"aaf29db5-1c89-453b-8632-65a429e68374\") " Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.431353 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06f2f111-8f19-433e-bb63-b57167c82e19-operator-scripts\") pod \"06f2f111-8f19-433e-bb63-b57167c82e19\" (UID: \"06f2f111-8f19-433e-bb63-b57167c82e19\") " Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.431373 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7vd5\" (UniqueName: \"kubernetes.io/projected/7508897a-e56b-444c-87c2-9d1cbc41170f-kube-api-access-z7vd5\") pod \"7508897a-e56b-444c-87c2-9d1cbc41170f\" (UID: \"7508897a-e56b-444c-87c2-9d1cbc41170f\") " Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.431416 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxnzn\" (UniqueName: \"kubernetes.io/projected/23ac1021-6630-413d-8f59-2ee8de8b22f6-kube-api-access-jxnzn\") pod \"23ac1021-6630-413d-8f59-2ee8de8b22f6\" (UID: \"23ac1021-6630-413d-8f59-2ee8de8b22f6\") " Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.433314 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23ac1021-6630-413d-8f59-2ee8de8b22f6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "23ac1021-6630-413d-8f59-2ee8de8b22f6" (UID: "23ac1021-6630-413d-8f59-2ee8de8b22f6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.433680 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7508897a-e56b-444c-87c2-9d1cbc41170f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7508897a-e56b-444c-87c2-9d1cbc41170f" (UID: "7508897a-e56b-444c-87c2-9d1cbc41170f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.433962 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aaf29db5-1c89-453b-8632-65a429e68374-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aaf29db5-1c89-453b-8632-65a429e68374" (UID: "aaf29db5-1c89-453b-8632-65a429e68374"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.434092 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06f2f111-8f19-433e-bb63-b57167c82e19-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "06f2f111-8f19-433e-bb63-b57167c82e19" (UID: "06f2f111-8f19-433e-bb63-b57167c82e19"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.434343 4672 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aad1e993-49a2-4984-9bf0-11c2a4190fd3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.434375 4672 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfcd0d26-153d-463d-b38b-35b9fdbe6a53-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.434406 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbmms\" (UniqueName: \"kubernetes.io/projected/ab366b61-1428-4608-8ac3-2bb8063e88f2-kube-api-access-gbmms\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.434423 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rvcz\" (UniqueName: 
\"kubernetes.io/projected/dfcd0d26-153d-463d-b38b-35b9fdbe6a53-kube-api-access-2rvcz\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.434432 4672 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab366b61-1428-4608-8ac3-2bb8063e88f2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.434441 4672 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aaf29db5-1c89-453b-8632-65a429e68374-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.434453 4672 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb24c242-f54a-40ba-8c88-f3dbed463abd-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.434461 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-488tb\" (UniqueName: \"kubernetes.io/projected/aad1e993-49a2-4984-9bf0-11c2a4190fd3-kube-api-access-488tb\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.434472 4672 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23ac1021-6630-413d-8f59-2ee8de8b22f6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.434485 4672 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7508897a-e56b-444c-87c2-9d1cbc41170f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.434494 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbfh6\" (UniqueName: 
\"kubernetes.io/projected/fb24c242-f54a-40ba-8c88-f3dbed463abd-kube-api-access-jbfh6\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.436275 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaf29db5-1c89-453b-8632-65a429e68374-kube-api-access-9r572" (OuterVolumeSpecName: "kube-api-access-9r572") pod "aaf29db5-1c89-453b-8632-65a429e68374" (UID: "aaf29db5-1c89-453b-8632-65a429e68374"). InnerVolumeSpecName "kube-api-access-9r572". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.437635 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7508897a-e56b-444c-87c2-9d1cbc41170f-kube-api-access-z7vd5" (OuterVolumeSpecName: "kube-api-access-z7vd5") pod "7508897a-e56b-444c-87c2-9d1cbc41170f" (UID: "7508897a-e56b-444c-87c2-9d1cbc41170f"). InnerVolumeSpecName "kube-api-access-z7vd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.439086 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23ac1021-6630-413d-8f59-2ee8de8b22f6-kube-api-access-jxnzn" (OuterVolumeSpecName: "kube-api-access-jxnzn") pod "23ac1021-6630-413d-8f59-2ee8de8b22f6" (UID: "23ac1021-6630-413d-8f59-2ee8de8b22f6"). InnerVolumeSpecName "kube-api-access-jxnzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.453524 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06f2f111-8f19-433e-bb63-b57167c82e19-kube-api-access-wxx7w" (OuterVolumeSpecName: "kube-api-access-wxx7w") pod "06f2f111-8f19-433e-bb63-b57167c82e19" (UID: "06f2f111-8f19-433e-bb63-b57167c82e19"). InnerVolumeSpecName "kube-api-access-wxx7w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.536557 4672 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06f2f111-8f19-433e-bb63-b57167c82e19-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.536592 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7vd5\" (UniqueName: \"kubernetes.io/projected/7508897a-e56b-444c-87c2-9d1cbc41170f-kube-api-access-z7vd5\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.536603 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxnzn\" (UniqueName: \"kubernetes.io/projected/23ac1021-6630-413d-8f59-2ee8de8b22f6-kube-api-access-jxnzn\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.536837 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxx7w\" (UniqueName: \"kubernetes.io/projected/06f2f111-8f19-433e-bb63-b57167c82e19-kube-api-access-wxx7w\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:04 crc kubenswrapper[4672]: I0217 16:23:04.536847 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9r572\" (UniqueName: \"kubernetes.io/projected/aaf29db5-1c89-453b-8632-65a429e68374-kube-api-access-9r572\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:05 crc kubenswrapper[4672]: I0217 16:23:05.188506 4672 generic.go:334] "Generic (PLEG): container finished" podID="53f2f5a7-17a6-4145-8b1b-f15d7a5309ac" containerID="6829a2817ffaca776f8c96dbc0fbf6f639a72fe83e59f4e17909f920b30ced24" exitCode=0 Feb 17 16:23:05 crc kubenswrapper[4672]: I0217 16:23:05.189072 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-p284t" 
event={"ID":"53f2f5a7-17a6-4145-8b1b-f15d7a5309ac","Type":"ContainerDied","Data":"6829a2817ffaca776f8c96dbc0fbf6f639a72fe83e59f4e17909f920b30ced24"} Feb 17 16:23:05 crc kubenswrapper[4672]: I0217 16:23:05.193585 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ba30-account-create-update-mkpmg" Feb 17 16:23:05 crc kubenswrapper[4672]: I0217 16:23:05.195837 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-dgcpz" Feb 17 16:23:05 crc kubenswrapper[4672]: I0217 16:23:05.196783 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-bjfpp" Feb 17 16:23:05 crc kubenswrapper[4672]: I0217 16:23:05.196852 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-8bfa-account-create-update-v499m" Feb 17 16:23:05 crc kubenswrapper[4672]: I0217 16:23:05.196924 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-hrfjv" Feb 17 16:23:05 crc kubenswrapper[4672]: I0217 16:23:05.196787 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-7fmd6" event={"ID":"6a856bff-885d-46ef-8ce3-300c89cfae1f","Type":"ContainerStarted","Data":"5ff3e15f00da0b56a090b787d113eb457f30c50193e1cb0c76e09b12bba8f327"} Feb 17 16:23:05 crc kubenswrapper[4672]: I0217 16:23:05.196992 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-1ccd-account-create-update-s59r2" Feb 17 16:23:05 crc kubenswrapper[4672]: I0217 16:23:05.196962 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-rxdz9" Feb 17 16:23:05 crc kubenswrapper[4672]: I0217 16:23:05.197018 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-xv2td" Feb 17 16:23:05 crc kubenswrapper[4672]: I0217 16:23:05.296978 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-7fmd6" podStartSLOduration=4.031530891 podStartE2EDuration="9.296961263s" podCreationTimestamp="2026-02-17 16:22:56 +0000 UTC" firstStartedPulling="2026-02-17 16:22:58.728992122 +0000 UTC m=+1187.483080904" lastFinishedPulling="2026-02-17 16:23:03.994422544 +0000 UTC m=+1192.748511276" observedRunningTime="2026-02-17 16:23:05.257411339 +0000 UTC m=+1194.011500081" watchObservedRunningTime="2026-02-17 16:23:05.296961263 +0000 UTC m=+1194.051049995" Feb 17 16:23:06 crc kubenswrapper[4672]: I0217 16:23:06.725280 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-p284t" Feb 17 16:23:06 crc kubenswrapper[4672]: I0217 16:23:06.877726 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53f2f5a7-17a6-4145-8b1b-f15d7a5309ac-config-data\") pod \"53f2f5a7-17a6-4145-8b1b-f15d7a5309ac\" (UID: \"53f2f5a7-17a6-4145-8b1b-f15d7a5309ac\") " Feb 17 16:23:06 crc kubenswrapper[4672]: I0217 16:23:06.877819 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f2f5a7-17a6-4145-8b1b-f15d7a5309ac-combined-ca-bundle\") pod \"53f2f5a7-17a6-4145-8b1b-f15d7a5309ac\" (UID: \"53f2f5a7-17a6-4145-8b1b-f15d7a5309ac\") " Feb 17 16:23:06 crc kubenswrapper[4672]: I0217 16:23:06.877942 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bltzw\" (UniqueName: \"kubernetes.io/projected/53f2f5a7-17a6-4145-8b1b-f15d7a5309ac-kube-api-access-bltzw\") pod \"53f2f5a7-17a6-4145-8b1b-f15d7a5309ac\" (UID: \"53f2f5a7-17a6-4145-8b1b-f15d7a5309ac\") " Feb 17 16:23:06 crc kubenswrapper[4672]: I0217 
16:23:06.878130 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/53f2f5a7-17a6-4145-8b1b-f15d7a5309ac-db-sync-config-data\") pod \"53f2f5a7-17a6-4145-8b1b-f15d7a5309ac\" (UID: \"53f2f5a7-17a6-4145-8b1b-f15d7a5309ac\") " Feb 17 16:23:06 crc kubenswrapper[4672]: I0217 16:23:06.883872 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53f2f5a7-17a6-4145-8b1b-f15d7a5309ac-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "53f2f5a7-17a6-4145-8b1b-f15d7a5309ac" (UID: "53f2f5a7-17a6-4145-8b1b-f15d7a5309ac"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:23:06 crc kubenswrapper[4672]: I0217 16:23:06.886569 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53f2f5a7-17a6-4145-8b1b-f15d7a5309ac-kube-api-access-bltzw" (OuterVolumeSpecName: "kube-api-access-bltzw") pod "53f2f5a7-17a6-4145-8b1b-f15d7a5309ac" (UID: "53f2f5a7-17a6-4145-8b1b-f15d7a5309ac"). InnerVolumeSpecName "kube-api-access-bltzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:06 crc kubenswrapper[4672]: I0217 16:23:06.908618 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53f2f5a7-17a6-4145-8b1b-f15d7a5309ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53f2f5a7-17a6-4145-8b1b-f15d7a5309ac" (UID: "53f2f5a7-17a6-4145-8b1b-f15d7a5309ac"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:23:06 crc kubenswrapper[4672]: I0217 16:23:06.939903 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53f2f5a7-17a6-4145-8b1b-f15d7a5309ac-config-data" (OuterVolumeSpecName: "config-data") pod "53f2f5a7-17a6-4145-8b1b-f15d7a5309ac" (UID: "53f2f5a7-17a6-4145-8b1b-f15d7a5309ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:23:06 crc kubenswrapper[4672]: I0217 16:23:06.980432 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53f2f5a7-17a6-4145-8b1b-f15d7a5309ac-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:06 crc kubenswrapper[4672]: I0217 16:23:06.980464 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f2f5a7-17a6-4145-8b1b-f15d7a5309ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:06 crc kubenswrapper[4672]: I0217 16:23:06.980476 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bltzw\" (UniqueName: \"kubernetes.io/projected/53f2f5a7-17a6-4145-8b1b-f15d7a5309ac-kube-api-access-bltzw\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:06 crc kubenswrapper[4672]: I0217 16:23:06.980485 4672 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/53f2f5a7-17a6-4145-8b1b-f15d7a5309ac-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.210944 4672 generic.go:334] "Generic (PLEG): container finished" podID="6a856bff-885d-46ef-8ce3-300c89cfae1f" containerID="5ff3e15f00da0b56a090b787d113eb457f30c50193e1cb0c76e09b12bba8f327" exitCode=0 Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.211012 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-7fmd6" 
event={"ID":"6a856bff-885d-46ef-8ce3-300c89cfae1f","Type":"ContainerDied","Data":"5ff3e15f00da0b56a090b787d113eb457f30c50193e1cb0c76e09b12bba8f327"} Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.212277 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-p284t" event={"ID":"53f2f5a7-17a6-4145-8b1b-f15d7a5309ac","Type":"ContainerDied","Data":"2b56796a92ebf963bc897e4aa91891fee0a562aff1ca6180e797b4ac37af6bff"} Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.212300 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b56796a92ebf963bc897e4aa91891fee0a562aff1ca6180e797b4ac37af6bff" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.212343 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-p284t" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.607263 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-hmdg9"] Feb 17 16:23:07 crc kubenswrapper[4672]: E0217 16:23:07.607829 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaf29db5-1c89-453b-8632-65a429e68374" containerName="mariadb-database-create" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.607845 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaf29db5-1c89-453b-8632-65a429e68374" containerName="mariadb-database-create" Feb 17 16:23:07 crc kubenswrapper[4672]: E0217 16:23:07.607857 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb24c242-f54a-40ba-8c88-f3dbed463abd" containerName="mariadb-database-create" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.607864 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb24c242-f54a-40ba-8c88-f3dbed463abd" containerName="mariadb-database-create" Feb 17 16:23:07 crc kubenswrapper[4672]: E0217 16:23:07.607877 4672 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="aad1e993-49a2-4984-9bf0-11c2a4190fd3" containerName="mariadb-account-create-update" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.607884 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad1e993-49a2-4984-9bf0-11c2a4190fd3" containerName="mariadb-account-create-update" Feb 17 16:23:07 crc kubenswrapper[4672]: E0217 16:23:07.607898 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23ac1021-6630-413d-8f59-2ee8de8b22f6" containerName="mariadb-database-create" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.607903 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="23ac1021-6630-413d-8f59-2ee8de8b22f6" containerName="mariadb-database-create" Feb 17 16:23:07 crc kubenswrapper[4672]: E0217 16:23:07.607918 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76e79bb4-99d3-4b9b-b496-f50ec996f5d4" containerName="mariadb-account-create-update" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.607924 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="76e79bb4-99d3-4b9b-b496-f50ec996f5d4" containerName="mariadb-account-create-update" Feb 17 16:23:07 crc kubenswrapper[4672]: E0217 16:23:07.607933 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfcd0d26-153d-463d-b38b-35b9fdbe6a53" containerName="mariadb-account-create-update" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.607938 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfcd0d26-153d-463d-b38b-35b9fdbe6a53" containerName="mariadb-account-create-update" Feb 17 16:23:07 crc kubenswrapper[4672]: E0217 16:23:07.607950 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7508897a-e56b-444c-87c2-9d1cbc41170f" containerName="mariadb-account-create-update" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.607958 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="7508897a-e56b-444c-87c2-9d1cbc41170f" containerName="mariadb-account-create-update" Feb 17 
16:23:07 crc kubenswrapper[4672]: E0217 16:23:07.607968 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab366b61-1428-4608-8ac3-2bb8063e88f2" containerName="mariadb-account-create-update" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.607973 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab366b61-1428-4608-8ac3-2bb8063e88f2" containerName="mariadb-account-create-update" Feb 17 16:23:07 crc kubenswrapper[4672]: E0217 16:23:07.607983 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06f2f111-8f19-433e-bb63-b57167c82e19" containerName="mariadb-database-create" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.607989 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="06f2f111-8f19-433e-bb63-b57167c82e19" containerName="mariadb-database-create" Feb 17 16:23:07 crc kubenswrapper[4672]: E0217 16:23:07.608001 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53f2f5a7-17a6-4145-8b1b-f15d7a5309ac" containerName="glance-db-sync" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.608007 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="53f2f5a7-17a6-4145-8b1b-f15d7a5309ac" containerName="glance-db-sync" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.608155 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="7508897a-e56b-444c-87c2-9d1cbc41170f" containerName="mariadb-account-create-update" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.608165 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="aad1e993-49a2-4984-9bf0-11c2a4190fd3" containerName="mariadb-account-create-update" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.608179 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb24c242-f54a-40ba-8c88-f3dbed463abd" containerName="mariadb-database-create" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.608190 4672 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="aaf29db5-1c89-453b-8632-65a429e68374" containerName="mariadb-database-create" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.608197 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab366b61-1428-4608-8ac3-2bb8063e88f2" containerName="mariadb-account-create-update" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.608209 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="53f2f5a7-17a6-4145-8b1b-f15d7a5309ac" containerName="glance-db-sync" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.608218 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="76e79bb4-99d3-4b9b-b496-f50ec996f5d4" containerName="mariadb-account-create-update" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.608227 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfcd0d26-153d-463d-b38b-35b9fdbe6a53" containerName="mariadb-account-create-update" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.608235 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="23ac1021-6630-413d-8f59-2ee8de8b22f6" containerName="mariadb-database-create" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.608244 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="06f2f111-8f19-433e-bb63-b57167c82e19" containerName="mariadb-database-create" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.610560 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-hmdg9" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.644723 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-hmdg9"] Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.691813 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ec642e7-9918-41e7-a64b-b9d84832d0d5-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-hmdg9\" (UID: \"7ec642e7-9918-41e7-a64b-b9d84832d0d5\") " pod="openstack/dnsmasq-dns-7ff5475cc9-hmdg9" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.691917 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ec642e7-9918-41e7-a64b-b9d84832d0d5-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-hmdg9\" (UID: \"7ec642e7-9918-41e7-a64b-b9d84832d0d5\") " pod="openstack/dnsmasq-dns-7ff5475cc9-hmdg9" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.691961 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ec642e7-9918-41e7-a64b-b9d84832d0d5-config\") pod \"dnsmasq-dns-7ff5475cc9-hmdg9\" (UID: \"7ec642e7-9918-41e7-a64b-b9d84832d0d5\") " pod="openstack/dnsmasq-dns-7ff5475cc9-hmdg9" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.691979 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ec642e7-9918-41e7-a64b-b9d84832d0d5-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-hmdg9\" (UID: \"7ec642e7-9918-41e7-a64b-b9d84832d0d5\") " pod="openstack/dnsmasq-dns-7ff5475cc9-hmdg9" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.692115 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ec642e7-9918-41e7-a64b-b9d84832d0d5-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-hmdg9\" (UID: \"7ec642e7-9918-41e7-a64b-b9d84832d0d5\") " pod="openstack/dnsmasq-dns-7ff5475cc9-hmdg9" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.692168 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45tzw\" (UniqueName: \"kubernetes.io/projected/7ec642e7-9918-41e7-a64b-b9d84832d0d5-kube-api-access-45tzw\") pod \"dnsmasq-dns-7ff5475cc9-hmdg9\" (UID: \"7ec642e7-9918-41e7-a64b-b9d84832d0d5\") " pod="openstack/dnsmasq-dns-7ff5475cc9-hmdg9" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.794429 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ec642e7-9918-41e7-a64b-b9d84832d0d5-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-hmdg9\" (UID: \"7ec642e7-9918-41e7-a64b-b9d84832d0d5\") " pod="openstack/dnsmasq-dns-7ff5475cc9-hmdg9" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.794484 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45tzw\" (UniqueName: \"kubernetes.io/projected/7ec642e7-9918-41e7-a64b-b9d84832d0d5-kube-api-access-45tzw\") pod \"dnsmasq-dns-7ff5475cc9-hmdg9\" (UID: \"7ec642e7-9918-41e7-a64b-b9d84832d0d5\") " pod="openstack/dnsmasq-dns-7ff5475cc9-hmdg9" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.794568 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ec642e7-9918-41e7-a64b-b9d84832d0d5-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-hmdg9\" (UID: \"7ec642e7-9918-41e7-a64b-b9d84832d0d5\") " pod="openstack/dnsmasq-dns-7ff5475cc9-hmdg9" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.794639 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ec642e7-9918-41e7-a64b-b9d84832d0d5-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-hmdg9\" (UID: \"7ec642e7-9918-41e7-a64b-b9d84832d0d5\") " pod="openstack/dnsmasq-dns-7ff5475cc9-hmdg9" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.794698 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ec642e7-9918-41e7-a64b-b9d84832d0d5-config\") pod \"dnsmasq-dns-7ff5475cc9-hmdg9\" (UID: \"7ec642e7-9918-41e7-a64b-b9d84832d0d5\") " pod="openstack/dnsmasq-dns-7ff5475cc9-hmdg9" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.794720 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ec642e7-9918-41e7-a64b-b9d84832d0d5-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-hmdg9\" (UID: \"7ec642e7-9918-41e7-a64b-b9d84832d0d5\") " pod="openstack/dnsmasq-dns-7ff5475cc9-hmdg9" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.795687 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ec642e7-9918-41e7-a64b-b9d84832d0d5-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-hmdg9\" (UID: \"7ec642e7-9918-41e7-a64b-b9d84832d0d5\") " pod="openstack/dnsmasq-dns-7ff5475cc9-hmdg9" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.795757 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ec642e7-9918-41e7-a64b-b9d84832d0d5-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-hmdg9\" (UID: \"7ec642e7-9918-41e7-a64b-b9d84832d0d5\") " pod="openstack/dnsmasq-dns-7ff5475cc9-hmdg9" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.796226 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7ec642e7-9918-41e7-a64b-b9d84832d0d5-config\") pod \"dnsmasq-dns-7ff5475cc9-hmdg9\" (UID: \"7ec642e7-9918-41e7-a64b-b9d84832d0d5\") " pod="openstack/dnsmasq-dns-7ff5475cc9-hmdg9" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.796233 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ec642e7-9918-41e7-a64b-b9d84832d0d5-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-hmdg9\" (UID: \"7ec642e7-9918-41e7-a64b-b9d84832d0d5\") " pod="openstack/dnsmasq-dns-7ff5475cc9-hmdg9" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.796837 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ec642e7-9918-41e7-a64b-b9d84832d0d5-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-hmdg9\" (UID: \"7ec642e7-9918-41e7-a64b-b9d84832d0d5\") " pod="openstack/dnsmasq-dns-7ff5475cc9-hmdg9" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.811688 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45tzw\" (UniqueName: \"kubernetes.io/projected/7ec642e7-9918-41e7-a64b-b9d84832d0d5-kube-api-access-45tzw\") pod \"dnsmasq-dns-7ff5475cc9-hmdg9\" (UID: \"7ec642e7-9918-41e7-a64b-b9d84832d0d5\") " pod="openstack/dnsmasq-dns-7ff5475cc9-hmdg9" Feb 17 16:23:07 crc kubenswrapper[4672]: I0217 16:23:07.928694 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-hmdg9" Feb 17 16:23:08 crc kubenswrapper[4672]: I0217 16:23:08.224625 4672 generic.go:334] "Generic (PLEG): container finished" podID="e1107a46-b916-4fe7-b4cc-a6576f242ec0" containerID="03396567c1af726acf7d0604a1d7a4c409400fa7b45c8b13cb609e93edc134a7" exitCode=0 Feb 17 16:23:08 crc kubenswrapper[4672]: I0217 16:23:08.224710 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e1107a46-b916-4fe7-b4cc-a6576f242ec0","Type":"ContainerDied","Data":"03396567c1af726acf7d0604a1d7a4c409400fa7b45c8b13cb609e93edc134a7"} Feb 17 16:23:08 crc kubenswrapper[4672]: I0217 16:23:08.434049 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-hmdg9"] Feb 17 16:23:08 crc kubenswrapper[4672]: W0217 16:23:08.461543 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ec642e7_9918_41e7_a64b_b9d84832d0d5.slice/crio-d03dc688d7d42b0bbdee5eea9f17d264bd9784f092ac041cb269ab7f0934f085 WatchSource:0}: Error finding container d03dc688d7d42b0bbdee5eea9f17d264bd9784f092ac041cb269ab7f0934f085: Status 404 returned error can't find the container with id d03dc688d7d42b0bbdee5eea9f17d264bd9784f092ac041cb269ab7f0934f085 Feb 17 16:23:08 crc kubenswrapper[4672]: I0217 16:23:08.690427 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-7fmd6" Feb 17 16:23:08 crc kubenswrapper[4672]: I0217 16:23:08.720170 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xn2w\" (UniqueName: \"kubernetes.io/projected/6a856bff-885d-46ef-8ce3-300c89cfae1f-kube-api-access-9xn2w\") pod \"6a856bff-885d-46ef-8ce3-300c89cfae1f\" (UID: \"6a856bff-885d-46ef-8ce3-300c89cfae1f\") " Feb 17 16:23:08 crc kubenswrapper[4672]: I0217 16:23:08.720346 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a856bff-885d-46ef-8ce3-300c89cfae1f-config-data\") pod \"6a856bff-885d-46ef-8ce3-300c89cfae1f\" (UID: \"6a856bff-885d-46ef-8ce3-300c89cfae1f\") " Feb 17 16:23:08 crc kubenswrapper[4672]: I0217 16:23:08.720371 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a856bff-885d-46ef-8ce3-300c89cfae1f-combined-ca-bundle\") pod \"6a856bff-885d-46ef-8ce3-300c89cfae1f\" (UID: \"6a856bff-885d-46ef-8ce3-300c89cfae1f\") " Feb 17 16:23:08 crc kubenswrapper[4672]: I0217 16:23:08.727495 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a856bff-885d-46ef-8ce3-300c89cfae1f-kube-api-access-9xn2w" (OuterVolumeSpecName: "kube-api-access-9xn2w") pod "6a856bff-885d-46ef-8ce3-300c89cfae1f" (UID: "6a856bff-885d-46ef-8ce3-300c89cfae1f"). InnerVolumeSpecName "kube-api-access-9xn2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:08 crc kubenswrapper[4672]: I0217 16:23:08.752635 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a856bff-885d-46ef-8ce3-300c89cfae1f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a856bff-885d-46ef-8ce3-300c89cfae1f" (UID: "6a856bff-885d-46ef-8ce3-300c89cfae1f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:23:08 crc kubenswrapper[4672]: I0217 16:23:08.786436 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a856bff-885d-46ef-8ce3-300c89cfae1f-config-data" (OuterVolumeSpecName: "config-data") pod "6a856bff-885d-46ef-8ce3-300c89cfae1f" (UID: "6a856bff-885d-46ef-8ce3-300c89cfae1f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:23:08 crc kubenswrapper[4672]: I0217 16:23:08.821953 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a856bff-885d-46ef-8ce3-300c89cfae1f-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:08 crc kubenswrapper[4672]: I0217 16:23:08.821985 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a856bff-885d-46ef-8ce3-300c89cfae1f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:08 crc kubenswrapper[4672]: I0217 16:23:08.822000 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xn2w\" (UniqueName: \"kubernetes.io/projected/6a856bff-885d-46ef-8ce3-300c89cfae1f-kube-api-access-9xn2w\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:09 crc kubenswrapper[4672]: I0217 16:23:09.233926 4672 generic.go:334] "Generic (PLEG): container finished" podID="7ec642e7-9918-41e7-a64b-b9d84832d0d5" containerID="9f303414e07f072c5df4e6a3650e1baa13ed934c6b004acbe73ce85f5382e73a" exitCode=0 Feb 17 16:23:09 crc kubenswrapper[4672]: I0217 16:23:09.234019 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-hmdg9" event={"ID":"7ec642e7-9918-41e7-a64b-b9d84832d0d5","Type":"ContainerDied","Data":"9f303414e07f072c5df4e6a3650e1baa13ed934c6b004acbe73ce85f5382e73a"} Feb 17 16:23:09 crc kubenswrapper[4672]: I0217 16:23:09.234069 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7ff5475cc9-hmdg9" event={"ID":"7ec642e7-9918-41e7-a64b-b9d84832d0d5","Type":"ContainerStarted","Data":"d03dc688d7d42b0bbdee5eea9f17d264bd9784f092ac041cb269ab7f0934f085"} Feb 17 16:23:09 crc kubenswrapper[4672]: I0217 16:23:09.236076 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e1107a46-b916-4fe7-b4cc-a6576f242ec0","Type":"ContainerStarted","Data":"78e6c284b6506d37da275dbe428fdb9dc4cfc41dfef97035e668afce7168acf3"} Feb 17 16:23:09 crc kubenswrapper[4672]: I0217 16:23:09.240023 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-7fmd6" event={"ID":"6a856bff-885d-46ef-8ce3-300c89cfae1f","Type":"ContainerDied","Data":"fb07326d265dc08b0cef27e1557ded964b16a53a9325eb2ce38b47af73b283b6"} Feb 17 16:23:09 crc kubenswrapper[4672]: I0217 16:23:09.241401 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb07326d265dc08b0cef27e1557ded964b16a53a9325eb2ce38b47af73b283b6" Feb 17 16:23:09 crc kubenswrapper[4672]: I0217 16:23:09.240114 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-7fmd6" Feb 17 16:23:09 crc kubenswrapper[4672]: I0217 16:23:09.972980 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-hmdg9"] Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.054734 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-sld74"] Feb 17 16:23:10 crc kubenswrapper[4672]: E0217 16:23:10.055146 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a856bff-885d-46ef-8ce3-300c89cfae1f" containerName="keystone-db-sync" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.055163 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a856bff-885d-46ef-8ce3-300c89cfae1f" containerName="keystone-db-sync" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.055375 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a856bff-885d-46ef-8ce3-300c89cfae1f" containerName="keystone-db-sync" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.056399 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-sld74" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.075080 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-59nrk"] Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.076217 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-59nrk" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.084196 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.084416 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.084647 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.084756 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.084850 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2hf9g" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.116201 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-sld74"] Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.132570 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-59nrk"] Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.148799 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bbcec81-911b-4091-89b5-8b6fb58ceed0-combined-ca-bundle\") pod \"keystone-bootstrap-59nrk\" (UID: \"8bbcec81-911b-4091-89b5-8b6fb58ceed0\") " pod="openstack/keystone-bootstrap-59nrk" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.148847 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a98c242-d0bc-4671-9945-b94da6491983-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-sld74\" (UID: \"4a98c242-d0bc-4671-9945-b94da6491983\") " 
pod="openstack/dnsmasq-dns-5c5cc7c5ff-sld74" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.148882 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a98c242-d0bc-4671-9945-b94da6491983-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-sld74\" (UID: \"4a98c242-d0bc-4671-9945-b94da6491983\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-sld74" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.148899 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdq8p\" (UniqueName: \"kubernetes.io/projected/8bbcec81-911b-4091-89b5-8b6fb58ceed0-kube-api-access-pdq8p\") pod \"keystone-bootstrap-59nrk\" (UID: \"8bbcec81-911b-4091-89b5-8b6fb58ceed0\") " pod="openstack/keystone-bootstrap-59nrk" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.148913 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8bbcec81-911b-4091-89b5-8b6fb58ceed0-credential-keys\") pod \"keystone-bootstrap-59nrk\" (UID: \"8bbcec81-911b-4091-89b5-8b6fb58ceed0\") " pod="openstack/keystone-bootstrap-59nrk" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.148968 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8bbcec81-911b-4091-89b5-8b6fb58ceed0-fernet-keys\") pod \"keystone-bootstrap-59nrk\" (UID: \"8bbcec81-911b-4091-89b5-8b6fb58ceed0\") " pod="openstack/keystone-bootstrap-59nrk" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.148985 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a98c242-d0bc-4671-9945-b94da6491983-config\") pod \"dnsmasq-dns-5c5cc7c5ff-sld74\" (UID: \"4a98c242-d0bc-4671-9945-b94da6491983\") " 
pod="openstack/dnsmasq-dns-5c5cc7c5ff-sld74" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.149005 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a98c242-d0bc-4671-9945-b94da6491983-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-sld74\" (UID: \"4a98c242-d0bc-4671-9945-b94da6491983\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-sld74" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.149026 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bbcec81-911b-4091-89b5-8b6fb58ceed0-scripts\") pod \"keystone-bootstrap-59nrk\" (UID: \"8bbcec81-911b-4091-89b5-8b6fb58ceed0\") " pod="openstack/keystone-bootstrap-59nrk" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.149047 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bbcec81-911b-4091-89b5-8b6fb58ceed0-config-data\") pod \"keystone-bootstrap-59nrk\" (UID: \"8bbcec81-911b-4091-89b5-8b6fb58ceed0\") " pod="openstack/keystone-bootstrap-59nrk" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.149068 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a98c242-d0bc-4671-9945-b94da6491983-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-sld74\" (UID: \"4a98c242-d0bc-4671-9945-b94da6491983\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-sld74" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.149089 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66sbx\" (UniqueName: \"kubernetes.io/projected/4a98c242-d0bc-4671-9945-b94da6491983-kube-api-access-66sbx\") pod \"dnsmasq-dns-5c5cc7c5ff-sld74\" (UID: 
\"4a98c242-d0bc-4671-9945-b94da6491983\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-sld74" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.207429 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-sk2p2"] Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.210046 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-sk2p2" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.218810 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.219036 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-4qb54" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.219186 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.227391 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-sk2p2"] Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.252564 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af496dd6-1cd8-4f50-b4e0-b96466c6eac4-combined-ca-bundle\") pod \"neutron-db-sync-sk2p2\" (UID: \"af496dd6-1cd8-4f50-b4e0-b96466c6eac4\") " pod="openstack/neutron-db-sync-sk2p2" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.252629 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8bbcec81-911b-4091-89b5-8b6fb58ceed0-fernet-keys\") pod \"keystone-bootstrap-59nrk\" (UID: \"8bbcec81-911b-4091-89b5-8b6fb58ceed0\") " pod="openstack/keystone-bootstrap-59nrk" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.252653 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/4a98c242-d0bc-4671-9945-b94da6491983-config\") pod \"dnsmasq-dns-5c5cc7c5ff-sld74\" (UID: \"4a98c242-d0bc-4671-9945-b94da6491983\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-sld74" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.252672 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a98c242-d0bc-4671-9945-b94da6491983-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-sld74\" (UID: \"4a98c242-d0bc-4671-9945-b94da6491983\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-sld74" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.252701 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bbcec81-911b-4091-89b5-8b6fb58ceed0-scripts\") pod \"keystone-bootstrap-59nrk\" (UID: \"8bbcec81-911b-4091-89b5-8b6fb58ceed0\") " pod="openstack/keystone-bootstrap-59nrk" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.252720 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/af496dd6-1cd8-4f50-b4e0-b96466c6eac4-config\") pod \"neutron-db-sync-sk2p2\" (UID: \"af496dd6-1cd8-4f50-b4e0-b96466c6eac4\") " pod="openstack/neutron-db-sync-sk2p2" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.252749 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bbcec81-911b-4091-89b5-8b6fb58ceed0-config-data\") pod \"keystone-bootstrap-59nrk\" (UID: \"8bbcec81-911b-4091-89b5-8b6fb58ceed0\") " pod="openstack/keystone-bootstrap-59nrk" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.252770 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a98c242-d0bc-4671-9945-b94da6491983-ovsdbserver-sb\") pod 
\"dnsmasq-dns-5c5cc7c5ff-sld74\" (UID: \"4a98c242-d0bc-4671-9945-b94da6491983\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-sld74" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.252793 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66sbx\" (UniqueName: \"kubernetes.io/projected/4a98c242-d0bc-4671-9945-b94da6491983-kube-api-access-66sbx\") pod \"dnsmasq-dns-5c5cc7c5ff-sld74\" (UID: \"4a98c242-d0bc-4671-9945-b94da6491983\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-sld74" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.252851 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dcwf\" (UniqueName: \"kubernetes.io/projected/af496dd6-1cd8-4f50-b4e0-b96466c6eac4-kube-api-access-2dcwf\") pod \"neutron-db-sync-sk2p2\" (UID: \"af496dd6-1cd8-4f50-b4e0-b96466c6eac4\") " pod="openstack/neutron-db-sync-sk2p2" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.252878 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bbcec81-911b-4091-89b5-8b6fb58ceed0-combined-ca-bundle\") pod \"keystone-bootstrap-59nrk\" (UID: \"8bbcec81-911b-4091-89b5-8b6fb58ceed0\") " pod="openstack/keystone-bootstrap-59nrk" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.252895 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a98c242-d0bc-4671-9945-b94da6491983-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-sld74\" (UID: \"4a98c242-d0bc-4671-9945-b94da6491983\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-sld74" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.252924 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a98c242-d0bc-4671-9945-b94da6491983-dns-svc\") pod 
\"dnsmasq-dns-5c5cc7c5ff-sld74\" (UID: \"4a98c242-d0bc-4671-9945-b94da6491983\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-sld74" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.252939 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdq8p\" (UniqueName: \"kubernetes.io/projected/8bbcec81-911b-4091-89b5-8b6fb58ceed0-kube-api-access-pdq8p\") pod \"keystone-bootstrap-59nrk\" (UID: \"8bbcec81-911b-4091-89b5-8b6fb58ceed0\") " pod="openstack/keystone-bootstrap-59nrk" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.252953 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8bbcec81-911b-4091-89b5-8b6fb58ceed0-credential-keys\") pod \"keystone-bootstrap-59nrk\" (UID: \"8bbcec81-911b-4091-89b5-8b6fb58ceed0\") " pod="openstack/keystone-bootstrap-59nrk" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.256237 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a98c242-d0bc-4671-9945-b94da6491983-config\") pod \"dnsmasq-dns-5c5cc7c5ff-sld74\" (UID: \"4a98c242-d0bc-4671-9945-b94da6491983\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-sld74" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.259132 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a98c242-d0bc-4671-9945-b94da6491983-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-sld74\" (UID: \"4a98c242-d0bc-4671-9945-b94da6491983\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-sld74" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.259795 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a98c242-d0bc-4671-9945-b94da6491983-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-sld74\" (UID: \"4a98c242-d0bc-4671-9945-b94da6491983\") " 
pod="openstack/dnsmasq-dns-5c5cc7c5ff-sld74" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.260192 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-sync-scpk5"] Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.260304 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a98c242-d0bc-4671-9945-b94da6491983-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-sld74\" (UID: \"4a98c242-d0bc-4671-9945-b94da6491983\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-sld74" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.261460 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-scpk5" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.262440 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-hmdg9" event={"ID":"7ec642e7-9918-41e7-a64b-b9d84832d0d5","Type":"ContainerStarted","Data":"48380b4aa1a3f0d40f57388c4dbc876142478edd7c475e03e58a03298d3086b8"} Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.263378 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7ff5475cc9-hmdg9" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.270748 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-qptlj" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.270935 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.271160 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.271293 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 
16:23:10.272766 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bbcec81-911b-4091-89b5-8b6fb58ceed0-config-data\") pod \"keystone-bootstrap-59nrk\" (UID: \"8bbcec81-911b-4091-89b5-8b6fb58ceed0\") " pod="openstack/keystone-bootstrap-59nrk" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.277173 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8bbcec81-911b-4091-89b5-8b6fb58ceed0-fernet-keys\") pod \"keystone-bootstrap-59nrk\" (UID: \"8bbcec81-911b-4091-89b5-8b6fb58ceed0\") " pod="openstack/keystone-bootstrap-59nrk" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.278263 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8bbcec81-911b-4091-89b5-8b6fb58ceed0-credential-keys\") pod \"keystone-bootstrap-59nrk\" (UID: \"8bbcec81-911b-4091-89b5-8b6fb58ceed0\") " pod="openstack/keystone-bootstrap-59nrk" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.279053 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bbcec81-911b-4091-89b5-8b6fb58ceed0-combined-ca-bundle\") pod \"keystone-bootstrap-59nrk\" (UID: \"8bbcec81-911b-4091-89b5-8b6fb58ceed0\") " pod="openstack/keystone-bootstrap-59nrk" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.282142 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a98c242-d0bc-4671-9945-b94da6491983-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-sld74\" (UID: \"4a98c242-d0bc-4671-9945-b94da6491983\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-sld74" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.286487 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/8bbcec81-911b-4091-89b5-8b6fb58ceed0-scripts\") pod \"keystone-bootstrap-59nrk\" (UID: \"8bbcec81-911b-4091-89b5-8b6fb58ceed0\") " pod="openstack/keystone-bootstrap-59nrk" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.299273 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdq8p\" (UniqueName: \"kubernetes.io/projected/8bbcec81-911b-4091-89b5-8b6fb58ceed0-kube-api-access-pdq8p\") pod \"keystone-bootstrap-59nrk\" (UID: \"8bbcec81-911b-4091-89b5-8b6fb58ceed0\") " pod="openstack/keystone-bootstrap-59nrk" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.308255 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66sbx\" (UniqueName: \"kubernetes.io/projected/4a98c242-d0bc-4671-9945-b94da6491983-kube-api-access-66sbx\") pod \"dnsmasq-dns-5c5cc7c5ff-sld74\" (UID: \"4a98c242-d0bc-4671-9945-b94da6491983\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-sld74" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.335996 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.337985 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.346973 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.347327 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.350901 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-scpk5"] Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.355470 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dcwf\" (UniqueName: \"kubernetes.io/projected/af496dd6-1cd8-4f50-b4e0-b96466c6eac4-kube-api-access-2dcwf\") pod \"neutron-db-sync-sk2p2\" (UID: \"af496dd6-1cd8-4f50-b4e0-b96466c6eac4\") " pod="openstack/neutron-db-sync-sk2p2" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.355683 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2-certs\") pod \"cloudkitty-db-sync-scpk5\" (UID: \"fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2\") " pod="openstack/cloudkitty-db-sync-scpk5" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.355792 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2-config-data\") pod \"cloudkitty-db-sync-scpk5\" (UID: \"fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2\") " pod="openstack/cloudkitty-db-sync-scpk5" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.355903 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af496dd6-1cd8-4f50-b4e0-b96466c6eac4-combined-ca-bundle\") pod \"neutron-db-sync-sk2p2\" 
(UID: \"af496dd6-1cd8-4f50-b4e0-b96466c6eac4\") " pod="openstack/neutron-db-sync-sk2p2" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.356016 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/af496dd6-1cd8-4f50-b4e0-b96466c6eac4-config\") pod \"neutron-db-sync-sk2p2\" (UID: \"af496dd6-1cd8-4f50-b4e0-b96466c6eac4\") " pod="openstack/neutron-db-sync-sk2p2" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.356080 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2-combined-ca-bundle\") pod \"cloudkitty-db-sync-scpk5\" (UID: \"fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2\") " pod="openstack/cloudkitty-db-sync-scpk5" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.356171 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k62j9\" (UniqueName: \"kubernetes.io/projected/fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2-kube-api-access-k62j9\") pod \"cloudkitty-db-sync-scpk5\" (UID: \"fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2\") " pod="openstack/cloudkitty-db-sync-scpk5" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.356267 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2-scripts\") pod \"cloudkitty-db-sync-scpk5\" (UID: \"fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2\") " pod="openstack/cloudkitty-db-sync-scpk5" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.365824 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af496dd6-1cd8-4f50-b4e0-b96466c6eac4-combined-ca-bundle\") pod \"neutron-db-sync-sk2p2\" (UID: \"af496dd6-1cd8-4f50-b4e0-b96466c6eac4\") " 
pod="openstack/neutron-db-sync-sk2p2" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.370061 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/af496dd6-1cd8-4f50-b4e0-b96466c6eac4-config\") pod \"neutron-db-sync-sk2p2\" (UID: \"af496dd6-1cd8-4f50-b4e0-b96466c6eac4\") " pod="openstack/neutron-db-sync-sk2p2" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.371869 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-4vtt8"] Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.373030 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-4vtt8" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.384060 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-hfmxf" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.384538 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.384740 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.395095 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-sld74" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.404952 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-4vtt8"] Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.417311 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-59nrk" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.431672 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.435547 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dcwf\" (UniqueName: \"kubernetes.io/projected/af496dd6-1cd8-4f50-b4e0-b96466c6eac4-kube-api-access-2dcwf\") pod \"neutron-db-sync-sk2p2\" (UID: \"af496dd6-1cd8-4f50-b4e0-b96466c6eac4\") " pod="openstack/neutron-db-sync-sk2p2" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.438856 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-5spsr"] Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.440401 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-5spsr" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.450552 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-sld74"] Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.458536 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk6bj\" (UniqueName: \"kubernetes.io/projected/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-kube-api-access-bk6bj\") pod \"ceilometer-0\" (UID: \"46707458-3c2e-4f29-bda9-dd5ebc8b60cb\") " pod="openstack/ceilometer-0" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.458576 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/352f61db-51f9-425a-9ee2-78f681033626-scripts\") pod \"cinder-db-sync-4vtt8\" (UID: \"352f61db-51f9-425a-9ee2-78f681033626\") " pod="openstack/cinder-db-sync-4vtt8" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.458626 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"certs\" (UniqueName: \"kubernetes.io/projected/fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2-certs\") pod \"cloudkitty-db-sync-scpk5\" (UID: \"fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2\") " pod="openstack/cloudkitty-db-sync-scpk5" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.458650 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/352f61db-51f9-425a-9ee2-78f681033626-combined-ca-bundle\") pod \"cinder-db-sync-4vtt8\" (UID: \"352f61db-51f9-425a-9ee2-78f681033626\") " pod="openstack/cinder-db-sync-4vtt8" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.458675 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2-config-data\") pod \"cloudkitty-db-sync-scpk5\" (UID: \"fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2\") " pod="openstack/cloudkitty-db-sync-scpk5" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.458693 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-run-httpd\") pod \"ceilometer-0\" (UID: \"46707458-3c2e-4f29-bda9-dd5ebc8b60cb\") " pod="openstack/ceilometer-0" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.458718 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjvkc\" (UniqueName: \"kubernetes.io/projected/352f61db-51f9-425a-9ee2-78f681033626-kube-api-access-wjvkc\") pod \"cinder-db-sync-4vtt8\" (UID: \"352f61db-51f9-425a-9ee2-78f681033626\") " pod="openstack/cinder-db-sync-4vtt8" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.458756 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-config-data\") pod \"ceilometer-0\" (UID: \"46707458-3c2e-4f29-bda9-dd5ebc8b60cb\") " pod="openstack/ceilometer-0" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.458788 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/352f61db-51f9-425a-9ee2-78f681033626-config-data\") pod \"cinder-db-sync-4vtt8\" (UID: \"352f61db-51f9-425a-9ee2-78f681033626\") " pod="openstack/cinder-db-sync-4vtt8" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.458811 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/352f61db-51f9-425a-9ee2-78f681033626-etc-machine-id\") pod \"cinder-db-sync-4vtt8\" (UID: \"352f61db-51f9-425a-9ee2-78f681033626\") " pod="openstack/cinder-db-sync-4vtt8" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.458831 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-scripts\") pod \"ceilometer-0\" (UID: \"46707458-3c2e-4f29-bda9-dd5ebc8b60cb\") " pod="openstack/ceilometer-0" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.458853 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2-combined-ca-bundle\") pod \"cloudkitty-db-sync-scpk5\" (UID: \"fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2\") " pod="openstack/cloudkitty-db-sync-scpk5" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.458872 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-sg-core-conf-yaml\") pod \"ceilometer-0\" 
(UID: \"46707458-3c2e-4f29-bda9-dd5ebc8b60cb\") " pod="openstack/ceilometer-0" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.458893 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k62j9\" (UniqueName: \"kubernetes.io/projected/fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2-kube-api-access-k62j9\") pod \"cloudkitty-db-sync-scpk5\" (UID: \"fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2\") " pod="openstack/cloudkitty-db-sync-scpk5" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.458921 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2-scripts\") pod \"cloudkitty-db-sync-scpk5\" (UID: \"fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2\") " pod="openstack/cloudkitty-db-sync-scpk5" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.458939 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-log-httpd\") pod \"ceilometer-0\" (UID: \"46707458-3c2e-4f29-bda9-dd5ebc8b60cb\") " pod="openstack/ceilometer-0" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.458954 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"46707458-3c2e-4f29-bda9-dd5ebc8b60cb\") " pod="openstack/ceilometer-0" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.458977 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/352f61db-51f9-425a-9ee2-78f681033626-db-sync-config-data\") pod \"cinder-db-sync-4vtt8\" (UID: \"352f61db-51f9-425a-9ee2-78f681033626\") " pod="openstack/cinder-db-sync-4vtt8" Feb 17 16:23:10 
crc kubenswrapper[4672]: I0217 16:23:10.464047 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-865d7" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.464247 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-5spsr"] Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.464308 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.469821 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-w8fzc"] Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.472487 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-w8fzc" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.473014 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7ff5475cc9-hmdg9" podStartSLOduration=3.472992665 podStartE2EDuration="3.472992665s" podCreationTimestamp="2026-02-17 16:23:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:23:10.384975291 +0000 UTC m=+1199.139064023" watchObservedRunningTime="2026-02-17 16:23:10.472992665 +0000 UTC m=+1199.227081397" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.490830 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-xn68c" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.491192 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.491388 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.542539 4672 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-txwzd"] Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.543767 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-txwzd" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.562376 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/352f61db-51f9-425a-9ee2-78f681033626-db-sync-config-data\") pod \"cinder-db-sync-4vtt8\" (UID: \"352f61db-51f9-425a-9ee2-78f681033626\") " pod="openstack/cinder-db-sync-4vtt8" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.562600 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zn85\" (UniqueName: \"kubernetes.io/projected/c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6-kube-api-access-2zn85\") pod \"dnsmasq-dns-8b5c85b87-txwzd\" (UID: \"c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-txwzd" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.562675 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk6bj\" (UniqueName: \"kubernetes.io/projected/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-kube-api-access-bk6bj\") pod \"ceilometer-0\" (UID: \"46707458-3c2e-4f29-bda9-dd5ebc8b60cb\") " pod="openstack/ceilometer-0" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.562761 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/352f61db-51f9-425a-9ee2-78f681033626-scripts\") pod \"cinder-db-sync-4vtt8\" (UID: \"352f61db-51f9-425a-9ee2-78f681033626\") " pod="openstack/cinder-db-sync-4vtt8" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.562827 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/649147ca-1dbd-4260-8d7c-8077186059f1-logs\") pod \"placement-db-sync-w8fzc\" (UID: \"649147ca-1dbd-4260-8d7c-8077186059f1\") " pod="openstack/placement-db-sync-w8fzc" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.562989 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-txwzd\" (UID: \"c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-txwzd" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.563026 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a68f7c0-293c-434c-8e63-c6855ba4d822-combined-ca-bundle\") pod \"barbican-db-sync-5spsr\" (UID: \"4a68f7c0-293c-434c-8e63-c6855ba4d822\") " pod="openstack/barbican-db-sync-5spsr" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.563071 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/352f61db-51f9-425a-9ee2-78f681033626-combined-ca-bundle\") pod \"cinder-db-sync-4vtt8\" (UID: \"352f61db-51f9-425a-9ee2-78f681033626\") " pod="openstack/cinder-db-sync-4vtt8" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.563095 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/649147ca-1dbd-4260-8d7c-8077186059f1-combined-ca-bundle\") pod \"placement-db-sync-w8fzc\" (UID: \"649147ca-1dbd-4260-8d7c-8077186059f1\") " pod="openstack/placement-db-sync-w8fzc" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.563130 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/649147ca-1dbd-4260-8d7c-8077186059f1-config-data\") pod \"placement-db-sync-w8fzc\" (UID: \"649147ca-1dbd-4260-8d7c-8077186059f1\") " pod="openstack/placement-db-sync-w8fzc" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.563161 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-run-httpd\") pod \"ceilometer-0\" (UID: \"46707458-3c2e-4f29-bda9-dd5ebc8b60cb\") " pod="openstack/ceilometer-0" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.563211 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjvkc\" (UniqueName: \"kubernetes.io/projected/352f61db-51f9-425a-9ee2-78f681033626-kube-api-access-wjvkc\") pod \"cinder-db-sync-4vtt8\" (UID: \"352f61db-51f9-425a-9ee2-78f681033626\") " pod="openstack/cinder-db-sync-4vtt8" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.563273 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-txwzd\" (UID: \"c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-txwzd" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.563321 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-config-data\") pod \"ceilometer-0\" (UID: \"46707458-3c2e-4f29-bda9-dd5ebc8b60cb\") " pod="openstack/ceilometer-0" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.563347 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/649147ca-1dbd-4260-8d7c-8077186059f1-scripts\") pod \"placement-db-sync-w8fzc\" (UID: 
\"649147ca-1dbd-4260-8d7c-8077186059f1\") " pod="openstack/placement-db-sync-w8fzc" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.563386 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4a68f7c0-293c-434c-8e63-c6855ba4d822-db-sync-config-data\") pod \"barbican-db-sync-5spsr\" (UID: \"4a68f7c0-293c-434c-8e63-c6855ba4d822\") " pod="openstack/barbican-db-sync-5spsr" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.563419 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/352f61db-51f9-425a-9ee2-78f681033626-config-data\") pod \"cinder-db-sync-4vtt8\" (UID: \"352f61db-51f9-425a-9ee2-78f681033626\") " pod="openstack/cinder-db-sync-4vtt8" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.563460 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/352f61db-51f9-425a-9ee2-78f681033626-etc-machine-id\") pod \"cinder-db-sync-4vtt8\" (UID: \"352f61db-51f9-425a-9ee2-78f681033626\") " pod="openstack/cinder-db-sync-4vtt8" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.563475 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-txwzd\" (UID: \"c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-txwzd" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.563495 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqk9l\" (UniqueName: \"kubernetes.io/projected/649147ca-1dbd-4260-8d7c-8077186059f1-kube-api-access-gqk9l\") pod \"placement-db-sync-w8fzc\" (UID: \"649147ca-1dbd-4260-8d7c-8077186059f1\") 
" pod="openstack/placement-db-sync-w8fzc" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.563544 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-scripts\") pod \"ceilometer-0\" (UID: \"46707458-3c2e-4f29-bda9-dd5ebc8b60cb\") " pod="openstack/ceilometer-0" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.563580 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"46707458-3c2e-4f29-bda9-dd5ebc8b60cb\") " pod="openstack/ceilometer-0" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.563650 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-txwzd\" (UID: \"c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-txwzd" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.563666 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5vsh\" (UniqueName: \"kubernetes.io/projected/4a68f7c0-293c-434c-8e63-c6855ba4d822-kube-api-access-l5vsh\") pod \"barbican-db-sync-5spsr\" (UID: \"4a68f7c0-293c-434c-8e63-c6855ba4d822\") " pod="openstack/barbican-db-sync-5spsr" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.563683 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-log-httpd\") pod \"ceilometer-0\" (UID: \"46707458-3c2e-4f29-bda9-dd5ebc8b60cb\") " pod="openstack/ceilometer-0" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.563697 4672 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"46707458-3c2e-4f29-bda9-dd5ebc8b60cb\") " pod="openstack/ceilometer-0" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.563721 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6-config\") pod \"dnsmasq-dns-8b5c85b87-txwzd\" (UID: \"c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-txwzd" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.575188 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-log-httpd\") pod \"ceilometer-0\" (UID: \"46707458-3c2e-4f29-bda9-dd5ebc8b60cb\") " pod="openstack/ceilometer-0" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.575568 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-sk2p2" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.586589 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-run-httpd\") pod \"ceilometer-0\" (UID: \"46707458-3c2e-4f29-bda9-dd5ebc8b60cb\") " pod="openstack/ceilometer-0" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.587481 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/352f61db-51f9-425a-9ee2-78f681033626-etc-machine-id\") pod \"cinder-db-sync-4vtt8\" (UID: \"352f61db-51f9-425a-9ee2-78f681033626\") " pod="openstack/cinder-db-sync-4vtt8" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.615684 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-w8fzc"] Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.666742 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-txwzd\" (UID: \"c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-txwzd" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.666811 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/649147ca-1dbd-4260-8d7c-8077186059f1-scripts\") pod \"placement-db-sync-w8fzc\" (UID: \"649147ca-1dbd-4260-8d7c-8077186059f1\") " pod="openstack/placement-db-sync-w8fzc" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.666837 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4a68f7c0-293c-434c-8e63-c6855ba4d822-db-sync-config-data\") pod \"barbican-db-sync-5spsr\" (UID: 
\"4a68f7c0-293c-434c-8e63-c6855ba4d822\") " pod="openstack/barbican-db-sync-5spsr" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.666877 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-txwzd\" (UID: \"c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-txwzd" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.666895 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqk9l\" (UniqueName: \"kubernetes.io/projected/649147ca-1dbd-4260-8d7c-8077186059f1-kube-api-access-gqk9l\") pod \"placement-db-sync-w8fzc\" (UID: \"649147ca-1dbd-4260-8d7c-8077186059f1\") " pod="openstack/placement-db-sync-w8fzc" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.666957 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-txwzd\" (UID: \"c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-txwzd" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.666973 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5vsh\" (UniqueName: \"kubernetes.io/projected/4a68f7c0-293c-434c-8e63-c6855ba4d822-kube-api-access-l5vsh\") pod \"barbican-db-sync-5spsr\" (UID: \"4a68f7c0-293c-434c-8e63-c6855ba4d822\") " pod="openstack/barbican-db-sync-5spsr" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.666998 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6-config\") pod \"dnsmasq-dns-8b5c85b87-txwzd\" (UID: \"c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6\") " 
pod="openstack/dnsmasq-dns-8b5c85b87-txwzd" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.667019 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zn85\" (UniqueName: \"kubernetes.io/projected/c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6-kube-api-access-2zn85\") pod \"dnsmasq-dns-8b5c85b87-txwzd\" (UID: \"c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-txwzd" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.667045 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/649147ca-1dbd-4260-8d7c-8077186059f1-logs\") pod \"placement-db-sync-w8fzc\" (UID: \"649147ca-1dbd-4260-8d7c-8077186059f1\") " pod="openstack/placement-db-sync-w8fzc" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.667079 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-txwzd\" (UID: \"c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-txwzd" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.667097 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a68f7c0-293c-434c-8e63-c6855ba4d822-combined-ca-bundle\") pod \"barbican-db-sync-5spsr\" (UID: \"4a68f7c0-293c-434c-8e63-c6855ba4d822\") " pod="openstack/barbican-db-sync-5spsr" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.667124 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/649147ca-1dbd-4260-8d7c-8077186059f1-combined-ca-bundle\") pod \"placement-db-sync-w8fzc\" (UID: \"649147ca-1dbd-4260-8d7c-8077186059f1\") " pod="openstack/placement-db-sync-w8fzc" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 
16:23:10.667146 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/649147ca-1dbd-4260-8d7c-8077186059f1-config-data\") pod \"placement-db-sync-w8fzc\" (UID: \"649147ca-1dbd-4260-8d7c-8077186059f1\") " pod="openstack/placement-db-sync-w8fzc" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.668228 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2-scripts\") pod \"cloudkitty-db-sync-scpk5\" (UID: \"fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2\") " pod="openstack/cloudkitty-db-sync-scpk5" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.670131 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2-combined-ca-bundle\") pod \"cloudkitty-db-sync-scpk5\" (UID: \"fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2\") " pod="openstack/cloudkitty-db-sync-scpk5" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.670572 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/649147ca-1dbd-4260-8d7c-8077186059f1-logs\") pod \"placement-db-sync-w8fzc\" (UID: \"649147ca-1dbd-4260-8d7c-8077186059f1\") " pod="openstack/placement-db-sync-w8fzc" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.671339 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2-config-data\") pod \"cloudkitty-db-sync-scpk5\" (UID: \"fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2\") " pod="openstack/cloudkitty-db-sync-scpk5" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.672318 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k62j9\" (UniqueName: 
\"kubernetes.io/projected/fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2-kube-api-access-k62j9\") pod \"cloudkitty-db-sync-scpk5\" (UID: \"fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2\") " pod="openstack/cloudkitty-db-sync-scpk5" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.674413 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2-certs\") pod \"cloudkitty-db-sync-scpk5\" (UID: \"fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2\") " pod="openstack/cloudkitty-db-sync-scpk5" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.680001 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-txwzd\" (UID: \"c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-txwzd" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.680613 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-txwzd\" (UID: \"c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-txwzd" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.688371 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/352f61db-51f9-425a-9ee2-78f681033626-scripts\") pod \"cinder-db-sync-4vtt8\" (UID: \"352f61db-51f9-425a-9ee2-78f681033626\") " pod="openstack/cinder-db-sync-4vtt8" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.688854 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-scripts\") pod \"ceilometer-0\" (UID: \"46707458-3c2e-4f29-bda9-dd5ebc8b60cb\") " 
pod="openstack/ceilometer-0" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.690127 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-txwzd\" (UID: \"c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-txwzd" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.690447 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/352f61db-51f9-425a-9ee2-78f681033626-db-sync-config-data\") pod \"cinder-db-sync-4vtt8\" (UID: \"352f61db-51f9-425a-9ee2-78f681033626\") " pod="openstack/cinder-db-sync-4vtt8" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.690715 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/649147ca-1dbd-4260-8d7c-8077186059f1-scripts\") pod \"placement-db-sync-w8fzc\" (UID: \"649147ca-1dbd-4260-8d7c-8077186059f1\") " pod="openstack/placement-db-sync-w8fzc" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.729161 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-txwzd\" (UID: \"c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-txwzd" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.732899 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6-config\") pod \"dnsmasq-dns-8b5c85b87-txwzd\" (UID: \"c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-txwzd" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.733142 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/352f61db-51f9-425a-9ee2-78f681033626-combined-ca-bundle\") pod \"cinder-db-sync-4vtt8\" (UID: \"352f61db-51f9-425a-9ee2-78f681033626\") " pod="openstack/cinder-db-sync-4vtt8" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.733500 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5vsh\" (UniqueName: \"kubernetes.io/projected/4a68f7c0-293c-434c-8e63-c6855ba4d822-kube-api-access-l5vsh\") pod \"barbican-db-sync-5spsr\" (UID: \"4a68f7c0-293c-434c-8e63-c6855ba4d822\") " pod="openstack/barbican-db-sync-5spsr" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.739208 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/649147ca-1dbd-4260-8d7c-8077186059f1-combined-ca-bundle\") pod \"placement-db-sync-w8fzc\" (UID: \"649147ca-1dbd-4260-8d7c-8077186059f1\") " pod="openstack/placement-db-sync-w8fzc" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.739580 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4a68f7c0-293c-434c-8e63-c6855ba4d822-db-sync-config-data\") pod \"barbican-db-sync-5spsr\" (UID: \"4a68f7c0-293c-434c-8e63-c6855ba4d822\") " pod="openstack/barbican-db-sync-5spsr" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.740452 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjvkc\" (UniqueName: \"kubernetes.io/projected/352f61db-51f9-425a-9ee2-78f681033626-kube-api-access-wjvkc\") pod \"cinder-db-sync-4vtt8\" (UID: \"352f61db-51f9-425a-9ee2-78f681033626\") " pod="openstack/cinder-db-sync-4vtt8" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.742276 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-config-data\") pod 
\"ceilometer-0\" (UID: \"46707458-3c2e-4f29-bda9-dd5ebc8b60cb\") " pod="openstack/ceilometer-0" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.744793 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"46707458-3c2e-4f29-bda9-dd5ebc8b60cb\") " pod="openstack/ceilometer-0" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.745371 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk6bj\" (UniqueName: \"kubernetes.io/projected/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-kube-api-access-bk6bj\") pod \"ceilometer-0\" (UID: \"46707458-3c2e-4f29-bda9-dd5ebc8b60cb\") " pod="openstack/ceilometer-0" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.746015 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/649147ca-1dbd-4260-8d7c-8077186059f1-config-data\") pod \"placement-db-sync-w8fzc\" (UID: \"649147ca-1dbd-4260-8d7c-8077186059f1\") " pod="openstack/placement-db-sync-w8fzc" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.754921 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"46707458-3c2e-4f29-bda9-dd5ebc8b60cb\") " pod="openstack/ceilometer-0" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.771208 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/352f61db-51f9-425a-9ee2-78f681033626-config-data\") pod \"cinder-db-sync-4vtt8\" (UID: \"352f61db-51f9-425a-9ee2-78f681033626\") " pod="openstack/cinder-db-sync-4vtt8" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.775864 4672 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-gqk9l\" (UniqueName: \"kubernetes.io/projected/649147ca-1dbd-4260-8d7c-8077186059f1-kube-api-access-gqk9l\") pod \"placement-db-sync-w8fzc\" (UID: \"649147ca-1dbd-4260-8d7c-8077186059f1\") " pod="openstack/placement-db-sync-w8fzc" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.788106 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-txwzd"] Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.793157 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a68f7c0-293c-434c-8e63-c6855ba4d822-combined-ca-bundle\") pod \"barbican-db-sync-5spsr\" (UID: \"4a68f7c0-293c-434c-8e63-c6855ba4d822\") " pod="openstack/barbican-db-sync-5spsr" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.817576 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zn85\" (UniqueName: \"kubernetes.io/projected/c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6-kube-api-access-2zn85\") pod \"dnsmasq-dns-8b5c85b87-txwzd\" (UID: \"c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-txwzd" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.846541 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-w8fzc" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.847723 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-scpk5" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.885337 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.908754 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-4vtt8" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.909322 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-txwzd" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.929244 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.930904 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.932834 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.935638 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.935838 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qkr6q" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.935976 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.951046 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:23:10 crc kubenswrapper[4672]: I0217 16:23:10.954011 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-5spsr" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.134371 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/146627ed-88c2-4845-8f17-a52e47fbb924-config-data\") pod \"glance-default-internal-api-0\" (UID: \"146627ed-88c2-4845-8f17-a52e47fbb924\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.134717 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/146627ed-88c2-4845-8f17-a52e47fbb924-logs\") pod \"glance-default-internal-api-0\" (UID: \"146627ed-88c2-4845-8f17-a52e47fbb924\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.134756 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/146627ed-88c2-4845-8f17-a52e47fbb924-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"146627ed-88c2-4845-8f17-a52e47fbb924\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.134774 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/146627ed-88c2-4845-8f17-a52e47fbb924-scripts\") pod \"glance-default-internal-api-0\" (UID: \"146627ed-88c2-4845-8f17-a52e47fbb924\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.134915 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/146627ed-88c2-4845-8f17-a52e47fbb924-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: 
\"146627ed-88c2-4845-8f17-a52e47fbb924\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.135040 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7\") pod \"glance-default-internal-api-0\" (UID: \"146627ed-88c2-4845-8f17-a52e47fbb924\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.135134 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/146627ed-88c2-4845-8f17-a52e47fbb924-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"146627ed-88c2-4845-8f17-a52e47fbb924\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.135159 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctv2k\" (UniqueName: \"kubernetes.io/projected/146627ed-88c2-4845-8f17-a52e47fbb924-kube-api-access-ctv2k\") pod \"glance-default-internal-api-0\" (UID: \"146627ed-88c2-4845-8f17-a52e47fbb924\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.236349 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/146627ed-88c2-4845-8f17-a52e47fbb924-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"146627ed-88c2-4845-8f17-a52e47fbb924\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.236387 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/146627ed-88c2-4845-8f17-a52e47fbb924-scripts\") pod \"glance-default-internal-api-0\" (UID: \"146627ed-88c2-4845-8f17-a52e47fbb924\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.236435 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/146627ed-88c2-4845-8f17-a52e47fbb924-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"146627ed-88c2-4845-8f17-a52e47fbb924\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.236478 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7\") pod \"glance-default-internal-api-0\" (UID: \"146627ed-88c2-4845-8f17-a52e47fbb924\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.236530 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/146627ed-88c2-4845-8f17-a52e47fbb924-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"146627ed-88c2-4845-8f17-a52e47fbb924\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.236548 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctv2k\" (UniqueName: \"kubernetes.io/projected/146627ed-88c2-4845-8f17-a52e47fbb924-kube-api-access-ctv2k\") pod \"glance-default-internal-api-0\" (UID: \"146627ed-88c2-4845-8f17-a52e47fbb924\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.236623 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/146627ed-88c2-4845-8f17-a52e47fbb924-config-data\") pod \"glance-default-internal-api-0\" (UID: \"146627ed-88c2-4845-8f17-a52e47fbb924\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.236642 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/146627ed-88c2-4845-8f17-a52e47fbb924-logs\") pod \"glance-default-internal-api-0\" (UID: \"146627ed-88c2-4845-8f17-a52e47fbb924\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.237087 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/146627ed-88c2-4845-8f17-a52e47fbb924-logs\") pod \"glance-default-internal-api-0\" (UID: \"146627ed-88c2-4845-8f17-a52e47fbb924\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.237175 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/146627ed-88c2-4845-8f17-a52e47fbb924-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"146627ed-88c2-4845-8f17-a52e47fbb924\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.241952 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/146627ed-88c2-4845-8f17-a52e47fbb924-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"146627ed-88c2-4845-8f17-a52e47fbb924\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.244696 4672 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.244718 4672 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7\") pod \"glance-default-internal-api-0\" (UID: \"146627ed-88c2-4845-8f17-a52e47fbb924\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f554e28ce6891cf21f3390de6086eedf40118aa722324f2faa0d19b98e9f8a02/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.247529 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/146627ed-88c2-4845-8f17-a52e47fbb924-config-data\") pod \"glance-default-internal-api-0\" (UID: \"146627ed-88c2-4845-8f17-a52e47fbb924\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.252062 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/146627ed-88c2-4845-8f17-a52e47fbb924-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"146627ed-88c2-4845-8f17-a52e47fbb924\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.252683 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/146627ed-88c2-4845-8f17-a52e47fbb924-scripts\") pod \"glance-default-internal-api-0\" (UID: \"146627ed-88c2-4845-8f17-a52e47fbb924\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.297234 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctv2k\" (UniqueName: \"kubernetes.io/projected/146627ed-88c2-4845-8f17-a52e47fbb924-kube-api-access-ctv2k\") pod 
\"glance-default-internal-api-0\" (UID: \"146627ed-88c2-4845-8f17-a52e47fbb924\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.314194 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.317174 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.326335 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.326587 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.346703 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.353693 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7ff5475cc9-hmdg9" podUID="7ec642e7-9918-41e7-a64b-b9d84832d0d5" containerName="dnsmasq-dns" containerID="cri-o://48380b4aa1a3f0d40f57388c4dbc876142478edd7c475e03e58a03298d3086b8" gracePeriod=10 Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.354049 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e1107a46-b916-4fe7-b4cc-a6576f242ec0","Type":"ContainerStarted","Data":"a22e94015d2c091dc49c7965984eb52a99824d795462e7c00e7707d9dd3ea02b"} Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.354722 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7\") pod \"glance-default-internal-api-0\" 
(UID: \"146627ed-88c2-4845-8f17-a52e47fbb924\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.385983 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-sld74"] Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.445590 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/efd8ef8f-d736-47d8-a135-c076b3c97b33-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"efd8ef8f-d736-47d8-a135-c076b3c97b33\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.446094 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efd8ef8f-d736-47d8-a135-c076b3c97b33-scripts\") pod \"glance-default-external-api-0\" (UID: \"efd8ef8f-d736-47d8-a135-c076b3c97b33\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.446248 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-303d4d28-face-45fe-b658-7e12a6040182\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-303d4d28-face-45fe-b658-7e12a6040182\") pod \"glance-default-external-api-0\" (UID: \"efd8ef8f-d736-47d8-a135-c076b3c97b33\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.446379 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efd8ef8f-d736-47d8-a135-c076b3c97b33-config-data\") pod \"glance-default-external-api-0\" (UID: \"efd8ef8f-d736-47d8-a135-c076b3c97b33\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.446427 4672 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efd8ef8f-d736-47d8-a135-c076b3c97b33-logs\") pod \"glance-default-external-api-0\" (UID: \"efd8ef8f-d736-47d8-a135-c076b3c97b33\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.446548 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd8ef8f-d736-47d8-a135-c076b3c97b33-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"efd8ef8f-d736-47d8-a135-c076b3c97b33\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.446609 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/efd8ef8f-d736-47d8-a135-c076b3c97b33-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"efd8ef8f-d736-47d8-a135-c076b3c97b33\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.446756 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqg5l\" (UniqueName: \"kubernetes.io/projected/efd8ef8f-d736-47d8-a135-c076b3c97b33-kube-api-access-pqg5l\") pod \"glance-default-external-api-0\" (UID: \"efd8ef8f-d736-47d8-a135-c076b3c97b33\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: W0217 16:23:11.487634 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8bbcec81_911b_4091_89b5_8b6fb58ceed0.slice/crio-a55ce6fa6bd295a3f5f800e96a2893caf428ef874ea27ca8a8e2400f10e0c29d WatchSource:0}: Error finding container a55ce6fa6bd295a3f5f800e96a2893caf428ef874ea27ca8a8e2400f10e0c29d: Status 404 returned 
error can't find the container with id a55ce6fa6bd295a3f5f800e96a2893caf428ef874ea27ca8a8e2400f10e0c29d Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.498481 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-59nrk"] Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.549162 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efd8ef8f-d736-47d8-a135-c076b3c97b33-config-data\") pod \"glance-default-external-api-0\" (UID: \"efd8ef8f-d736-47d8-a135-c076b3c97b33\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.549202 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efd8ef8f-d736-47d8-a135-c076b3c97b33-logs\") pod \"glance-default-external-api-0\" (UID: \"efd8ef8f-d736-47d8-a135-c076b3c97b33\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.549245 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd8ef8f-d736-47d8-a135-c076b3c97b33-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"efd8ef8f-d736-47d8-a135-c076b3c97b33\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.549279 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/efd8ef8f-d736-47d8-a135-c076b3c97b33-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"efd8ef8f-d736-47d8-a135-c076b3c97b33\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.549304 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqg5l\" (UniqueName: 
\"kubernetes.io/projected/efd8ef8f-d736-47d8-a135-c076b3c97b33-kube-api-access-pqg5l\") pod \"glance-default-external-api-0\" (UID: \"efd8ef8f-d736-47d8-a135-c076b3c97b33\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.549338 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/efd8ef8f-d736-47d8-a135-c076b3c97b33-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"efd8ef8f-d736-47d8-a135-c076b3c97b33\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.549401 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efd8ef8f-d736-47d8-a135-c076b3c97b33-scripts\") pod \"glance-default-external-api-0\" (UID: \"efd8ef8f-d736-47d8-a135-c076b3c97b33\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.549450 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-303d4d28-face-45fe-b658-7e12a6040182\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-303d4d28-face-45fe-b658-7e12a6040182\") pod \"glance-default-external-api-0\" (UID: \"efd8ef8f-d736-47d8-a135-c076b3c97b33\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.553402 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efd8ef8f-d736-47d8-a135-c076b3c97b33-config-data\") pod \"glance-default-external-api-0\" (UID: \"efd8ef8f-d736-47d8-a135-c076b3c97b33\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.553718 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/efd8ef8f-d736-47d8-a135-c076b3c97b33-logs\") pod \"glance-default-external-api-0\" (UID: \"efd8ef8f-d736-47d8-a135-c076b3c97b33\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.557729 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/efd8ef8f-d736-47d8-a135-c076b3c97b33-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"efd8ef8f-d736-47d8-a135-c076b3c97b33\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.558185 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/efd8ef8f-d736-47d8-a135-c076b3c97b33-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"efd8ef8f-d736-47d8-a135-c076b3c97b33\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.558719 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efd8ef8f-d736-47d8-a135-c076b3c97b33-scripts\") pod \"glance-default-external-api-0\" (UID: \"efd8ef8f-d736-47d8-a135-c076b3c97b33\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.562326 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd8ef8f-d736-47d8-a135-c076b3c97b33-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"efd8ef8f-d736-47d8-a135-c076b3c97b33\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.574023 4672 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.574063 4672 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-303d4d28-face-45fe-b658-7e12a6040182\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-303d4d28-face-45fe-b658-7e12a6040182\") pod \"glance-default-external-api-0\" (UID: \"efd8ef8f-d736-47d8-a135-c076b3c97b33\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d02cc0527f533ee65b155f740f514c1487916eea5ba6e0a075365c01e7203db4/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.582198 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.582346 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqg5l\" (UniqueName: \"kubernetes.io/projected/efd8ef8f-d736-47d8-a135-c076b3c97b33-kube-api-access-pqg5l\") pod \"glance-default-external-api-0\" (UID: \"efd8ef8f-d736-47d8-a135-c076b3c97b33\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.644868 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-sk2p2"] Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.657051 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-scpk5"] Feb 17 16:23:11 crc kubenswrapper[4672]: W0217 16:23:11.671192 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf496dd6_1cd8_4f50_b4e0_b96466c6eac4.slice/crio-fbcde7de0313dd9e66129fd366dcd9c46ee3ea6a8b2a23a13569b2577685e23b WatchSource:0}: Error finding container fbcde7de0313dd9e66129fd366dcd9c46ee3ea6a8b2a23a13569b2577685e23b: Status 404 returned error can't find the container with id 
fbcde7de0313dd9e66129fd366dcd9c46ee3ea6a8b2a23a13569b2577685e23b Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.681365 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-303d4d28-face-45fe-b658-7e12a6040182\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-303d4d28-face-45fe-b658-7e12a6040182\") pod \"glance-default-external-api-0\" (UID: \"efd8ef8f-d736-47d8-a135-c076b3c97b33\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:11 crc kubenswrapper[4672]: W0217 16:23:11.689912 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb223fa0_5bed_4291_bc2d_3e1f6c90e6f2.slice/crio-923bfe0ff7b74505e673b35dec55ff8b807af5db0da4ef4593a1f23f433cd321 WatchSource:0}: Error finding container 923bfe0ff7b74505e673b35dec55ff8b807af5db0da4ef4593a1f23f433cd321: Status 404 returned error can't find the container with id 923bfe0ff7b74505e673b35dec55ff8b807af5db0da4ef4593a1f23f433cd321 Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.890575 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-4vtt8"] Feb 17 16:23:11 crc kubenswrapper[4672]: W0217 16:23:11.919033 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod352f61db_51f9_425a_9ee2_78f681033626.slice/crio-0d61b937fd288a2a36bbc9e01bcbc763358acd480681b440550eaa5a1657e184 WatchSource:0}: Error finding container 0d61b937fd288a2a36bbc9e01bcbc763358acd480681b440550eaa5a1657e184: Status 404 returned error can't find the container with id 0d61b937fd288a2a36bbc9e01bcbc763358acd480681b440550eaa5a1657e184 Feb 17 16:23:11 crc kubenswrapper[4672]: I0217 16:23:11.971391 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:23:12 crc kubenswrapper[4672]: W0217 16:23:12.030803 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod649147ca_1dbd_4260_8d7c_8077186059f1.slice/crio-b7dac8b3c065a1556a1deca5f7762ad45931a2b9753c71c66743afd96f2ae5a4 WatchSource:0}: Error finding container b7dac8b3c065a1556a1deca5f7762ad45931a2b9753c71c66743afd96f2ae5a4: Status 404 returned error can't find the container with id b7dac8b3c065a1556a1deca5f7762ad45931a2b9753c71c66743afd96f2ae5a4 Feb 17 16:23:12 crc kubenswrapper[4672]: I0217 16:23:12.071696 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-w8fzc"] Feb 17 16:23:12 crc kubenswrapper[4672]: I0217 16:23:12.211159 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-txwzd"] Feb 17 16:23:12 crc kubenswrapper[4672]: I0217 16:23:12.238738 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-5spsr"] Feb 17 16:23:12 crc kubenswrapper[4672]: I0217 16:23:12.285272 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:23:12 crc kubenswrapper[4672]: I0217 16:23:12.334676 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:23:12 crc kubenswrapper[4672]: I0217 16:23:12.405294 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-5spsr" event={"ID":"4a68f7c0-293c-434c-8e63-c6855ba4d822","Type":"ContainerStarted","Data":"d02223497d52a7f9ee5c5571b7b0fb08e9c3a5249684423458b897aa179f3739"} Feb 17 16:23:12 crc kubenswrapper[4672]: I0217 16:23:12.461134 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-txwzd" 
event={"ID":"c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6","Type":"ContainerStarted","Data":"4fede41a6b3d442704cc0b64a71cfcde9ecee5251694f4b5e0c64343367e5adb"} Feb 17 16:23:12 crc kubenswrapper[4672]: I0217 16:23:12.467114 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-scpk5" event={"ID":"fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2","Type":"ContainerStarted","Data":"923bfe0ff7b74505e673b35dec55ff8b807af5db0da4ef4593a1f23f433cd321"} Feb 17 16:23:12 crc kubenswrapper[4672]: I0217 16:23:12.473673 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46707458-3c2e-4f29-bda9-dd5ebc8b60cb","Type":"ContainerStarted","Data":"2d5defdc7740fe73a7d51e814c1b8c52984eafa390fafd11864047877b4412c0"} Feb 17 16:23:12 crc kubenswrapper[4672]: I0217 16:23:12.485853 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4vtt8" event={"ID":"352f61db-51f9-425a-9ee2-78f681033626","Type":"ContainerStarted","Data":"0d61b937fd288a2a36bbc9e01bcbc763358acd480681b440550eaa5a1657e184"} Feb 17 16:23:12 crc kubenswrapper[4672]: I0217 16:23:12.518318 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e1107a46-b916-4fe7-b4cc-a6576f242ec0","Type":"ContainerStarted","Data":"fc783ba0dc701f5a2eb0720d90ee48b6f93e10cb5542709baf905e4af9b702e3"} Feb 17 16:23:12 crc kubenswrapper[4672]: I0217 16:23:12.535254 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sk2p2" event={"ID":"af496dd6-1cd8-4f50-b4e0-b96466c6eac4","Type":"ContainerStarted","Data":"440037cfb042404133cd6e1415ee9574a5f59fc811a735253ecc04485a5ea597"} Feb 17 16:23:12 crc kubenswrapper[4672]: I0217 16:23:12.535302 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sk2p2" event={"ID":"af496dd6-1cd8-4f50-b4e0-b96466c6eac4","Type":"ContainerStarted","Data":"fbcde7de0313dd9e66129fd366dcd9c46ee3ea6a8b2a23a13569b2577685e23b"} 
Feb 17 16:23:12 crc kubenswrapper[4672]: I0217 16:23:12.546280 4672 generic.go:334] "Generic (PLEG): container finished" podID="4a98c242-d0bc-4671-9945-b94da6491983" containerID="1116a2e6b6453d5f1f9dd02e067b6d5eabb8ffa55ef045fde170f0a46451c770" exitCode=0 Feb 17 16:23:12 crc kubenswrapper[4672]: I0217 16:23:12.546366 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-sld74" event={"ID":"4a98c242-d0bc-4671-9945-b94da6491983","Type":"ContainerDied","Data":"1116a2e6b6453d5f1f9dd02e067b6d5eabb8ffa55ef045fde170f0a46451c770"} Feb 17 16:23:12 crc kubenswrapper[4672]: I0217 16:23:12.546390 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-sld74" event={"ID":"4a98c242-d0bc-4671-9945-b94da6491983","Type":"ContainerStarted","Data":"044fbed9fd1db9713227915f444266e2022d2dca0d16f3f3e85d0ac90f228318"} Feb 17 16:23:12 crc kubenswrapper[4672]: I0217 16:23:12.554731 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=15.554715025 podStartE2EDuration="15.554715025s" podCreationTimestamp="2026-02-17 16:22:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:23:12.552122876 +0000 UTC m=+1201.306211638" watchObservedRunningTime="2026-02-17 16:23:12.554715025 +0000 UTC m=+1201.308803757" Feb 17 16:23:12 crc kubenswrapper[4672]: I0217 16:23:12.565440 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"146627ed-88c2-4845-8f17-a52e47fbb924","Type":"ContainerStarted","Data":"2ad23bb38fd105c773b5c0bb2bbf899a624618680b94caede8a21aff7b170163"} Feb 17 16:23:12 crc kubenswrapper[4672]: I0217 16:23:12.580372 4672 generic.go:334] "Generic (PLEG): container finished" podID="7ec642e7-9918-41e7-a64b-b9d84832d0d5" 
containerID="48380b4aa1a3f0d40f57388c4dbc876142478edd7c475e03e58a03298d3086b8" exitCode=0 Feb 17 16:23:12 crc kubenswrapper[4672]: I0217 16:23:12.580447 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-hmdg9" event={"ID":"7ec642e7-9918-41e7-a64b-b9d84832d0d5","Type":"ContainerDied","Data":"48380b4aa1a3f0d40f57388c4dbc876142478edd7c475e03e58a03298d3086b8"} Feb 17 16:23:12 crc kubenswrapper[4672]: I0217 16:23:12.593218 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:12 crc kubenswrapper[4672]: I0217 16:23:12.593244 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:12 crc kubenswrapper[4672]: I0217 16:23:12.615835 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-w8fzc" event={"ID":"649147ca-1dbd-4260-8d7c-8077186059f1","Type":"ContainerStarted","Data":"b7dac8b3c065a1556a1deca5f7762ad45931a2b9753c71c66743afd96f2ae5a4"} Feb 17 16:23:12 crc kubenswrapper[4672]: I0217 16:23:12.625984 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 17 16:23:12 crc kubenswrapper[4672]: I0217 16:23:12.628060 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-59nrk" event={"ID":"8bbcec81-911b-4091-89b5-8b6fb58ceed0","Type":"ContainerStarted","Data":"9ec3b47d69ecbc02bf5535cd18c1587b2c5efea38a7090c4b5037148d6f43f52"} Feb 17 16:23:12 crc kubenswrapper[4672]: I0217 16:23:12.628084 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-59nrk" event={"ID":"8bbcec81-911b-4091-89b5-8b6fb58ceed0","Type":"ContainerStarted","Data":"a55ce6fa6bd295a3f5f800e96a2893caf428ef874ea27ca8a8e2400f10e0c29d"} Feb 17 16:23:12 crc kubenswrapper[4672]: I0217 16:23:12.652569 4672 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/neutron-db-sync-sk2p2" podStartSLOduration=2.652549947 podStartE2EDuration="2.652549947s" podCreationTimestamp="2026-02-17 16:23:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:23:12.615106099 +0000 UTC m=+1201.369194821" watchObservedRunningTime="2026-02-17 16:23:12.652549947 +0000 UTC m=+1201.406638679" Feb 17 16:23:12 crc kubenswrapper[4672]: I0217 16:23:12.721197 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-59nrk" podStartSLOduration=3.721183439 podStartE2EDuration="3.721183439s" podCreationTimestamp="2026-02-17 16:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:23:12.673385567 +0000 UTC m=+1201.427474299" watchObservedRunningTime="2026-02-17 16:23:12.721183439 +0000 UTC m=+1201.475272171" Feb 17 16:23:12 crc kubenswrapper[4672]: I0217 16:23:12.737279 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-hmdg9" Feb 17 16:23:12 crc kubenswrapper[4672]: I0217 16:23:12.819756 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ec642e7-9918-41e7-a64b-b9d84832d0d5-dns-swift-storage-0\") pod \"7ec642e7-9918-41e7-a64b-b9d84832d0d5\" (UID: \"7ec642e7-9918-41e7-a64b-b9d84832d0d5\") " Feb 17 16:23:12 crc kubenswrapper[4672]: I0217 16:23:12.819793 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ec642e7-9918-41e7-a64b-b9d84832d0d5-config\") pod \"7ec642e7-9918-41e7-a64b-b9d84832d0d5\" (UID: \"7ec642e7-9918-41e7-a64b-b9d84832d0d5\") " Feb 17 16:23:12 crc kubenswrapper[4672]: I0217 16:23:12.819868 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ec642e7-9918-41e7-a64b-b9d84832d0d5-ovsdbserver-nb\") pod \"7ec642e7-9918-41e7-a64b-b9d84832d0d5\" (UID: \"7ec642e7-9918-41e7-a64b-b9d84832d0d5\") " Feb 17 16:23:12 crc kubenswrapper[4672]: I0217 16:23:12.819949 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45tzw\" (UniqueName: \"kubernetes.io/projected/7ec642e7-9918-41e7-a64b-b9d84832d0d5-kube-api-access-45tzw\") pod \"7ec642e7-9918-41e7-a64b-b9d84832d0d5\" (UID: \"7ec642e7-9918-41e7-a64b-b9d84832d0d5\") " Feb 17 16:23:12 crc kubenswrapper[4672]: I0217 16:23:12.820000 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ec642e7-9918-41e7-a64b-b9d84832d0d5-ovsdbserver-sb\") pod \"7ec642e7-9918-41e7-a64b-b9d84832d0d5\" (UID: \"7ec642e7-9918-41e7-a64b-b9d84832d0d5\") " Feb 17 16:23:12 crc kubenswrapper[4672]: I0217 16:23:12.820065 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ec642e7-9918-41e7-a64b-b9d84832d0d5-dns-svc\") pod \"7ec642e7-9918-41e7-a64b-b9d84832d0d5\" (UID: \"7ec642e7-9918-41e7-a64b-b9d84832d0d5\") " Feb 17 16:23:12 crc kubenswrapper[4672]: I0217 16:23:12.844174 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ec642e7-9918-41e7-a64b-b9d84832d0d5-kube-api-access-45tzw" (OuterVolumeSpecName: "kube-api-access-45tzw") pod "7ec642e7-9918-41e7-a64b-b9d84832d0d5" (UID: "7ec642e7-9918-41e7-a64b-b9d84832d0d5"). InnerVolumeSpecName "kube-api-access-45tzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:12 crc kubenswrapper[4672]: I0217 16:23:12.921929 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45tzw\" (UniqueName: \"kubernetes.io/projected/7ec642e7-9918-41e7-a64b-b9d84832d0d5-kube-api-access-45tzw\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:12.996443 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:13.067311 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ec642e7-9918-41e7-a64b-b9d84832d0d5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7ec642e7-9918-41e7-a64b-b9d84832d0d5" (UID: "7ec642e7-9918-41e7-a64b-b9d84832d0d5"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:13.126586 4672 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ec642e7-9918-41e7-a64b-b9d84832d0d5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:13.169583 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ec642e7-9918-41e7-a64b-b9d84832d0d5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7ec642e7-9918-41e7-a64b-b9d84832d0d5" (UID: "7ec642e7-9918-41e7-a64b-b9d84832d0d5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:13.170554 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ec642e7-9918-41e7-a64b-b9d84832d0d5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7ec642e7-9918-41e7-a64b-b9d84832d0d5" (UID: "7ec642e7-9918-41e7-a64b-b9d84832d0d5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:13.240570 4672 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ec642e7-9918-41e7-a64b-b9d84832d0d5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:13.240623 4672 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ec642e7-9918-41e7-a64b-b9d84832d0d5-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:13.241231 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ec642e7-9918-41e7-a64b-b9d84832d0d5-config" (OuterVolumeSpecName: "config") pod "7ec642e7-9918-41e7-a64b-b9d84832d0d5" (UID: "7ec642e7-9918-41e7-a64b-b9d84832d0d5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:13.245045 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ec642e7-9918-41e7-a64b-b9d84832d0d5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7ec642e7-9918-41e7-a64b-b9d84832d0d5" (UID: "7ec642e7-9918-41e7-a64b-b9d84832d0d5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:13.342080 4672 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ec642e7-9918-41e7-a64b-b9d84832d0d5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:13.342111 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ec642e7-9918-41e7-a64b-b9d84832d0d5-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:13.425691 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-sld74"
Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:13.546394 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a98c242-d0bc-4671-9945-b94da6491983-ovsdbserver-nb\") pod \"4a98c242-d0bc-4671-9945-b94da6491983\" (UID: \"4a98c242-d0bc-4671-9945-b94da6491983\") "
Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:13.546463 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a98c242-d0bc-4671-9945-b94da6491983-config\") pod \"4a98c242-d0bc-4671-9945-b94da6491983\" (UID: \"4a98c242-d0bc-4671-9945-b94da6491983\") "
Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:13.546614 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66sbx\" (UniqueName: \"kubernetes.io/projected/4a98c242-d0bc-4671-9945-b94da6491983-kube-api-access-66sbx\") pod \"4a98c242-d0bc-4671-9945-b94da6491983\" (UID: \"4a98c242-d0bc-4671-9945-b94da6491983\") "
Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:13.546762 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a98c242-d0bc-4671-9945-b94da6491983-dns-swift-storage-0\") pod \"4a98c242-d0bc-4671-9945-b94da6491983\" (UID: \"4a98c242-d0bc-4671-9945-b94da6491983\") "
Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:13.546865 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a98c242-d0bc-4671-9945-b94da6491983-ovsdbserver-sb\") pod \"4a98c242-d0bc-4671-9945-b94da6491983\" (UID: \"4a98c242-d0bc-4671-9945-b94da6491983\") "
Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:13.546917 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a98c242-d0bc-4671-9945-b94da6491983-dns-svc\") pod \"4a98c242-d0bc-4671-9945-b94da6491983\" (UID: \"4a98c242-d0bc-4671-9945-b94da6491983\") "
Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:13.589349 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a98c242-d0bc-4671-9945-b94da6491983-kube-api-access-66sbx" (OuterVolumeSpecName: "kube-api-access-66sbx") pod "4a98c242-d0bc-4671-9945-b94da6491983" (UID: "4a98c242-d0bc-4671-9945-b94da6491983"). InnerVolumeSpecName "kube-api-access-66sbx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:13.649377 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66sbx\" (UniqueName: \"kubernetes.io/projected/4a98c242-d0bc-4671-9945-b94da6491983-kube-api-access-66sbx\") on node \"crc\" DevicePath \"\""
Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:13.688078 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:13.709813 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-hmdg9"
Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:13.711753 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-hmdg9" event={"ID":"7ec642e7-9918-41e7-a64b-b9d84832d0d5","Type":"ContainerDied","Data":"d03dc688d7d42b0bbdee5eea9f17d264bd9784f092ac041cb269ab7f0934f085"}
Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:13.711820 4672 scope.go:117] "RemoveContainer" containerID="48380b4aa1a3f0d40f57388c4dbc876142478edd7c475e03e58a03298d3086b8"
Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:13.734778 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"efd8ef8f-d736-47d8-a135-c076b3c97b33","Type":"ContainerStarted","Data":"3b37ab631e0e57dc3a89d7c632f27657e1aaaee3d0e3d645419da822f07e24a9"}
Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:13.736172 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a98c242-d0bc-4671-9945-b94da6491983-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4a98c242-d0bc-4671-9945-b94da6491983" (UID: "4a98c242-d0bc-4671-9945-b94da6491983"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:13.751731 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-sld74" event={"ID":"4a98c242-d0bc-4671-9945-b94da6491983","Type":"ContainerDied","Data":"044fbed9fd1db9713227915f444266e2022d2dca0d16f3f3e85d0ac90f228318"}
Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:13.751809 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-sld74"
Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:13.758765 4672 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a98c242-d0bc-4671-9945-b94da6491983-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:13.775429 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:13.780001 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a98c242-d0bc-4671-9945-b94da6491983-config" (OuterVolumeSpecName: "config") pod "4a98c242-d0bc-4671-9945-b94da6491983" (UID: "4a98c242-d0bc-4671-9945-b94da6491983"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:13.799854 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:13.800940 4672 generic.go:334] "Generic (PLEG): container finished" podID="c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6" containerID="72ec42bda12a322ab0d79d2b2bdb335b6a491e27fa6bf7c3927a0fbf39b7de2e" exitCode=0
Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:13.804381 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-txwzd" event={"ID":"c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6","Type":"ContainerDied","Data":"72ec42bda12a322ab0d79d2b2bdb335b6a491e27fa6bf7c3927a0fbf39b7de2e"}
Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:13.816122 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:13.816343 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a98c242-d0bc-4671-9945-b94da6491983-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4a98c242-d0bc-4671-9945-b94da6491983" (UID: "4a98c242-d0bc-4671-9945-b94da6491983"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:13.865335 4672 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a98c242-d0bc-4671-9945-b94da6491983-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:13.865365 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a98c242-d0bc-4671-9945-b94da6491983-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:13.879540 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a98c242-d0bc-4671-9945-b94da6491983-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4a98c242-d0bc-4671-9945-b94da6491983" (UID: "4a98c242-d0bc-4671-9945-b94da6491983"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:13.879963 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a98c242-d0bc-4671-9945-b94da6491983-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4a98c242-d0bc-4671-9945-b94da6491983" (UID: "4a98c242-d0bc-4671-9945-b94da6491983"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:13.967483 4672 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a98c242-d0bc-4671-9945-b94da6491983-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 17 16:23:13 crc kubenswrapper[4672]: I0217 16:23:13.967523 4672 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a98c242-d0bc-4671-9945-b94da6491983-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 17 16:23:14 crc kubenswrapper[4672]: I0217 16:23:14.024267 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-hmdg9"]
Feb 17 16:23:14 crc kubenswrapper[4672]: I0217 16:23:14.036746 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-hmdg9"]
Feb 17 16:23:14 crc kubenswrapper[4672]: I0217 16:23:14.178607 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-sld74"]
Feb 17 16:23:14 crc kubenswrapper[4672]: I0217 16:23:14.206859 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-sld74"]
Feb 17 16:23:15 crc kubenswrapper[4672]: I0217 16:23:15.963214 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a98c242-d0bc-4671-9945-b94da6491983" path="/var/lib/kubelet/pods/4a98c242-d0bc-4671-9945-b94da6491983/volumes"
Feb 17 16:23:15 crc kubenswrapper[4672]: I0217 16:23:15.964196 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ec642e7-9918-41e7-a64b-b9d84832d0d5" path="/var/lib/kubelet/pods/7ec642e7-9918-41e7-a64b-b9d84832d0d5/volumes"
Feb 17 16:23:16 crc kubenswrapper[4672]: I0217 16:23:16.224406 4672 scope.go:117] "RemoveContainer" containerID="9f303414e07f072c5df4e6a3650e1baa13ed934c6b004acbe73ce85f5382e73a"
Feb 17 16:23:16 crc kubenswrapper[4672]: I0217 16:23:16.288789 4672 scope.go:117] "RemoveContainer" containerID="1116a2e6b6453d5f1f9dd02e067b6d5eabb8ffa55ef045fde170f0a46451c770"
Feb 17 16:23:16 crc kubenswrapper[4672]: I0217 16:23:16.863669 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"efd8ef8f-d736-47d8-a135-c076b3c97b33","Type":"ContainerStarted","Data":"122b2ee228358e3e941a682c7e7fb4a5ef75a7c4d857a2ed7bc2f9265a56403d"}
Feb 17 16:23:16 crc kubenswrapper[4672]: I0217 16:23:16.871314 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-txwzd" event={"ID":"c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6","Type":"ContainerStarted","Data":"bec6e8802ffbb46e4ea68ddc569cbf88eceab6369527a15a8405f0eb1391cd4e"}
Feb 17 16:23:16 crc kubenswrapper[4672]: I0217 16:23:16.871395 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b5c85b87-txwzd"
Feb 17 16:23:16 crc kubenswrapper[4672]: I0217 16:23:16.888874 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"146627ed-88c2-4845-8f17-a52e47fbb924","Type":"ContainerStarted","Data":"5cb4a444212489abaf460b09acc40aafefeb499efee0862a014083ecea168e37"}
Feb 17 16:23:16 crc kubenswrapper[4672]: I0217 16:23:16.889150 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b5c85b87-txwzd" podStartSLOduration=6.8891327570000005 podStartE2EDuration="6.889132757s" podCreationTimestamp="2026-02-17 16:23:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:23:16.888457949 +0000 UTC m=+1205.642546681" watchObservedRunningTime="2026-02-17 16:23:16.889132757 +0000 UTC m=+1205.643221489"
Feb 17 16:23:17 crc kubenswrapper[4672]: I0217 16:23:17.919226 4672 generic.go:334] "Generic (PLEG): container finished" podID="8bbcec81-911b-4091-89b5-8b6fb58ceed0" containerID="9ec3b47d69ecbc02bf5535cd18c1587b2c5efea38a7090c4b5037148d6f43f52" exitCode=0
Feb 17 16:23:17 crc kubenswrapper[4672]: I0217 16:23:17.920319 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-59nrk" event={"ID":"8bbcec81-911b-4091-89b5-8b6fb58ceed0","Type":"ContainerDied","Data":"9ec3b47d69ecbc02bf5535cd18c1587b2c5efea38a7090c4b5037148d6f43f52"}
Feb 17 16:23:24 crc kubenswrapper[4672]: I0217 16:23:24.117050 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-59nrk"
Feb 17 16:23:24 crc kubenswrapper[4672]: I0217 16:23:24.205335 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8bbcec81-911b-4091-89b5-8b6fb58ceed0-credential-keys\") pod \"8bbcec81-911b-4091-89b5-8b6fb58ceed0\" (UID: \"8bbcec81-911b-4091-89b5-8b6fb58ceed0\") "
Feb 17 16:23:24 crc kubenswrapper[4672]: I0217 16:23:24.205433 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdq8p\" (UniqueName: \"kubernetes.io/projected/8bbcec81-911b-4091-89b5-8b6fb58ceed0-kube-api-access-pdq8p\") pod \"8bbcec81-911b-4091-89b5-8b6fb58ceed0\" (UID: \"8bbcec81-911b-4091-89b5-8b6fb58ceed0\") "
Feb 17 16:23:24 crc kubenswrapper[4672]: I0217 16:23:24.205533 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bbcec81-911b-4091-89b5-8b6fb58ceed0-config-data\") pod \"8bbcec81-911b-4091-89b5-8b6fb58ceed0\" (UID: \"8bbcec81-911b-4091-89b5-8b6fb58ceed0\") "
Feb 17 16:23:24 crc kubenswrapper[4672]: I0217 16:23:24.205567 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bbcec81-911b-4091-89b5-8b6fb58ceed0-scripts\") pod \"8bbcec81-911b-4091-89b5-8b6fb58ceed0\" (UID: \"8bbcec81-911b-4091-89b5-8b6fb58ceed0\") "
Feb 17 16:23:24 crc kubenswrapper[4672]: I0217 16:23:24.205614 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8bbcec81-911b-4091-89b5-8b6fb58ceed0-fernet-keys\") pod \"8bbcec81-911b-4091-89b5-8b6fb58ceed0\" (UID: \"8bbcec81-911b-4091-89b5-8b6fb58ceed0\") "
Feb 17 16:23:24 crc kubenswrapper[4672]: I0217 16:23:24.205906 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bbcec81-911b-4091-89b5-8b6fb58ceed0-combined-ca-bundle\") pod \"8bbcec81-911b-4091-89b5-8b6fb58ceed0\" (UID: \"8bbcec81-911b-4091-89b5-8b6fb58ceed0\") "
Feb 17 16:23:24 crc kubenswrapper[4672]: I0217 16:23:24.212042 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bbcec81-911b-4091-89b5-8b6fb58ceed0-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "8bbcec81-911b-4091-89b5-8b6fb58ceed0" (UID: "8bbcec81-911b-4091-89b5-8b6fb58ceed0"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:23:24 crc kubenswrapper[4672]: I0217 16:23:24.212640 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bbcec81-911b-4091-89b5-8b6fb58ceed0-kube-api-access-pdq8p" (OuterVolumeSpecName: "kube-api-access-pdq8p") pod "8bbcec81-911b-4091-89b5-8b6fb58ceed0" (UID: "8bbcec81-911b-4091-89b5-8b6fb58ceed0"). InnerVolumeSpecName "kube-api-access-pdq8p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:23:24 crc kubenswrapper[4672]: I0217 16:23:24.216387 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bbcec81-911b-4091-89b5-8b6fb58ceed0-scripts" (OuterVolumeSpecName: "scripts") pod "8bbcec81-911b-4091-89b5-8b6fb58ceed0" (UID: "8bbcec81-911b-4091-89b5-8b6fb58ceed0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:23:24 crc kubenswrapper[4672]: I0217 16:23:24.229706 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bbcec81-911b-4091-89b5-8b6fb58ceed0-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8bbcec81-911b-4091-89b5-8b6fb58ceed0" (UID: "8bbcec81-911b-4091-89b5-8b6fb58ceed0"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:23:24 crc kubenswrapper[4672]: I0217 16:23:24.240312 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bbcec81-911b-4091-89b5-8b6fb58ceed0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8bbcec81-911b-4091-89b5-8b6fb58ceed0" (UID: "8bbcec81-911b-4091-89b5-8b6fb58ceed0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:23:24 crc kubenswrapper[4672]: I0217 16:23:24.251016 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bbcec81-911b-4091-89b5-8b6fb58ceed0-config-data" (OuterVolumeSpecName: "config-data") pod "8bbcec81-911b-4091-89b5-8b6fb58ceed0" (UID: "8bbcec81-911b-4091-89b5-8b6fb58ceed0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:23:24 crc kubenswrapper[4672]: I0217 16:23:24.308136 4672 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8bbcec81-911b-4091-89b5-8b6fb58ceed0-credential-keys\") on node \"crc\" DevicePath \"\""
Feb 17 16:23:24 crc kubenswrapper[4672]: I0217 16:23:24.308174 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdq8p\" (UniqueName: \"kubernetes.io/projected/8bbcec81-911b-4091-89b5-8b6fb58ceed0-kube-api-access-pdq8p\") on node \"crc\" DevicePath \"\""
Feb 17 16:23:24 crc kubenswrapper[4672]: I0217 16:23:24.308190 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bbcec81-911b-4091-89b5-8b6fb58ceed0-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:23:24 crc kubenswrapper[4672]: I0217 16:23:24.308202 4672 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bbcec81-911b-4091-89b5-8b6fb58ceed0-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:23:24 crc kubenswrapper[4672]: I0217 16:23:24.308214 4672 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8bbcec81-911b-4091-89b5-8b6fb58ceed0-fernet-keys\") on node \"crc\" DevicePath \"\""
Feb 17 16:23:24 crc kubenswrapper[4672]: I0217 16:23:24.308226 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bbcec81-911b-4091-89b5-8b6fb58ceed0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:23:24 crc kubenswrapper[4672]: I0217 16:23:24.999105 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-59nrk" event={"ID":"8bbcec81-911b-4091-89b5-8b6fb58ceed0","Type":"ContainerDied","Data":"a55ce6fa6bd295a3f5f800e96a2893caf428ef874ea27ca8a8e2400f10e0c29d"}
Feb 17 16:23:24 crc kubenswrapper[4672]: I0217 16:23:24.999392 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a55ce6fa6bd295a3f5f800e96a2893caf428ef874ea27ca8a8e2400f10e0c29d"
Feb 17 16:23:24 crc kubenswrapper[4672]: I0217 16:23:24.999153 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-59nrk"
Feb 17 16:23:25 crc kubenswrapper[4672]: I0217 16:23:25.221375 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-59nrk"]
Feb 17 16:23:25 crc kubenswrapper[4672]: I0217 16:23:25.229913 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-59nrk"]
Feb 17 16:23:25 crc kubenswrapper[4672]: I0217 16:23:25.326999 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-72t4g"]
Feb 17 16:23:25 crc kubenswrapper[4672]: E0217 16:23:25.327433 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ec642e7-9918-41e7-a64b-b9d84832d0d5" containerName="init"
Feb 17 16:23:25 crc kubenswrapper[4672]: I0217 16:23:25.327460 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ec642e7-9918-41e7-a64b-b9d84832d0d5" containerName="init"
Feb 17 16:23:25 crc kubenswrapper[4672]: E0217 16:23:25.327489 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ec642e7-9918-41e7-a64b-b9d84832d0d5" containerName="dnsmasq-dns"
Feb 17 16:23:25 crc kubenswrapper[4672]: I0217 16:23:25.327501 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ec642e7-9918-41e7-a64b-b9d84832d0d5" containerName="dnsmasq-dns"
Feb 17 16:23:25 crc kubenswrapper[4672]: E0217 16:23:25.327534 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a98c242-d0bc-4671-9945-b94da6491983" containerName="init"
Feb 17 16:23:25 crc kubenswrapper[4672]: I0217 16:23:25.327544 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a98c242-d0bc-4671-9945-b94da6491983" containerName="init"
Feb 17 16:23:25 crc kubenswrapper[4672]: E0217 16:23:25.327559 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bbcec81-911b-4091-89b5-8b6fb58ceed0" containerName="keystone-bootstrap"
Feb 17 16:23:25 crc kubenswrapper[4672]: I0217 16:23:25.327566 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bbcec81-911b-4091-89b5-8b6fb58ceed0" containerName="keystone-bootstrap"
Feb 17 16:23:25 crc kubenswrapper[4672]: I0217 16:23:25.327795 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bbcec81-911b-4091-89b5-8b6fb58ceed0" containerName="keystone-bootstrap"
Feb 17 16:23:25 crc kubenswrapper[4672]: I0217 16:23:25.327829 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ec642e7-9918-41e7-a64b-b9d84832d0d5" containerName="dnsmasq-dns"
Feb 17 16:23:25 crc kubenswrapper[4672]: I0217 16:23:25.327843 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a98c242-d0bc-4671-9945-b94da6491983" containerName="init"
Feb 17 16:23:25 crc kubenswrapper[4672]: I0217 16:23:25.328873 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-72t4g"
Feb 17 16:23:25 crc kubenswrapper[4672]: I0217 16:23:25.333312 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 17 16:23:25 crc kubenswrapper[4672]: I0217 16:23:25.333354 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 17 16:23:25 crc kubenswrapper[4672]: I0217 16:23:25.335953 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 17 16:23:25 crc kubenswrapper[4672]: I0217 16:23:25.338810 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2hf9g"
Feb 17 16:23:25 crc kubenswrapper[4672]: I0217 16:23:25.338995 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-72t4g"]
Feb 17 16:23:25 crc kubenswrapper[4672]: I0217 16:23:25.430791 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1d098964-5b23-460e-bb88-42ed525b84ed-fernet-keys\") pod \"keystone-bootstrap-72t4g\" (UID: \"1d098964-5b23-460e-bb88-42ed525b84ed\") " pod="openstack/keystone-bootstrap-72t4g"
Feb 17 16:23:25 crc kubenswrapper[4672]: I0217 16:23:25.431664 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b77sp\" (UniqueName: \"kubernetes.io/projected/1d098964-5b23-460e-bb88-42ed525b84ed-kube-api-access-b77sp\") pod \"keystone-bootstrap-72t4g\" (UID: \"1d098964-5b23-460e-bb88-42ed525b84ed\") " pod="openstack/keystone-bootstrap-72t4g"
Feb 17 16:23:25 crc kubenswrapper[4672]: I0217 16:23:25.431707 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d098964-5b23-460e-bb88-42ed525b84ed-scripts\") pod \"keystone-bootstrap-72t4g\" (UID: \"1d098964-5b23-460e-bb88-42ed525b84ed\") " pod="openstack/keystone-bootstrap-72t4g"
Feb 17 16:23:25 crc kubenswrapper[4672]: I0217 16:23:25.431745 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1d098964-5b23-460e-bb88-42ed525b84ed-credential-keys\") pod \"keystone-bootstrap-72t4g\" (UID: \"1d098964-5b23-460e-bb88-42ed525b84ed\") " pod="openstack/keystone-bootstrap-72t4g"
Feb 17 16:23:25 crc kubenswrapper[4672]: I0217 16:23:25.432105 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d098964-5b23-460e-bb88-42ed525b84ed-config-data\") pod \"keystone-bootstrap-72t4g\" (UID: \"1d098964-5b23-460e-bb88-42ed525b84ed\") " pod="openstack/keystone-bootstrap-72t4g"
Feb 17 16:23:25 crc kubenswrapper[4672]: I0217 16:23:25.432217 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d098964-5b23-460e-bb88-42ed525b84ed-combined-ca-bundle\") pod \"keystone-bootstrap-72t4g\" (UID: \"1d098964-5b23-460e-bb88-42ed525b84ed\") " pod="openstack/keystone-bootstrap-72t4g"
Feb 17 16:23:25 crc kubenswrapper[4672]: I0217 16:23:25.533621 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b77sp\" (UniqueName: \"kubernetes.io/projected/1d098964-5b23-460e-bb88-42ed525b84ed-kube-api-access-b77sp\") pod \"keystone-bootstrap-72t4g\" (UID: \"1d098964-5b23-460e-bb88-42ed525b84ed\") " pod="openstack/keystone-bootstrap-72t4g"
Feb 17 16:23:25 crc kubenswrapper[4672]: I0217 16:23:25.533688 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d098964-5b23-460e-bb88-42ed525b84ed-scripts\") pod \"keystone-bootstrap-72t4g\" (UID: \"1d098964-5b23-460e-bb88-42ed525b84ed\") " pod="openstack/keystone-bootstrap-72t4g"
Feb 17 16:23:25 crc kubenswrapper[4672]: I0217 16:23:25.533724 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1d098964-5b23-460e-bb88-42ed525b84ed-credential-keys\") pod \"keystone-bootstrap-72t4g\" (UID: \"1d098964-5b23-460e-bb88-42ed525b84ed\") " pod="openstack/keystone-bootstrap-72t4g"
Feb 17 16:23:25 crc kubenswrapper[4672]: I0217 16:23:25.533768 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d098964-5b23-460e-bb88-42ed525b84ed-config-data\") pod \"keystone-bootstrap-72t4g\" (UID: \"1d098964-5b23-460e-bb88-42ed525b84ed\") " pod="openstack/keystone-bootstrap-72t4g"
Feb 17 16:23:25 crc kubenswrapper[4672]: I0217 16:23:25.533796 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d098964-5b23-460e-bb88-42ed525b84ed-combined-ca-bundle\") pod \"keystone-bootstrap-72t4g\" (UID: \"1d098964-5b23-460e-bb88-42ed525b84ed\") " pod="openstack/keystone-bootstrap-72t4g"
Feb 17 16:23:25 crc kubenswrapper[4672]: I0217 16:23:25.533849 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1d098964-5b23-460e-bb88-42ed525b84ed-fernet-keys\") pod \"keystone-bootstrap-72t4g\" (UID: \"1d098964-5b23-460e-bb88-42ed525b84ed\") " pod="openstack/keystone-bootstrap-72t4g"
Feb 17 16:23:25 crc kubenswrapper[4672]: I0217 16:23:25.539294 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1d098964-5b23-460e-bb88-42ed525b84ed-fernet-keys\") pod \"keystone-bootstrap-72t4g\" (UID: \"1d098964-5b23-460e-bb88-42ed525b84ed\") " pod="openstack/keystone-bootstrap-72t4g"
Feb 17 16:23:25 crc kubenswrapper[4672]: I0217 16:23:25.539545 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d098964-5b23-460e-bb88-42ed525b84ed-scripts\") pod \"keystone-bootstrap-72t4g\" (UID: \"1d098964-5b23-460e-bb88-42ed525b84ed\") " pod="openstack/keystone-bootstrap-72t4g"
Feb 17 16:23:25 crc kubenswrapper[4672]: I0217 16:23:25.540793 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d098964-5b23-460e-bb88-42ed525b84ed-combined-ca-bundle\") pod \"keystone-bootstrap-72t4g\" (UID: \"1d098964-5b23-460e-bb88-42ed525b84ed\") " pod="openstack/keystone-bootstrap-72t4g"
Feb 17 16:23:25 crc kubenswrapper[4672]: I0217 16:23:25.556417 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d098964-5b23-460e-bb88-42ed525b84ed-config-data\") pod \"keystone-bootstrap-72t4g\" (UID: \"1d098964-5b23-460e-bb88-42ed525b84ed\") " pod="openstack/keystone-bootstrap-72t4g"
Feb 17 16:23:25 crc kubenswrapper[4672]: I0217 16:23:25.557582 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1d098964-5b23-460e-bb88-42ed525b84ed-credential-keys\") pod \"keystone-bootstrap-72t4g\" (UID: \"1d098964-5b23-460e-bb88-42ed525b84ed\") " pod="openstack/keystone-bootstrap-72t4g"
Feb 17 16:23:25 crc kubenswrapper[4672]: I0217 16:23:25.563755 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b77sp\" (UniqueName: \"kubernetes.io/projected/1d098964-5b23-460e-bb88-42ed525b84ed-kube-api-access-b77sp\") pod \"keystone-bootstrap-72t4g\" (UID: \"1d098964-5b23-460e-bb88-42ed525b84ed\") " pod="openstack/keystone-bootstrap-72t4g"
Feb 17 16:23:25 crc kubenswrapper[4672]: I0217 16:23:25.649386 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-72t4g"
Feb 17 16:23:25 crc kubenswrapper[4672]: I0217 16:23:25.912542 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b5c85b87-txwzd"
Feb 17 16:23:25 crc kubenswrapper[4672]: I0217 16:23:25.987561 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bbcec81-911b-4091-89b5-8b6fb58ceed0" path="/var/lib/kubelet/pods/8bbcec81-911b-4091-89b5-8b6fb58ceed0/volumes"
Feb 17 16:23:25 crc kubenswrapper[4672]: I0217 16:23:25.988373 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-bg42z"]
Feb 17 16:23:25 crc kubenswrapper[4672]: I0217 16:23:25.989174 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77585f5f8c-bg42z" podUID="1fa5ede2-7956-4541-8edf-8a5937a9f85d" containerName="dnsmasq-dns" containerID="cri-o://815fba74963cb8511025b006d9ec173b0c5ae8f7fc16d19bf0ca00ce401d5f1e" gracePeriod=10
Feb 17 16:23:27 crc kubenswrapper[4672]: I0217 16:23:27.025654 4672 generic.go:334] "Generic (PLEG): container finished" podID="1fa5ede2-7956-4541-8edf-8a5937a9f85d" containerID="815fba74963cb8511025b006d9ec173b0c5ae8f7fc16d19bf0ca00ce401d5f1e" exitCode=0
Feb 17 16:23:27 crc kubenswrapper[4672]: I0217 16:23:27.025682 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-bg42z" event={"ID":"1fa5ede2-7956-4541-8edf-8a5937a9f85d","Type":"ContainerDied","Data":"815fba74963cb8511025b006d9ec173b0c5ae8f7fc16d19bf0ca00ce401d5f1e"}
Feb 17 16:23:27 crc kubenswrapper[4672]: I0217 16:23:27.356226 4672 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-bg42z" podUID="1fa5ede2-7956-4541-8edf-8a5937a9f85d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.146:5353: connect: connection refused"
Feb 17 16:23:32 crc kubenswrapper[4672]: I0217 16:23:32.356093 4672 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-bg42z" podUID="1fa5ede2-7956-4541-8edf-8a5937a9f85d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.146:5353: connect: connection refused"
Feb 17 16:23:33 crc kubenswrapper[4672]: E0217 16:23:33.750616 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified"
Feb 17 16:23:33 crc kubenswrapper[4672]: E0217 16:23:33.751107 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66dh4h65dh68ch545h85h54bh5cfh5f4h599h64h59dhf8h68h5c9h57chf8h65fh695h66h5b6h64chcch58h66bh58h58bhf4hfh68bh9ch59bq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bk6bj,ReadOnly:true,MountPath:/var/run/secrets/kuber
netes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(46707458-3c2e-4f29-bda9-dd5ebc8b60cb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:23:34 crc kubenswrapper[4672]: I0217 16:23:34.118084 4672 generic.go:334] "Generic (PLEG): container finished" podID="af496dd6-1cd8-4f50-b4e0-b96466c6eac4" containerID="440037cfb042404133cd6e1415ee9574a5f59fc811a735253ecc04485a5ea597" exitCode=0 Feb 17 16:23:34 crc kubenswrapper[4672]: I0217 16:23:34.118132 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sk2p2" event={"ID":"af496dd6-1cd8-4f50-b4e0-b96466c6eac4","Type":"ContainerDied","Data":"440037cfb042404133cd6e1415ee9574a5f59fc811a735253ecc04485a5ea597"} Feb 17 16:23:35 crc kubenswrapper[4672]: E0217 16:23:35.008061 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 17 16:23:35 crc kubenswrapper[4672]: E0217 16:23:35.008486 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wjvkc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},Live
nessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-4vtt8_openstack(352f61db-51f9-425a-9ee2-78f681033626): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:23:35 crc kubenswrapper[4672]: E0217 16:23:35.009677 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-4vtt8" podUID="352f61db-51f9-425a-9ee2-78f681033626" Feb 17 16:23:35 crc kubenswrapper[4672]: E0217 16:23:35.132073 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-4vtt8" podUID="352f61db-51f9-425a-9ee2-78f681033626" Feb 17 16:23:35 crc kubenswrapper[4672]: E0217 16:23:35.470706 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 17 16:23:35 crc kubenswrapper[4672]: E0217 16:23:35.471105 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l5vsh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-5spsr_openstack(4a68f7c0-293c-434c-8e63-c6855ba4d822): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:23:35 crc kubenswrapper[4672]: E0217 16:23:35.472534 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-5spsr" podUID="4a68f7c0-293c-434c-8e63-c6855ba4d822" Feb 17 16:23:36 crc kubenswrapper[4672]: E0217 16:23:36.164318 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-5spsr" podUID="4a68f7c0-293c-434c-8e63-c6855ba4d822" Feb 17 16:23:38 crc kubenswrapper[4672]: I0217 16:23:38.848637 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-bg42z" Feb 17 16:23:38 crc kubenswrapper[4672]: I0217 16:23:38.854639 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-sk2p2" Feb 17 16:23:38 crc kubenswrapper[4672]: I0217 16:23:38.950211 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/af496dd6-1cd8-4f50-b4e0-b96466c6eac4-config\") pod \"af496dd6-1cd8-4f50-b4e0-b96466c6eac4\" (UID: \"af496dd6-1cd8-4f50-b4e0-b96466c6eac4\") " Feb 17 16:23:38 crc kubenswrapper[4672]: I0217 16:23:38.950306 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1fa5ede2-7956-4541-8edf-8a5937a9f85d-ovsdbserver-sb\") pod \"1fa5ede2-7956-4541-8edf-8a5937a9f85d\" (UID: \"1fa5ede2-7956-4541-8edf-8a5937a9f85d\") " Feb 17 16:23:38 crc kubenswrapper[4672]: I0217 16:23:38.950396 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dcwf\" (UniqueName: \"kubernetes.io/projected/af496dd6-1cd8-4f50-b4e0-b96466c6eac4-kube-api-access-2dcwf\") pod \"af496dd6-1cd8-4f50-b4e0-b96466c6eac4\" (UID: 
\"af496dd6-1cd8-4f50-b4e0-b96466c6eac4\") " Feb 17 16:23:38 crc kubenswrapper[4672]: I0217 16:23:38.950483 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rz64\" (UniqueName: \"kubernetes.io/projected/1fa5ede2-7956-4541-8edf-8a5937a9f85d-kube-api-access-9rz64\") pod \"1fa5ede2-7956-4541-8edf-8a5937a9f85d\" (UID: \"1fa5ede2-7956-4541-8edf-8a5937a9f85d\") " Feb 17 16:23:38 crc kubenswrapper[4672]: I0217 16:23:38.950568 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1fa5ede2-7956-4541-8edf-8a5937a9f85d-ovsdbserver-nb\") pod \"1fa5ede2-7956-4541-8edf-8a5937a9f85d\" (UID: \"1fa5ede2-7956-4541-8edf-8a5937a9f85d\") " Feb 17 16:23:38 crc kubenswrapper[4672]: I0217 16:23:38.950623 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fa5ede2-7956-4541-8edf-8a5937a9f85d-config\") pod \"1fa5ede2-7956-4541-8edf-8a5937a9f85d\" (UID: \"1fa5ede2-7956-4541-8edf-8a5937a9f85d\") " Feb 17 16:23:38 crc kubenswrapper[4672]: I0217 16:23:38.950745 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fa5ede2-7956-4541-8edf-8a5937a9f85d-dns-svc\") pod \"1fa5ede2-7956-4541-8edf-8a5937a9f85d\" (UID: \"1fa5ede2-7956-4541-8edf-8a5937a9f85d\") " Feb 17 16:23:38 crc kubenswrapper[4672]: I0217 16:23:38.950786 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1fa5ede2-7956-4541-8edf-8a5937a9f85d-dns-swift-storage-0\") pod \"1fa5ede2-7956-4541-8edf-8a5937a9f85d\" (UID: \"1fa5ede2-7956-4541-8edf-8a5937a9f85d\") " Feb 17 16:23:38 crc kubenswrapper[4672]: I0217 16:23:38.950866 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/af496dd6-1cd8-4f50-b4e0-b96466c6eac4-combined-ca-bundle\") pod \"af496dd6-1cd8-4f50-b4e0-b96466c6eac4\" (UID: \"af496dd6-1cd8-4f50-b4e0-b96466c6eac4\") " Feb 17 16:23:38 crc kubenswrapper[4672]: I0217 16:23:38.954797 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af496dd6-1cd8-4f50-b4e0-b96466c6eac4-kube-api-access-2dcwf" (OuterVolumeSpecName: "kube-api-access-2dcwf") pod "af496dd6-1cd8-4f50-b4e0-b96466c6eac4" (UID: "af496dd6-1cd8-4f50-b4e0-b96466c6eac4"). InnerVolumeSpecName "kube-api-access-2dcwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:38 crc kubenswrapper[4672]: I0217 16:23:38.954872 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fa5ede2-7956-4541-8edf-8a5937a9f85d-kube-api-access-9rz64" (OuterVolumeSpecName: "kube-api-access-9rz64") pod "1fa5ede2-7956-4541-8edf-8a5937a9f85d" (UID: "1fa5ede2-7956-4541-8edf-8a5937a9f85d"). InnerVolumeSpecName "kube-api-access-9rz64". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:38 crc kubenswrapper[4672]: I0217 16:23:38.995706 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af496dd6-1cd8-4f50-b4e0-b96466c6eac4-config" (OuterVolumeSpecName: "config") pod "af496dd6-1cd8-4f50-b4e0-b96466c6eac4" (UID: "af496dd6-1cd8-4f50-b4e0-b96466c6eac4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:23:39 crc kubenswrapper[4672]: I0217 16:23:38.999407 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af496dd6-1cd8-4f50-b4e0-b96466c6eac4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af496dd6-1cd8-4f50-b4e0-b96466c6eac4" (UID: "af496dd6-1cd8-4f50-b4e0-b96466c6eac4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:23:39 crc kubenswrapper[4672]: I0217 16:23:39.005278 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fa5ede2-7956-4541-8edf-8a5937a9f85d-config" (OuterVolumeSpecName: "config") pod "1fa5ede2-7956-4541-8edf-8a5937a9f85d" (UID: "1fa5ede2-7956-4541-8edf-8a5937a9f85d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:39 crc kubenswrapper[4672]: I0217 16:23:39.005468 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fa5ede2-7956-4541-8edf-8a5937a9f85d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1fa5ede2-7956-4541-8edf-8a5937a9f85d" (UID: "1fa5ede2-7956-4541-8edf-8a5937a9f85d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:39 crc kubenswrapper[4672]: I0217 16:23:39.015995 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fa5ede2-7956-4541-8edf-8a5937a9f85d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1fa5ede2-7956-4541-8edf-8a5937a9f85d" (UID: "1fa5ede2-7956-4541-8edf-8a5937a9f85d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:39 crc kubenswrapper[4672]: I0217 16:23:39.020048 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fa5ede2-7956-4541-8edf-8a5937a9f85d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1fa5ede2-7956-4541-8edf-8a5937a9f85d" (UID: "1fa5ede2-7956-4541-8edf-8a5937a9f85d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:39 crc kubenswrapper[4672]: I0217 16:23:39.025048 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fa5ede2-7956-4541-8edf-8a5937a9f85d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1fa5ede2-7956-4541-8edf-8a5937a9f85d" (UID: "1fa5ede2-7956-4541-8edf-8a5937a9f85d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:23:39 crc kubenswrapper[4672]: I0217 16:23:39.053556 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dcwf\" (UniqueName: \"kubernetes.io/projected/af496dd6-1cd8-4f50-b4e0-b96466c6eac4-kube-api-access-2dcwf\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:39 crc kubenswrapper[4672]: I0217 16:23:39.053581 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rz64\" (UniqueName: \"kubernetes.io/projected/1fa5ede2-7956-4541-8edf-8a5937a9f85d-kube-api-access-9rz64\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:39 crc kubenswrapper[4672]: I0217 16:23:39.053591 4672 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1fa5ede2-7956-4541-8edf-8a5937a9f85d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:39 crc kubenswrapper[4672]: I0217 16:23:39.053617 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fa5ede2-7956-4541-8edf-8a5937a9f85d-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:39 crc kubenswrapper[4672]: I0217 16:23:39.053648 4672 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fa5ede2-7956-4541-8edf-8a5937a9f85d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:39 crc kubenswrapper[4672]: I0217 16:23:39.053672 4672 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/1fa5ede2-7956-4541-8edf-8a5937a9f85d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:39 crc kubenswrapper[4672]: I0217 16:23:39.053686 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af496dd6-1cd8-4f50-b4e0-b96466c6eac4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:39 crc kubenswrapper[4672]: I0217 16:23:39.053698 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/af496dd6-1cd8-4f50-b4e0-b96466c6eac4-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:39 crc kubenswrapper[4672]: I0217 16:23:39.053707 4672 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1fa5ede2-7956-4541-8edf-8a5937a9f85d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:39 crc kubenswrapper[4672]: I0217 16:23:39.176737 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-bg42z" event={"ID":"1fa5ede2-7956-4541-8edf-8a5937a9f85d","Type":"ContainerDied","Data":"69ae77bc84eccd51dce2bb1706574a47833007000898225471ef079f7379fd8a"} Feb 17 16:23:39 crc kubenswrapper[4672]: I0217 16:23:39.176760 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-bg42z" Feb 17 16:23:39 crc kubenswrapper[4672]: I0217 16:23:39.176791 4672 scope.go:117] "RemoveContainer" containerID="815fba74963cb8511025b006d9ec173b0c5ae8f7fc16d19bf0ca00ce401d5f1e" Feb 17 16:23:39 crc kubenswrapper[4672]: I0217 16:23:39.188887 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sk2p2" event={"ID":"af496dd6-1cd8-4f50-b4e0-b96466c6eac4","Type":"ContainerDied","Data":"fbcde7de0313dd9e66129fd366dcd9c46ee3ea6a8b2a23a13569b2577685e23b"} Feb 17 16:23:39 crc kubenswrapper[4672]: I0217 16:23:39.188934 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbcde7de0313dd9e66129fd366dcd9c46ee3ea6a8b2a23a13569b2577685e23b" Feb 17 16:23:39 crc kubenswrapper[4672]: I0217 16:23:39.188935 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-sk2p2" Feb 17 16:23:39 crc kubenswrapper[4672]: I0217 16:23:39.218399 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-bg42z"] Feb 17 16:23:39 crc kubenswrapper[4672]: I0217 16:23:39.228009 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-bg42z"] Feb 17 16:23:39 crc kubenswrapper[4672]: I0217 16:23:39.959798 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fa5ede2-7956-4541-8edf-8a5937a9f85d" path="/var/lib/kubelet/pods/1fa5ede2-7956-4541-8edf-8a5937a9f85d/volumes" Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.211663 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"efd8ef8f-d736-47d8-a135-c076b3c97b33","Type":"ContainerStarted","Data":"365d0f2543e2a13bfb2703c31daf41060be3bf15ad7a0e93980ba7d95778b176"} Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.212124 4672 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/glance-default-external-api-0" podUID="efd8ef8f-d736-47d8-a135-c076b3c97b33" containerName="glance-log" containerID="cri-o://122b2ee228358e3e941a682c7e7fb4a5ef75a7c4d857a2ed7bc2f9265a56403d" gracePeriod=30 Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.212709 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="efd8ef8f-d736-47d8-a135-c076b3c97b33" containerName="glance-httpd" containerID="cri-o://365d0f2543e2a13bfb2703c31daf41060be3bf15ad7a0e93980ba7d95778b176" gracePeriod=30 Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.255163 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-txp6k"] Feb 17 16:23:40 crc kubenswrapper[4672]: E0217 16:23:40.262824 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fa5ede2-7956-4541-8edf-8a5937a9f85d" containerName="dnsmasq-dns" Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.262872 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fa5ede2-7956-4541-8edf-8a5937a9f85d" containerName="dnsmasq-dns" Feb 17 16:23:40 crc kubenswrapper[4672]: E0217 16:23:40.262905 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af496dd6-1cd8-4f50-b4e0-b96466c6eac4" containerName="neutron-db-sync" Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.262913 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="af496dd6-1cd8-4f50-b4e0-b96466c6eac4" containerName="neutron-db-sync" Feb 17 16:23:40 crc kubenswrapper[4672]: E0217 16:23:40.262944 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fa5ede2-7956-4541-8edf-8a5937a9f85d" containerName="init" Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.262951 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fa5ede2-7956-4541-8edf-8a5937a9f85d" containerName="init" Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.272504 4672 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="1fa5ede2-7956-4541-8edf-8a5937a9f85d" containerName="dnsmasq-dns" Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.272650 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="af496dd6-1cd8-4f50-b4e0-b96466c6eac4" containerName="neutron-db-sync" Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.276840 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-txp6k" Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.294914 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-txp6k"] Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.300504 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=30.300485228 podStartE2EDuration="30.300485228s" podCreationTimestamp="2026-02-17 16:23:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:23:40.242321772 +0000 UTC m=+1228.996410504" watchObservedRunningTime="2026-02-17 16:23:40.300485228 +0000 UTC m=+1229.054573960" Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.359798 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-b745f78d8-8tmpn"] Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.361668 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-b745f78d8-8tmpn" Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.363537 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-4qb54" Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.364637 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.364746 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b745f78d8-8tmpn"] Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.364976 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.365169 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.382835 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/31275847-8cb0-4fe6-9a21-68c3f99727ed-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-txp6k\" (UID: \"31275847-8cb0-4fe6-9a21-68c3f99727ed\") " pod="openstack/dnsmasq-dns-84b966f6c9-txp6k" Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.382875 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31275847-8cb0-4fe6-9a21-68c3f99727ed-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-txp6k\" (UID: \"31275847-8cb0-4fe6-9a21-68c3f99727ed\") " pod="openstack/dnsmasq-dns-84b966f6c9-txp6k" Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.382937 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/31275847-8cb0-4fe6-9a21-68c3f99727ed-ovsdbserver-nb\") pod 
\"dnsmasq-dns-84b966f6c9-txp6k\" (UID: \"31275847-8cb0-4fe6-9a21-68c3f99727ed\") " pod="openstack/dnsmasq-dns-84b966f6c9-txp6k"
Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.382964 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gt8r\" (UniqueName: \"kubernetes.io/projected/31275847-8cb0-4fe6-9a21-68c3f99727ed-kube-api-access-2gt8r\") pod \"dnsmasq-dns-84b966f6c9-txp6k\" (UID: \"31275847-8cb0-4fe6-9a21-68c3f99727ed\") " pod="openstack/dnsmasq-dns-84b966f6c9-txp6k"
Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.383027 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/31275847-8cb0-4fe6-9a21-68c3f99727ed-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-txp6k\" (UID: \"31275847-8cb0-4fe6-9a21-68c3f99727ed\") " pod="openstack/dnsmasq-dns-84b966f6c9-txp6k"
Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.383065 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31275847-8cb0-4fe6-9a21-68c3f99727ed-config\") pod \"dnsmasq-dns-84b966f6c9-txp6k\" (UID: \"31275847-8cb0-4fe6-9a21-68c3f99727ed\") " pod="openstack/dnsmasq-dns-84b966f6c9-txp6k"
Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.484796 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/31275847-8cb0-4fe6-9a21-68c3f99727ed-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-txp6k\" (UID: \"31275847-8cb0-4fe6-9a21-68c3f99727ed\") " pod="openstack/dnsmasq-dns-84b966f6c9-txp6k"
Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.484874 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31275847-8cb0-4fe6-9a21-68c3f99727ed-config\") pod \"dnsmasq-dns-84b966f6c9-txp6k\" (UID: \"31275847-8cb0-4fe6-9a21-68c3f99727ed\") " pod="openstack/dnsmasq-dns-84b966f6c9-txp6k"
Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.484902 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7309196-390c-4dc4-b9a0-a88f48e270db-combined-ca-bundle\") pod \"neutron-b745f78d8-8tmpn\" (UID: \"c7309196-390c-4dc4-b9a0-a88f48e270db\") " pod="openstack/neutron-b745f78d8-8tmpn"
Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.484923 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7309196-390c-4dc4-b9a0-a88f48e270db-ovndb-tls-certs\") pod \"neutron-b745f78d8-8tmpn\" (UID: \"c7309196-390c-4dc4-b9a0-a88f48e270db\") " pod="openstack/neutron-b745f78d8-8tmpn"
Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.484946 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/31275847-8cb0-4fe6-9a21-68c3f99727ed-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-txp6k\" (UID: \"31275847-8cb0-4fe6-9a21-68c3f99727ed\") " pod="openstack/dnsmasq-dns-84b966f6c9-txp6k"
Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.484964 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31275847-8cb0-4fe6-9a21-68c3f99727ed-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-txp6k\" (UID: \"31275847-8cb0-4fe6-9a21-68c3f99727ed\") " pod="openstack/dnsmasq-dns-84b966f6c9-txp6k"
Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.485017 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/31275847-8cb0-4fe6-9a21-68c3f99727ed-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-txp6k\" (UID: \"31275847-8cb0-4fe6-9a21-68c3f99727ed\") " pod="openstack/dnsmasq-dns-84b966f6c9-txp6k"
Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.485044 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gt8r\" (UniqueName: \"kubernetes.io/projected/31275847-8cb0-4fe6-9a21-68c3f99727ed-kube-api-access-2gt8r\") pod \"dnsmasq-dns-84b966f6c9-txp6k\" (UID: \"31275847-8cb0-4fe6-9a21-68c3f99727ed\") " pod="openstack/dnsmasq-dns-84b966f6c9-txp6k"
Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.485086 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgdhk\" (UniqueName: \"kubernetes.io/projected/c7309196-390c-4dc4-b9a0-a88f48e270db-kube-api-access-hgdhk\") pod \"neutron-b745f78d8-8tmpn\" (UID: \"c7309196-390c-4dc4-b9a0-a88f48e270db\") " pod="openstack/neutron-b745f78d8-8tmpn"
Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.485103 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c7309196-390c-4dc4-b9a0-a88f48e270db-config\") pod \"neutron-b745f78d8-8tmpn\" (UID: \"c7309196-390c-4dc4-b9a0-a88f48e270db\") " pod="openstack/neutron-b745f78d8-8tmpn"
Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.485121 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c7309196-390c-4dc4-b9a0-a88f48e270db-httpd-config\") pod \"neutron-b745f78d8-8tmpn\" (UID: \"c7309196-390c-4dc4-b9a0-a88f48e270db\") " pod="openstack/neutron-b745f78d8-8tmpn"
Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.485655 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/31275847-8cb0-4fe6-9a21-68c3f99727ed-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-txp6k\" (UID: \"31275847-8cb0-4fe6-9a21-68c3f99727ed\") " pod="openstack/dnsmasq-dns-84b966f6c9-txp6k"
Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.485882 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31275847-8cb0-4fe6-9a21-68c3f99727ed-config\") pod \"dnsmasq-dns-84b966f6c9-txp6k\" (UID: \"31275847-8cb0-4fe6-9a21-68c3f99727ed\") " pod="openstack/dnsmasq-dns-84b966f6c9-txp6k"
Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.486257 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31275847-8cb0-4fe6-9a21-68c3f99727ed-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-txp6k\" (UID: \"31275847-8cb0-4fe6-9a21-68c3f99727ed\") " pod="openstack/dnsmasq-dns-84b966f6c9-txp6k"
Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.486466 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/31275847-8cb0-4fe6-9a21-68c3f99727ed-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-txp6k\" (UID: \"31275847-8cb0-4fe6-9a21-68c3f99727ed\") " pod="openstack/dnsmasq-dns-84b966f6c9-txp6k"
Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.486814 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/31275847-8cb0-4fe6-9a21-68c3f99727ed-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-txp6k\" (UID: \"31275847-8cb0-4fe6-9a21-68c3f99727ed\") " pod="openstack/dnsmasq-dns-84b966f6c9-txp6k"
Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.507432 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gt8r\" (UniqueName: \"kubernetes.io/projected/31275847-8cb0-4fe6-9a21-68c3f99727ed-kube-api-access-2gt8r\") pod \"dnsmasq-dns-84b966f6c9-txp6k\" (UID: \"31275847-8cb0-4fe6-9a21-68c3f99727ed\") " pod="openstack/dnsmasq-dns-84b966f6c9-txp6k"
Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.586774 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7309196-390c-4dc4-b9a0-a88f48e270db-combined-ca-bundle\") pod \"neutron-b745f78d8-8tmpn\" (UID: \"c7309196-390c-4dc4-b9a0-a88f48e270db\") " pod="openstack/neutron-b745f78d8-8tmpn"
Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.587155 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7309196-390c-4dc4-b9a0-a88f48e270db-ovndb-tls-certs\") pod \"neutron-b745f78d8-8tmpn\" (UID: \"c7309196-390c-4dc4-b9a0-a88f48e270db\") " pod="openstack/neutron-b745f78d8-8tmpn"
Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.587280 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgdhk\" (UniqueName: \"kubernetes.io/projected/c7309196-390c-4dc4-b9a0-a88f48e270db-kube-api-access-hgdhk\") pod \"neutron-b745f78d8-8tmpn\" (UID: \"c7309196-390c-4dc4-b9a0-a88f48e270db\") " pod="openstack/neutron-b745f78d8-8tmpn"
Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.587305 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c7309196-390c-4dc4-b9a0-a88f48e270db-config\") pod \"neutron-b745f78d8-8tmpn\" (UID: \"c7309196-390c-4dc4-b9a0-a88f48e270db\") " pod="openstack/neutron-b745f78d8-8tmpn"
Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.587327 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c7309196-390c-4dc4-b9a0-a88f48e270db-httpd-config\") pod \"neutron-b745f78d8-8tmpn\" (UID: \"c7309196-390c-4dc4-b9a0-a88f48e270db\") " pod="openstack/neutron-b745f78d8-8tmpn"
Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.591633 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c7309196-390c-4dc4-b9a0-a88f48e270db-httpd-config\") pod \"neutron-b745f78d8-8tmpn\" (UID: \"c7309196-390c-4dc4-b9a0-a88f48e270db\") " pod="openstack/neutron-b745f78d8-8tmpn"
Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.591653 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7309196-390c-4dc4-b9a0-a88f48e270db-ovndb-tls-certs\") pod \"neutron-b745f78d8-8tmpn\" (UID: \"c7309196-390c-4dc4-b9a0-a88f48e270db\") " pod="openstack/neutron-b745f78d8-8tmpn"
Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.591991 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7309196-390c-4dc4-b9a0-a88f48e270db-combined-ca-bundle\") pod \"neutron-b745f78d8-8tmpn\" (UID: \"c7309196-390c-4dc4-b9a0-a88f48e270db\") " pod="openstack/neutron-b745f78d8-8tmpn"
Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.597501 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c7309196-390c-4dc4-b9a0-a88f48e270db-config\") pod \"neutron-b745f78d8-8tmpn\" (UID: \"c7309196-390c-4dc4-b9a0-a88f48e270db\") " pod="openstack/neutron-b745f78d8-8tmpn"
Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.607314 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgdhk\" (UniqueName: \"kubernetes.io/projected/c7309196-390c-4dc4-b9a0-a88f48e270db-kube-api-access-hgdhk\") pod \"neutron-b745f78d8-8tmpn\" (UID: \"c7309196-390c-4dc4-b9a0-a88f48e270db\") " pod="openstack/neutron-b745f78d8-8tmpn"
Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.616250 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-txp6k"
Feb 17 16:23:40 crc kubenswrapper[4672]: I0217 16:23:40.739770 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b745f78d8-8tmpn"
Feb 17 16:23:41 crc kubenswrapper[4672]: I0217 16:23:41.222133 4672 generic.go:334] "Generic (PLEG): container finished" podID="efd8ef8f-d736-47d8-a135-c076b3c97b33" containerID="365d0f2543e2a13bfb2703c31daf41060be3bf15ad7a0e93980ba7d95778b176" exitCode=0
Feb 17 16:23:41 crc kubenswrapper[4672]: I0217 16:23:41.222162 4672 generic.go:334] "Generic (PLEG): container finished" podID="efd8ef8f-d736-47d8-a135-c076b3c97b33" containerID="122b2ee228358e3e941a682c7e7fb4a5ef75a7c4d857a2ed7bc2f9265a56403d" exitCode=143
Feb 17 16:23:41 crc kubenswrapper[4672]: I0217 16:23:41.222179 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"efd8ef8f-d736-47d8-a135-c076b3c97b33","Type":"ContainerDied","Data":"365d0f2543e2a13bfb2703c31daf41060be3bf15ad7a0e93980ba7d95778b176"}
Feb 17 16:23:41 crc kubenswrapper[4672]: I0217 16:23:41.222202 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"efd8ef8f-d736-47d8-a135-c076b3c97b33","Type":"ContainerDied","Data":"122b2ee228358e3e941a682c7e7fb4a5ef75a7c4d857a2ed7bc2f9265a56403d"}
Feb 17 16:23:41 crc kubenswrapper[4672]: I0217 16:23:41.972864 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 17 16:23:41 crc kubenswrapper[4672]: I0217 16:23:41.973181 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 17 16:23:42 crc kubenswrapper[4672]: I0217 16:23:42.356180 4672 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-bg42z" podUID="1fa5ede2-7956-4541-8edf-8a5937a9f85d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.146:5353: i/o timeout"
Feb 17 16:23:42 crc kubenswrapper[4672]: I0217 16:23:42.520001 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6fdb669fcc-nckw2"]
Feb 17 16:23:42 crc kubenswrapper[4672]: I0217 16:23:42.522469 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6fdb669fcc-nckw2"
Feb 17 16:23:42 crc kubenswrapper[4672]: I0217 16:23:42.526050 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Feb 17 16:23:42 crc kubenswrapper[4672]: I0217 16:23:42.526132 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Feb 17 16:23:42 crc kubenswrapper[4672]: I0217 16:23:42.543354 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6fdb669fcc-nckw2"]
Feb 17 16:23:42 crc kubenswrapper[4672]: I0217 16:23:42.627749 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ec72261-e568-4e6d-83e7-aee39c008aab-public-tls-certs\") pod \"neutron-6fdb669fcc-nckw2\" (UID: \"4ec72261-e568-4e6d-83e7-aee39c008aab\") " pod="openstack/neutron-6fdb669fcc-nckw2"
Feb 17 16:23:42 crc kubenswrapper[4672]: I0217 16:23:42.627798 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ec72261-e568-4e6d-83e7-aee39c008aab-combined-ca-bundle\") pod \"neutron-6fdb669fcc-nckw2\" (UID: \"4ec72261-e568-4e6d-83e7-aee39c008aab\") " pod="openstack/neutron-6fdb669fcc-nckw2"
Feb 17 16:23:42 crc kubenswrapper[4672]: I0217 16:23:42.627868 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ec72261-e568-4e6d-83e7-aee39c008aab-internal-tls-certs\") pod \"neutron-6fdb669fcc-nckw2\" (UID: \"4ec72261-e568-4e6d-83e7-aee39c008aab\") " pod="openstack/neutron-6fdb669fcc-nckw2"
Feb 17 16:23:42 crc kubenswrapper[4672]: I0217 16:23:42.627892 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4ec72261-e568-4e6d-83e7-aee39c008aab-config\") pod \"neutron-6fdb669fcc-nckw2\" (UID: \"4ec72261-e568-4e6d-83e7-aee39c008aab\") " pod="openstack/neutron-6fdb669fcc-nckw2"
Feb 17 16:23:42 crc kubenswrapper[4672]: I0217 16:23:42.627921 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4ec72261-e568-4e6d-83e7-aee39c008aab-httpd-config\") pod \"neutron-6fdb669fcc-nckw2\" (UID: \"4ec72261-e568-4e6d-83e7-aee39c008aab\") " pod="openstack/neutron-6fdb669fcc-nckw2"
Feb 17 16:23:42 crc kubenswrapper[4672]: I0217 16:23:42.627940 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ec72261-e568-4e6d-83e7-aee39c008aab-ovndb-tls-certs\") pod \"neutron-6fdb669fcc-nckw2\" (UID: \"4ec72261-e568-4e6d-83e7-aee39c008aab\") " pod="openstack/neutron-6fdb669fcc-nckw2"
Feb 17 16:23:42 crc kubenswrapper[4672]: I0217 16:23:42.627972 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp86x\" (UniqueName: \"kubernetes.io/projected/4ec72261-e568-4e6d-83e7-aee39c008aab-kube-api-access-lp86x\") pod \"neutron-6fdb669fcc-nckw2\" (UID: \"4ec72261-e568-4e6d-83e7-aee39c008aab\") " pod="openstack/neutron-6fdb669fcc-nckw2"
Feb 17 16:23:42 crc kubenswrapper[4672]: I0217 16:23:42.731091 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ec72261-e568-4e6d-83e7-aee39c008aab-internal-tls-certs\") pod \"neutron-6fdb669fcc-nckw2\" (UID: \"4ec72261-e568-4e6d-83e7-aee39c008aab\") " pod="openstack/neutron-6fdb669fcc-nckw2"
Feb 17 16:23:42 crc kubenswrapper[4672]: I0217 16:23:42.731209 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4ec72261-e568-4e6d-83e7-aee39c008aab-config\") pod \"neutron-6fdb669fcc-nckw2\" (UID: \"4ec72261-e568-4e6d-83e7-aee39c008aab\") " pod="openstack/neutron-6fdb669fcc-nckw2"
Feb 17 16:23:42 crc kubenswrapper[4672]: I0217 16:23:42.731308 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4ec72261-e568-4e6d-83e7-aee39c008aab-httpd-config\") pod \"neutron-6fdb669fcc-nckw2\" (UID: \"4ec72261-e568-4e6d-83e7-aee39c008aab\") " pod="openstack/neutron-6fdb669fcc-nckw2"
Feb 17 16:23:42 crc kubenswrapper[4672]: I0217 16:23:42.731361 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ec72261-e568-4e6d-83e7-aee39c008aab-ovndb-tls-certs\") pod \"neutron-6fdb669fcc-nckw2\" (UID: \"4ec72261-e568-4e6d-83e7-aee39c008aab\") " pod="openstack/neutron-6fdb669fcc-nckw2"
Feb 17 16:23:42 crc kubenswrapper[4672]: I0217 16:23:42.731465 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp86x\" (UniqueName: \"kubernetes.io/projected/4ec72261-e568-4e6d-83e7-aee39c008aab-kube-api-access-lp86x\") pod \"neutron-6fdb669fcc-nckw2\" (UID: \"4ec72261-e568-4e6d-83e7-aee39c008aab\") " pod="openstack/neutron-6fdb669fcc-nckw2"
Feb 17 16:23:42 crc kubenswrapper[4672]: I0217 16:23:42.731643 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ec72261-e568-4e6d-83e7-aee39c008aab-public-tls-certs\") pod \"neutron-6fdb669fcc-nckw2\" (UID: \"4ec72261-e568-4e6d-83e7-aee39c008aab\") " pod="openstack/neutron-6fdb669fcc-nckw2"
Feb 17 16:23:42 crc kubenswrapper[4672]: I0217 16:23:42.731704 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ec72261-e568-4e6d-83e7-aee39c008aab-combined-ca-bundle\") pod \"neutron-6fdb669fcc-nckw2\" (UID: \"4ec72261-e568-4e6d-83e7-aee39c008aab\") " pod="openstack/neutron-6fdb669fcc-nckw2"
Feb 17 16:23:42 crc kubenswrapper[4672]: I0217 16:23:42.737134 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ec72261-e568-4e6d-83e7-aee39c008aab-combined-ca-bundle\") pod \"neutron-6fdb669fcc-nckw2\" (UID: \"4ec72261-e568-4e6d-83e7-aee39c008aab\") " pod="openstack/neutron-6fdb669fcc-nckw2"
Feb 17 16:23:42 crc kubenswrapper[4672]: I0217 16:23:42.737495 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ec72261-e568-4e6d-83e7-aee39c008aab-internal-tls-certs\") pod \"neutron-6fdb669fcc-nckw2\" (UID: \"4ec72261-e568-4e6d-83e7-aee39c008aab\") " pod="openstack/neutron-6fdb669fcc-nckw2"
Feb 17 16:23:42 crc kubenswrapper[4672]: I0217 16:23:42.738670 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4ec72261-e568-4e6d-83e7-aee39c008aab-config\") pod \"neutron-6fdb669fcc-nckw2\" (UID: \"4ec72261-e568-4e6d-83e7-aee39c008aab\") " pod="openstack/neutron-6fdb669fcc-nckw2"
Feb 17 16:23:42 crc kubenswrapper[4672]: I0217 16:23:42.739472 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ec72261-e568-4e6d-83e7-aee39c008aab-public-tls-certs\") pod \"neutron-6fdb669fcc-nckw2\" (UID: \"4ec72261-e568-4e6d-83e7-aee39c008aab\") " pod="openstack/neutron-6fdb669fcc-nckw2"
Feb 17 16:23:42 crc kubenswrapper[4672]: I0217 16:23:42.739755 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4ec72261-e568-4e6d-83e7-aee39c008aab-httpd-config\") pod \"neutron-6fdb669fcc-nckw2\" (UID: \"4ec72261-e568-4e6d-83e7-aee39c008aab\") " pod="openstack/neutron-6fdb669fcc-nckw2"
Feb 17 16:23:42 crc kubenswrapper[4672]: I0217 16:23:42.744569 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ec72261-e568-4e6d-83e7-aee39c008aab-ovndb-tls-certs\") pod \"neutron-6fdb669fcc-nckw2\" (UID: \"4ec72261-e568-4e6d-83e7-aee39c008aab\") " pod="openstack/neutron-6fdb669fcc-nckw2"
Feb 17 16:23:42 crc kubenswrapper[4672]: I0217 16:23:42.747659 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp86x\" (UniqueName: \"kubernetes.io/projected/4ec72261-e568-4e6d-83e7-aee39c008aab-kube-api-access-lp86x\") pod \"neutron-6fdb669fcc-nckw2\" (UID: \"4ec72261-e568-4e6d-83e7-aee39c008aab\") " pod="openstack/neutron-6fdb669fcc-nckw2"
Feb 17 16:23:42 crc kubenswrapper[4672]: I0217 16:23:42.856202 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6fdb669fcc-nckw2"
Feb 17 16:23:46 crc kubenswrapper[4672]: I0217 16:23:46.484216 4672 scope.go:117] "RemoveContainer" containerID="54c66b4ed13aaab76473befc15bbe8f6fa7e1a3d19ec2c3f9d49f2c0ccf90fea"
Feb 17 16:23:47 crc kubenswrapper[4672]: I0217 16:23:47.282573 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"146627ed-88c2-4845-8f17-a52e47fbb924","Type":"ContainerStarted","Data":"7b3b373261bd88a8c8779d05ad1c5ddf94d60d586a6491caaebf805029599b89"}
Feb 17 16:23:47 crc kubenswrapper[4672]: I0217 16:23:47.282883 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="146627ed-88c2-4845-8f17-a52e47fbb924" containerName="glance-log" containerID="cri-o://5cb4a444212489abaf460b09acc40aafefeb499efee0862a014083ecea168e37" gracePeriod=30
Feb 17 16:23:47 crc kubenswrapper[4672]: I0217 16:23:47.283182 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="146627ed-88c2-4845-8f17-a52e47fbb924" containerName="glance-httpd" containerID="cri-o://7b3b373261bd88a8c8779d05ad1c5ddf94d60d586a6491caaebf805029599b89" gracePeriod=30
Feb 17 16:23:47 crc kubenswrapper[4672]: I0217 16:23:47.313615 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=37.31358912 podStartE2EDuration="37.31358912s" podCreationTimestamp="2026-02-17 16:23:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:23:47.304260294 +0000 UTC m=+1236.058349026" watchObservedRunningTime="2026-02-17 16:23:47.31358912 +0000 UTC m=+1236.067677852"
Feb 17 16:23:48 crc kubenswrapper[4672]: I0217 16:23:48.293433 4672 generic.go:334] "Generic (PLEG): container finished" podID="146627ed-88c2-4845-8f17-a52e47fbb924" containerID="7b3b373261bd88a8c8779d05ad1c5ddf94d60d586a6491caaebf805029599b89" exitCode=0
Feb 17 16:23:48 crc kubenswrapper[4672]: I0217 16:23:48.293471 4672 generic.go:334] "Generic (PLEG): container finished" podID="146627ed-88c2-4845-8f17-a52e47fbb924" containerID="5cb4a444212489abaf460b09acc40aafefeb499efee0862a014083ecea168e37" exitCode=143
Feb 17 16:23:48 crc kubenswrapper[4672]: I0217 16:23:48.293493 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"146627ed-88c2-4845-8f17-a52e47fbb924","Type":"ContainerDied","Data":"7b3b373261bd88a8c8779d05ad1c5ddf94d60d586a6491caaebf805029599b89"}
Feb 17 16:23:48 crc kubenswrapper[4672]: I0217 16:23:48.293558 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"146627ed-88c2-4845-8f17-a52e47fbb924","Type":"ContainerDied","Data":"5cb4a444212489abaf460b09acc40aafefeb499efee0862a014083ecea168e37"}
Feb 17 16:23:49 crc kubenswrapper[4672]: I0217 16:23:49.287833 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-72t4g"]
Feb 17 16:23:49 crc kubenswrapper[4672]: E0217 16:23:49.496496 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current"
Feb 17 16:23:49 crc kubenswrapper[4672]: E0217 16:23:49.496855 4672 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current"
Feb 17 16:23:49 crc kubenswrapper[4672]: E0217 16:23:49.497035 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k62j9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-scpk5_openstack(fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 17 16:23:49 crc kubenswrapper[4672]: E0217 16:23:49.498746 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cloudkitty-db-sync-scpk5" podUID="fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2"
Feb 17 16:23:49 crc kubenswrapper[4672]: E0217 16:23:49.680628 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified"
Feb 17 16:23:49 crc kubenswrapper[4672]: E0217 16:23:49.680779 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-notification-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66dh4h65dh68ch545h85h54bh5cfh5f4h599h64h59dhf8h68h5c9h57chf8h65fh695h66h5b6h64chcch58h66bh58h58bhf4hfh68bh9ch59bq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-notification-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bk6bj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/notificationhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(46707458-3c2e-4f29-bda9-dd5ebc8b60cb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 17 16:23:49 crc kubenswrapper[4672]: W0217 16:23:49.695168 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d098964_5b23_460e_bb88_42ed525b84ed.slice/crio-99084dc0ba32fb7505f9ab6f248d09d65ad9a5ee6235ba4dd9ab7c12fbe275ab WatchSource:0}: Error finding container 99084dc0ba32fb7505f9ab6f248d09d65ad9a5ee6235ba4dd9ab7c12fbe275ab: Status 404 returned error can't find the container with id 99084dc0ba32fb7505f9ab6f248d09d65ad9a5ee6235ba4dd9ab7c12fbe275ab
Feb 17 16:23:49 crc kubenswrapper[4672]: I0217 16:23:49.798867 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 17 16:23:49 crc kubenswrapper[4672]: I0217 16:23:49.875268 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd8ef8f-d736-47d8-a135-c076b3c97b33-combined-ca-bundle\") pod \"efd8ef8f-d736-47d8-a135-c076b3c97b33\" (UID: \"efd8ef8f-d736-47d8-a135-c076b3c97b33\") "
Feb 17 16:23:49 crc kubenswrapper[4672]: I0217 16:23:49.875316 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efd8ef8f-d736-47d8-a135-c076b3c97b33-config-data\") pod \"efd8ef8f-d736-47d8-a135-c076b3c97b33\" (UID: \"efd8ef8f-d736-47d8-a135-c076b3c97b33\") "
Feb 17 16:23:49 crc kubenswrapper[4672]: I0217 16:23:49.875359 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/efd8ef8f-d736-47d8-a135-c076b3c97b33-httpd-run\") pod \"efd8ef8f-d736-47d8-a135-c076b3c97b33\" (UID: \"efd8ef8f-d736-47d8-a135-c076b3c97b33\") "
Feb 17 16:23:49 crc kubenswrapper[4672]: I0217 16:23:49.875593 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqg5l\" (UniqueName: \"kubernetes.io/projected/efd8ef8f-d736-47d8-a135-c076b3c97b33-kube-api-access-pqg5l\") pod \"efd8ef8f-d736-47d8-a135-c076b3c97b33\" (UID: \"efd8ef8f-d736-47d8-a135-c076b3c97b33\") "
Feb 17 16:23:49 crc kubenswrapper[4672]: I0217 16:23:49.875682 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efd8ef8f-d736-47d8-a135-c076b3c97b33-logs\") pod \"efd8ef8f-d736-47d8-a135-c076b3c97b33\" (UID: \"efd8ef8f-d736-47d8-a135-c076b3c97b33\") "
Feb 17 16:23:49 crc kubenswrapper[4672]: I0217 16:23:49.875714 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/efd8ef8f-d736-47d8-a135-c076b3c97b33-public-tls-certs\") pod \"efd8ef8f-d736-47d8-a135-c076b3c97b33\" (UID: \"efd8ef8f-d736-47d8-a135-c076b3c97b33\") "
Feb 17 16:23:49 crc kubenswrapper[4672]: I0217 16:23:49.875744 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efd8ef8f-d736-47d8-a135-c076b3c97b33-scripts\") pod \"efd8ef8f-d736-47d8-a135-c076b3c97b33\" (UID: \"efd8ef8f-d736-47d8-a135-c076b3c97b33\") "
Feb 17 16:23:49 crc kubenswrapper[4672]: I0217 16:23:49.875888 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-303d4d28-face-45fe-b658-7e12a6040182\") pod \"efd8ef8f-d736-47d8-a135-c076b3c97b33\" (UID: \"efd8ef8f-d736-47d8-a135-c076b3c97b33\") "
Feb 17 16:23:49 crc kubenswrapper[4672]: I0217 16:23:49.877619 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efd8ef8f-d736-47d8-a135-c076b3c97b33-logs" (OuterVolumeSpecName: "logs") pod "efd8ef8f-d736-47d8-a135-c076b3c97b33" (UID: "efd8ef8f-d736-47d8-a135-c076b3c97b33"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:23:49 crc kubenswrapper[4672]: I0217 16:23:49.877648 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efd8ef8f-d736-47d8-a135-c076b3c97b33-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "efd8ef8f-d736-47d8-a135-c076b3c97b33" (UID: "efd8ef8f-d736-47d8-a135-c076b3c97b33"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:23:49 crc kubenswrapper[4672]: I0217 16:23:49.884401 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efd8ef8f-d736-47d8-a135-c076b3c97b33-kube-api-access-pqg5l" (OuterVolumeSpecName: "kube-api-access-pqg5l") pod "efd8ef8f-d736-47d8-a135-c076b3c97b33" (UID: "efd8ef8f-d736-47d8-a135-c076b3c97b33"). InnerVolumeSpecName "kube-api-access-pqg5l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:23:49 crc kubenswrapper[4672]: I0217 16:23:49.886866 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efd8ef8f-d736-47d8-a135-c076b3c97b33-scripts" (OuterVolumeSpecName: "scripts") pod "efd8ef8f-d736-47d8-a135-c076b3c97b33" (UID: "efd8ef8f-d736-47d8-a135-c076b3c97b33"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:23:49 crc kubenswrapper[4672]: I0217 16:23:49.899155 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-303d4d28-face-45fe-b658-7e12a6040182" (OuterVolumeSpecName: "glance") pod "efd8ef8f-d736-47d8-a135-c076b3c97b33" (UID: "efd8ef8f-d736-47d8-a135-c076b3c97b33"). InnerVolumeSpecName "pvc-303d4d28-face-45fe-b658-7e12a6040182". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 17 16:23:49 crc kubenswrapper[4672]: I0217 16:23:49.940668 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efd8ef8f-d736-47d8-a135-c076b3c97b33-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "efd8ef8f-d736-47d8-a135-c076b3c97b33" (UID: "efd8ef8f-d736-47d8-a135-c076b3c97b33"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:23:49 crc kubenswrapper[4672]: I0217 16:23:49.946093 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efd8ef8f-d736-47d8-a135-c076b3c97b33-config-data" (OuterVolumeSpecName: "config-data") pod "efd8ef8f-d736-47d8-a135-c076b3c97b33" (UID: "efd8ef8f-d736-47d8-a135-c076b3c97b33"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:23:49 crc kubenswrapper[4672]: I0217 16:23:49.976702 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efd8ef8f-d736-47d8-a135-c076b3c97b33-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "efd8ef8f-d736-47d8-a135-c076b3c97b33" (UID: "efd8ef8f-d736-47d8-a135-c076b3c97b33"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:23:49 crc kubenswrapper[4672]: I0217 16:23:49.978316 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd8ef8f-d736-47d8-a135-c076b3c97b33-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:23:49 crc kubenswrapper[4672]: I0217 16:23:49.978337 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efd8ef8f-d736-47d8-a135-c076b3c97b33-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:23:49 crc kubenswrapper[4672]: I0217 16:23:49.978348 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqg5l\" (UniqueName: \"kubernetes.io/projected/efd8ef8f-d736-47d8-a135-c076b3c97b33-kube-api-access-pqg5l\") on node \"crc\" DevicePath \"\""
Feb 17 16:23:49 crc kubenswrapper[4672]: I0217 16:23:49.978361 4672 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/efd8ef8f-d736-47d8-a135-c076b3c97b33-httpd-run\") on node \"crc\" DevicePath \"\""
Feb
17 16:23:49 crc kubenswrapper[4672]: I0217 16:23:49.978371 4672 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efd8ef8f-d736-47d8-a135-c076b3c97b33-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:49 crc kubenswrapper[4672]: I0217 16:23:49.978382 4672 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/efd8ef8f-d736-47d8-a135-c076b3c97b33-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:49 crc kubenswrapper[4672]: I0217 16:23:49.978395 4672 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efd8ef8f-d736-47d8-a135-c076b3c97b33-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:49 crc kubenswrapper[4672]: I0217 16:23:49.978424 4672 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-303d4d28-face-45fe-b658-7e12a6040182\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-303d4d28-face-45fe-b658-7e12a6040182\") on node \"crc\" " Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.009402 4672 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.009975 4672 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-303d4d28-face-45fe-b658-7e12a6040182" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-303d4d28-face-45fe-b658-7e12a6040182") on node "crc" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.085559 4672 reconciler_common.go:293] "Volume detached for volume \"pvc-303d4d28-face-45fe-b658-7e12a6040182\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-303d4d28-face-45fe-b658-7e12a6040182\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.292591 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6fdb669fcc-nckw2"] Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.310188 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-txp6k"] Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.337881 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-w8fzc" event={"ID":"649147ca-1dbd-4260-8d7c-8077186059f1","Type":"ContainerStarted","Data":"fc791be504d7919ae381e82cce9626958016ce9136a99d702f5f1a07fd157209"} Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.342811 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b745f78d8-8tmpn"] Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.345089 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-txp6k" event={"ID":"31275847-8cb0-4fe6-9a21-68c3f99727ed","Type":"ContainerStarted","Data":"2e3bc9de5a4e43a9146b67dbf198c3d20804a79e7d1e4ac14ce612e57092c4df"} Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.355825 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.356030 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"efd8ef8f-d736-47d8-a135-c076b3c97b33","Type":"ContainerDied","Data":"3b37ab631e0e57dc3a89d7c632f27657e1aaaee3d0e3d645419da822f07e24a9"} Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.356083 4672 scope.go:117] "RemoveContainer" containerID="365d0f2543e2a13bfb2703c31daf41060be3bf15ad7a0e93980ba7d95778b176" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.356244 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.358066 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-w8fzc" podStartSLOduration=13.720741513 podStartE2EDuration="40.358047101s" podCreationTimestamp="2026-02-17 16:23:10 +0000 UTC" firstStartedPulling="2026-02-17 16:23:12.070326996 +0000 UTC m=+1200.824415728" lastFinishedPulling="2026-02-17 16:23:38.707632584 +0000 UTC m=+1227.461721316" observedRunningTime="2026-02-17 16:23:50.350871702 +0000 UTC m=+1239.104960434" watchObservedRunningTime="2026-02-17 16:23:50.358047101 +0000 UTC m=+1239.112135823" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.379657 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-5spsr" event={"ID":"4a68f7c0-293c-434c-8e63-c6855ba4d822","Type":"ContainerStarted","Data":"0a8ca4ed628b1d7249393b041a19d78fb3e129bea47af3a23f562581869139a0"} Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.386255 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-72t4g" event={"ID":"1d098964-5b23-460e-bb88-42ed525b84ed","Type":"ContainerStarted","Data":"5e465adee39fc2fcc0779a67dab0cd184e31994bcad65a8ca61dfbe0edcf675c"} 
Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.386311 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-72t4g" event={"ID":"1d098964-5b23-460e-bb88-42ed525b84ed","Type":"ContainerStarted","Data":"99084dc0ba32fb7505f9ab6f248d09d65ad9a5ee6235ba4dd9ab7c12fbe275ab"} Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.390128 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/146627ed-88c2-4845-8f17-a52e47fbb924-scripts\") pod \"146627ed-88c2-4845-8f17-a52e47fbb924\" (UID: \"146627ed-88c2-4845-8f17-a52e47fbb924\") " Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.390200 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/146627ed-88c2-4845-8f17-a52e47fbb924-httpd-run\") pod \"146627ed-88c2-4845-8f17-a52e47fbb924\" (UID: \"146627ed-88c2-4845-8f17-a52e47fbb924\") " Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.390258 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctv2k\" (UniqueName: \"kubernetes.io/projected/146627ed-88c2-4845-8f17-a52e47fbb924-kube-api-access-ctv2k\") pod \"146627ed-88c2-4845-8f17-a52e47fbb924\" (UID: \"146627ed-88c2-4845-8f17-a52e47fbb924\") " Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.390313 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/146627ed-88c2-4845-8f17-a52e47fbb924-internal-tls-certs\") pod \"146627ed-88c2-4845-8f17-a52e47fbb924\" (UID: \"146627ed-88c2-4845-8f17-a52e47fbb924\") " Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.390354 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/146627ed-88c2-4845-8f17-a52e47fbb924-logs\") pod 
\"146627ed-88c2-4845-8f17-a52e47fbb924\" (UID: \"146627ed-88c2-4845-8f17-a52e47fbb924\") " Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.390373 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/146627ed-88c2-4845-8f17-a52e47fbb924-config-data\") pod \"146627ed-88c2-4845-8f17-a52e47fbb924\" (UID: \"146627ed-88c2-4845-8f17-a52e47fbb924\") " Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.390485 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7\") pod \"146627ed-88c2-4845-8f17-a52e47fbb924\" (UID: \"146627ed-88c2-4845-8f17-a52e47fbb924\") " Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.390530 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/146627ed-88c2-4845-8f17-a52e47fbb924-combined-ca-bundle\") pod \"146627ed-88c2-4845-8f17-a52e47fbb924\" (UID: \"146627ed-88c2-4845-8f17-a52e47fbb924\") " Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.391846 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/146627ed-88c2-4845-8f17-a52e47fbb924-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "146627ed-88c2-4845-8f17-a52e47fbb924" (UID: "146627ed-88c2-4845-8f17-a52e47fbb924"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.391871 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/146627ed-88c2-4845-8f17-a52e47fbb924-logs" (OuterVolumeSpecName: "logs") pod "146627ed-88c2-4845-8f17-a52e47fbb924" (UID: "146627ed-88c2-4845-8f17-a52e47fbb924"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.392236 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fdb669fcc-nckw2" event={"ID":"4ec72261-e568-4e6d-83e7-aee39c008aab","Type":"ContainerStarted","Data":"65df6360f4f34229e2dbc12ac319a1b0e7a26e3a8668d97175d0226e6983e55b"} Feb 17 16:23:50 crc kubenswrapper[4672]: E0217 16:23:50.400571 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-scpk5" podUID="fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.401329 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/146627ed-88c2-4845-8f17-a52e47fbb924-kube-api-access-ctv2k" (OuterVolumeSpecName: "kube-api-access-ctv2k") pod "146627ed-88c2-4845-8f17-a52e47fbb924" (UID: "146627ed-88c2-4845-8f17-a52e47fbb924"). InnerVolumeSpecName "kube-api-access-ctv2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.402202 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/146627ed-88c2-4845-8f17-a52e47fbb924-scripts" (OuterVolumeSpecName: "scripts") pod "146627ed-88c2-4845-8f17-a52e47fbb924" (UID: "146627ed-88c2-4845-8f17-a52e47fbb924"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.408382 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-5spsr" podStartSLOduration=2.75259299 podStartE2EDuration="40.408368308s" podCreationTimestamp="2026-02-17 16:23:10 +0000 UTC" firstStartedPulling="2026-02-17 16:23:12.245759998 +0000 UTC m=+1200.999848730" lastFinishedPulling="2026-02-17 16:23:49.901535316 +0000 UTC m=+1238.655624048" observedRunningTime="2026-02-17 16:23:50.402124093 +0000 UTC m=+1239.156212825" watchObservedRunningTime="2026-02-17 16:23:50.408368308 +0000 UTC m=+1239.162457040" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.438713 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7" (OuterVolumeSpecName: "glance") pod "146627ed-88c2-4845-8f17-a52e47fbb924" (UID: "146627ed-88c2-4845-8f17-a52e47fbb924"). InnerVolumeSpecName "pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.449062 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.463861 4672 scope.go:117] "RemoveContainer" containerID="122b2ee228358e3e941a682c7e7fb4a5ef75a7c4d857a2ed7bc2f9265a56403d" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.468589 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.477819 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/146627ed-88c2-4845-8f17-a52e47fbb924-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "146627ed-88c2-4845-8f17-a52e47fbb924" (UID: "146627ed-88c2-4845-8f17-a52e47fbb924"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.481077 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:23:50 crc kubenswrapper[4672]: E0217 16:23:50.481599 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="146627ed-88c2-4845-8f17-a52e47fbb924" containerName="glance-httpd" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.481618 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="146627ed-88c2-4845-8f17-a52e47fbb924" containerName="glance-httpd" Feb 17 16:23:50 crc kubenswrapper[4672]: E0217 16:23:50.481635 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efd8ef8f-d736-47d8-a135-c076b3c97b33" containerName="glance-log" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.481641 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="efd8ef8f-d736-47d8-a135-c076b3c97b33" containerName="glance-log" Feb 17 16:23:50 crc kubenswrapper[4672]: E0217 
16:23:50.481653 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efd8ef8f-d736-47d8-a135-c076b3c97b33" containerName="glance-httpd" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.481659 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="efd8ef8f-d736-47d8-a135-c076b3c97b33" containerName="glance-httpd" Feb 17 16:23:50 crc kubenswrapper[4672]: E0217 16:23:50.481674 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="146627ed-88c2-4845-8f17-a52e47fbb924" containerName="glance-log" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.481680 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="146627ed-88c2-4845-8f17-a52e47fbb924" containerName="glance-log" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.481839 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="146627ed-88c2-4845-8f17-a52e47fbb924" containerName="glance-httpd" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.481851 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="efd8ef8f-d736-47d8-a135-c076b3c97b33" containerName="glance-httpd" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.481871 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="efd8ef8f-d736-47d8-a135-c076b3c97b33" containerName="glance-log" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.481881 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="146627ed-88c2-4845-8f17-a52e47fbb924" containerName="glance-log" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.482979 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.484235 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-72t4g" podStartSLOduration=25.484218017 podStartE2EDuration="25.484218017s" podCreationTimestamp="2026-02-17 16:23:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:23:50.438871912 +0000 UTC m=+1239.192960644" watchObservedRunningTime="2026-02-17 16:23:50.484218017 +0000 UTC m=+1239.238306749" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.488169 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.488423 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.489056 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/146627ed-88c2-4845-8f17-a52e47fbb924-config-data" (OuterVolumeSpecName: "config-data") pod "146627ed-88c2-4845-8f17-a52e47fbb924" (UID: "146627ed-88c2-4845-8f17-a52e47fbb924"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.495725 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/146627ed-88c2-4845-8f17-a52e47fbb924-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "146627ed-88c2-4845-8f17-a52e47fbb924" (UID: "146627ed-88c2-4845-8f17-a52e47fbb924"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.496499 4672 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/146627ed-88c2-4845-8f17-a52e47fbb924-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.496540 4672 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/146627ed-88c2-4845-8f17-a52e47fbb924-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.496550 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctv2k\" (UniqueName: \"kubernetes.io/projected/146627ed-88c2-4845-8f17-a52e47fbb924-kube-api-access-ctv2k\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.496560 4672 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/146627ed-88c2-4845-8f17-a52e47fbb924-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.496569 4672 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/146627ed-88c2-4845-8f17-a52e47fbb924-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.496577 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/146627ed-88c2-4845-8f17-a52e47fbb924-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.496595 4672 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7\") on node \"crc\" " Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 
16:23:50.496605 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/146627ed-88c2-4845-8f17-a52e47fbb924-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.503738 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.539583 4672 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.539722 4672 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7") on node "crc" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.600563 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b6cb035c-108b-40c2-82fa-bc9db8599b1a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b6cb035c-108b-40c2-82fa-bc9db8599b1a\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.600613 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnzsf\" (UniqueName: \"kubernetes.io/projected/b6cb035c-108b-40c2-82fa-bc9db8599b1a-kube-api-access-rnzsf\") pod \"glance-default-external-api-0\" (UID: \"b6cb035c-108b-40c2-82fa-bc9db8599b1a\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.600665 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-303d4d28-face-45fe-b658-7e12a6040182\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-303d4d28-face-45fe-b658-7e12a6040182\") pod \"glance-default-external-api-0\" (UID: \"b6cb035c-108b-40c2-82fa-bc9db8599b1a\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.600853 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6cb035c-108b-40c2-82fa-bc9db8599b1a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b6cb035c-108b-40c2-82fa-bc9db8599b1a\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.600881 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6cb035c-108b-40c2-82fa-bc9db8599b1a-config-data\") pod \"glance-default-external-api-0\" (UID: \"b6cb035c-108b-40c2-82fa-bc9db8599b1a\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.600895 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6cb035c-108b-40c2-82fa-bc9db8599b1a-logs\") pod \"glance-default-external-api-0\" (UID: \"b6cb035c-108b-40c2-82fa-bc9db8599b1a\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.600912 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6cb035c-108b-40c2-82fa-bc9db8599b1a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b6cb035c-108b-40c2-82fa-bc9db8599b1a\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.600978 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6cb035c-108b-40c2-82fa-bc9db8599b1a-scripts\") pod \"glance-default-external-api-0\" (UID: \"b6cb035c-108b-40c2-82fa-bc9db8599b1a\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.601046 4672 reconciler_common.go:293] "Volume detached for volume \"pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.703478 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-303d4d28-face-45fe-b658-7e12a6040182\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-303d4d28-face-45fe-b658-7e12a6040182\") pod \"glance-default-external-api-0\" (UID: \"b6cb035c-108b-40c2-82fa-bc9db8599b1a\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.704957 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6cb035c-108b-40c2-82fa-bc9db8599b1a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b6cb035c-108b-40c2-82fa-bc9db8599b1a\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.705307 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6cb035c-108b-40c2-82fa-bc9db8599b1a-config-data\") pod \"glance-default-external-api-0\" (UID: \"b6cb035c-108b-40c2-82fa-bc9db8599b1a\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.705812 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6cb035c-108b-40c2-82fa-bc9db8599b1a-logs\") pod 
\"glance-default-external-api-0\" (UID: \"b6cb035c-108b-40c2-82fa-bc9db8599b1a\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.706374 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6cb035c-108b-40c2-82fa-bc9db8599b1a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b6cb035c-108b-40c2-82fa-bc9db8599b1a\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.706210 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6cb035c-108b-40c2-82fa-bc9db8599b1a-logs\") pod \"glance-default-external-api-0\" (UID: \"b6cb035c-108b-40c2-82fa-bc9db8599b1a\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.708865 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6cb035c-108b-40c2-82fa-bc9db8599b1a-scripts\") pod \"glance-default-external-api-0\" (UID: \"b6cb035c-108b-40c2-82fa-bc9db8599b1a\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.709094 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b6cb035c-108b-40c2-82fa-bc9db8599b1a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b6cb035c-108b-40c2-82fa-bc9db8599b1a\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.709471 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnzsf\" (UniqueName: \"kubernetes.io/projected/b6cb035c-108b-40c2-82fa-bc9db8599b1a-kube-api-access-rnzsf\") pod \"glance-default-external-api-0\" (UID: 
\"b6cb035c-108b-40c2-82fa-bc9db8599b1a\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.709232 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6cb035c-108b-40c2-82fa-bc9db8599b1a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b6cb035c-108b-40c2-82fa-bc9db8599b1a\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.709401 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b6cb035c-108b-40c2-82fa-bc9db8599b1a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b6cb035c-108b-40c2-82fa-bc9db8599b1a\") " pod="openstack/glance-default-external-api-0" Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.709182 4672 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.710337 4672 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-303d4d28-face-45fe-b658-7e12a6040182\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-303d4d28-face-45fe-b658-7e12a6040182\") pod \"glance-default-external-api-0\" (UID: \"b6cb035c-108b-40c2-82fa-bc9db8599b1a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d02cc0527f533ee65b155f740f514c1487916eea5ba6e0a075365c01e7203db4/globalmount\"" pod="openstack/glance-default-external-api-0"
Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.713403 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6cb035c-108b-40c2-82fa-bc9db8599b1a-scripts\") pod \"glance-default-external-api-0\" (UID: \"b6cb035c-108b-40c2-82fa-bc9db8599b1a\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.714330 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6cb035c-108b-40c2-82fa-bc9db8599b1a-config-data\") pod \"glance-default-external-api-0\" (UID: \"b6cb035c-108b-40c2-82fa-bc9db8599b1a\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.714566 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6cb035c-108b-40c2-82fa-bc9db8599b1a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b6cb035c-108b-40c2-82fa-bc9db8599b1a\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.728999 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnzsf\" (UniqueName: \"kubernetes.io/projected/b6cb035c-108b-40c2-82fa-bc9db8599b1a-kube-api-access-rnzsf\") pod \"glance-default-external-api-0\" (UID: \"b6cb035c-108b-40c2-82fa-bc9db8599b1a\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.754385 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-303d4d28-face-45fe-b658-7e12a6040182\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-303d4d28-face-45fe-b658-7e12a6040182\") pod \"glance-default-external-api-0\" (UID: \"b6cb035c-108b-40c2-82fa-bc9db8599b1a\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:23:50 crc kubenswrapper[4672]: I0217 16:23:50.825894 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.419207 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.422178 4672 generic.go:334] "Generic (PLEG): container finished" podID="31275847-8cb0-4fe6-9a21-68c3f99727ed" containerID="b2b6ac28007988e899e118d5a61c650e0e43fa9761908deda1189f6f603b53db" exitCode=0
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.422233 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-txp6k" event={"ID":"31275847-8cb0-4fe6-9a21-68c3f99727ed","Type":"ContainerDied","Data":"b2b6ac28007988e899e118d5a61c650e0e43fa9761908deda1189f6f603b53db"}
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.443663 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fdb669fcc-nckw2" event={"ID":"4ec72261-e568-4e6d-83e7-aee39c008aab","Type":"ContainerStarted","Data":"6b9b058ffb60c57f7cf20d1ae869e53314675588f1f076a940daf371457e0622"}
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.444767 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fdb669fcc-nckw2" event={"ID":"4ec72261-e568-4e6d-83e7-aee39c008aab","Type":"ContainerStarted","Data":"0154bd3b20823bfae5abe9a8d2be4113635ea2b75020d704b5e87c5b4dda3d5b"}
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.444812 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6fdb669fcc-nckw2"
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.449424 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b745f78d8-8tmpn" event={"ID":"c7309196-390c-4dc4-b9a0-a88f48e270db","Type":"ContainerStarted","Data":"55a2f0e4b37b6e17d04f9873a319170827fcc78003e5c7c76ad8375346ca3b3e"}
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.449466 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b745f78d8-8tmpn" event={"ID":"c7309196-390c-4dc4-b9a0-a88f48e270db","Type":"ContainerStarted","Data":"5b2cbea1afc020385cf8f7fca1f19050ede9a7becdf554b76f685bc785707433"}
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.449477 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b745f78d8-8tmpn" event={"ID":"c7309196-390c-4dc4-b9a0-a88f48e270db","Type":"ContainerStarted","Data":"520785baedee2ef2049423c6c2023ede3985b18a9b0fd31c7881ac220ab8ab65"}
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.449646 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-b745f78d8-8tmpn"
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.475318 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"146627ed-88c2-4845-8f17-a52e47fbb924","Type":"ContainerDied","Data":"2ad23bb38fd105c773b5c0bb2bbf899a624618680b94caede8a21aff7b170163"}
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.475374 4672 scope.go:117] "RemoveContainer" containerID="7b3b373261bd88a8c8779d05ad1c5ddf94d60d586a6491caaebf805029599b89"
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.475814 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.484244 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6fdb669fcc-nckw2" podStartSLOduration=9.484224351 podStartE2EDuration="9.484224351s" podCreationTimestamp="2026-02-17 16:23:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:23:51.460127006 +0000 UTC m=+1240.214215748" watchObservedRunningTime="2026-02-17 16:23:51.484224351 +0000 UTC m=+1240.238313083"
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.484280 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4vtt8" event={"ID":"352f61db-51f9-425a-9ee2-78f681033626","Type":"ContainerStarted","Data":"30718458db68d0f429d661b7899a51b27814db48281672e25b55a8fceeeb4bc1"}
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.504554 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-b745f78d8-8tmpn" podStartSLOduration=11.504533106 podStartE2EDuration="11.504533106s" podCreationTimestamp="2026-02-17 16:23:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:23:51.488597246 +0000 UTC m=+1240.242685978" watchObservedRunningTime="2026-02-17 16:23:51.504533106 +0000 UTC m=+1240.258621838"
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.515827 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-4vtt8" podStartSLOduration=3.566182677 podStartE2EDuration="41.515812164s" podCreationTimestamp="2026-02-17 16:23:10 +0000 UTC" firstStartedPulling="2026-02-17 16:23:11.951398376 +0000 UTC m=+1200.705487108" lastFinishedPulling="2026-02-17 16:23:49.901027863 +0000 UTC m=+1238.655116595" observedRunningTime="2026-02-17 16:23:51.508873241 +0000 UTC m=+1240.262961973" watchObservedRunningTime="2026-02-17 16:23:51.515812164 +0000 UTC m=+1240.269900886"
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.589869 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.601940 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.614152 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.615671 4672 scope.go:117] "RemoveContainer" containerID="5cb4a444212489abaf460b09acc40aafefeb499efee0862a014083ecea168e37"
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.615761 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.628828 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.628902 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.667391 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.761418 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4640eeb0-bf75-4e1b-a291-964288b3ecb1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4640eeb0-bf75-4e1b-a291-964288b3ecb1\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.762924 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf2xh\" (UniqueName: \"kubernetes.io/projected/4640eeb0-bf75-4e1b-a291-964288b3ecb1-kube-api-access-xf2xh\") pod \"glance-default-internal-api-0\" (UID: \"4640eeb0-bf75-4e1b-a291-964288b3ecb1\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.762966 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7\") pod \"glance-default-internal-api-0\" (UID: \"4640eeb0-bf75-4e1b-a291-964288b3ecb1\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.763004 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4640eeb0-bf75-4e1b-a291-964288b3ecb1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4640eeb0-bf75-4e1b-a291-964288b3ecb1\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.763040 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4640eeb0-bf75-4e1b-a291-964288b3ecb1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4640eeb0-bf75-4e1b-a291-964288b3ecb1\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.763113 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4640eeb0-bf75-4e1b-a291-964288b3ecb1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4640eeb0-bf75-4e1b-a291-964288b3ecb1\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.763136 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4640eeb0-bf75-4e1b-a291-964288b3ecb1-logs\") pod \"glance-default-internal-api-0\" (UID: \"4640eeb0-bf75-4e1b-a291-964288b3ecb1\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.763187 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4640eeb0-bf75-4e1b-a291-964288b3ecb1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4640eeb0-bf75-4e1b-a291-964288b3ecb1\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.864330 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4640eeb0-bf75-4e1b-a291-964288b3ecb1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4640eeb0-bf75-4e1b-a291-964288b3ecb1\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.865374 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4640eeb0-bf75-4e1b-a291-964288b3ecb1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4640eeb0-bf75-4e1b-a291-964288b3ecb1\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.865704 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf2xh\" (UniqueName: \"kubernetes.io/projected/4640eeb0-bf75-4e1b-a291-964288b3ecb1-kube-api-access-xf2xh\") pod \"glance-default-internal-api-0\" (UID: \"4640eeb0-bf75-4e1b-a291-964288b3ecb1\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.865736 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7\") pod \"glance-default-internal-api-0\" (UID: \"4640eeb0-bf75-4e1b-a291-964288b3ecb1\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.865765 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4640eeb0-bf75-4e1b-a291-964288b3ecb1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4640eeb0-bf75-4e1b-a291-964288b3ecb1\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.865790 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4640eeb0-bf75-4e1b-a291-964288b3ecb1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4640eeb0-bf75-4e1b-a291-964288b3ecb1\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.865828 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4640eeb0-bf75-4e1b-a291-964288b3ecb1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4640eeb0-bf75-4e1b-a291-964288b3ecb1\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.865845 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4640eeb0-bf75-4e1b-a291-964288b3ecb1-logs\") pod \"glance-default-internal-api-0\" (UID: \"4640eeb0-bf75-4e1b-a291-964288b3ecb1\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.866207 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4640eeb0-bf75-4e1b-a291-964288b3ecb1-logs\") pod \"glance-default-internal-api-0\" (UID: \"4640eeb0-bf75-4e1b-a291-964288b3ecb1\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.870803 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4640eeb0-bf75-4e1b-a291-964288b3ecb1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4640eeb0-bf75-4e1b-a291-964288b3ecb1\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.871026 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4640eeb0-bf75-4e1b-a291-964288b3ecb1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4640eeb0-bf75-4e1b-a291-964288b3ecb1\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.884219 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4640eeb0-bf75-4e1b-a291-964288b3ecb1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4640eeb0-bf75-4e1b-a291-964288b3ecb1\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.884654 4672 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.884684 4672 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7\") pod \"glance-default-internal-api-0\" (UID: \"4640eeb0-bf75-4e1b-a291-964288b3ecb1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f554e28ce6891cf21f3390de6086eedf40118aa722324f2faa0d19b98e9f8a02/globalmount\"" pod="openstack/glance-default-internal-api-0"
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.884816 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4640eeb0-bf75-4e1b-a291-964288b3ecb1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4640eeb0-bf75-4e1b-a291-964288b3ecb1\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.885091 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4640eeb0-bf75-4e1b-a291-964288b3ecb1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4640eeb0-bf75-4e1b-a291-964288b3ecb1\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.888653 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf2xh\" (UniqueName: \"kubernetes.io/projected/4640eeb0-bf75-4e1b-a291-964288b3ecb1-kube-api-access-xf2xh\") pod \"glance-default-internal-api-0\" (UID: \"4640eeb0-bf75-4e1b-a291-964288b3ecb1\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.956727 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7\") pod \"glance-default-internal-api-0\" (UID: \"4640eeb0-bf75-4e1b-a291-964288b3ecb1\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.963294 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="146627ed-88c2-4845-8f17-a52e47fbb924" path="/var/lib/kubelet/pods/146627ed-88c2-4845-8f17-a52e47fbb924/volumes"
Feb 17 16:23:51 crc kubenswrapper[4672]: I0217 16:23:51.964009 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efd8ef8f-d736-47d8-a135-c076b3c97b33" path="/var/lib/kubelet/pods/efd8ef8f-d736-47d8-a135-c076b3c97b33/volumes"
Feb 17 16:23:52 crc kubenswrapper[4672]: I0217 16:23:52.259074 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 17 16:23:52 crc kubenswrapper[4672]: I0217 16:23:52.500199 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b6cb035c-108b-40c2-82fa-bc9db8599b1a","Type":"ContainerStarted","Data":"bc0dd05b1c9dbb98013f4c0dba7ee9d268926ef1b38ce101cae5c17ac3f09a91"}
Feb 17 16:23:52 crc kubenswrapper[4672]: I0217 16:23:52.502673 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-txp6k" event={"ID":"31275847-8cb0-4fe6-9a21-68c3f99727ed","Type":"ContainerStarted","Data":"fdc9d579a10bd973eca9655d89287299c27ef0a941e3b099bd6219004d0850ab"}
Feb 17 16:23:52 crc kubenswrapper[4672]: I0217 16:23:52.502751 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84b966f6c9-txp6k"
Feb 17 16:23:52 crc kubenswrapper[4672]: I0217 16:23:52.539255 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84b966f6c9-txp6k" podStartSLOduration=12.539238873 podStartE2EDuration="12.539238873s" podCreationTimestamp="2026-02-17 16:23:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:23:52.528586603 +0000 UTC m=+1241.282675335" watchObservedRunningTime="2026-02-17 16:23:52.539238873 +0000 UTC m=+1241.293327605"
Feb 17 16:23:53 crc kubenswrapper[4672]: I0217 16:23:53.404957 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 17 16:23:53 crc kubenswrapper[4672]: I0217 16:23:53.530285 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b6cb035c-108b-40c2-82fa-bc9db8599b1a","Type":"ContainerStarted","Data":"e22babdf03c40fc0728d4b21ad9b7217ccecd7f2b1f505089b250125de3732cc"}
Feb 17 16:23:54 crc kubenswrapper[4672]: I0217 16:23:54.540303 4672 generic.go:334] "Generic (PLEG): container finished" podID="649147ca-1dbd-4260-8d7c-8077186059f1" containerID="fc791be504d7919ae381e82cce9626958016ce9136a99d702f5f1a07fd157209" exitCode=0
Feb 17 16:23:54 crc kubenswrapper[4672]: I0217 16:23:54.540399 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-w8fzc" event={"ID":"649147ca-1dbd-4260-8d7c-8077186059f1","Type":"ContainerDied","Data":"fc791be504d7919ae381e82cce9626958016ce9136a99d702f5f1a07fd157209"}
Feb 17 16:23:54 crc kubenswrapper[4672]: I0217 16:23:54.547291 4672 generic.go:334] "Generic (PLEG): container finished" podID="1d098964-5b23-460e-bb88-42ed525b84ed" containerID="5e465adee39fc2fcc0779a67dab0cd184e31994bcad65a8ca61dfbe0edcf675c" exitCode=0
Feb 17 16:23:54 crc kubenswrapper[4672]: I0217 16:23:54.547331 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-72t4g" event={"ID":"1d098964-5b23-460e-bb88-42ed525b84ed","Type":"ContainerDied","Data":"5e465adee39fc2fcc0779a67dab0cd184e31994bcad65a8ca61dfbe0edcf675c"}
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.267353 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-w8fzc"
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.274897 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-72t4g"
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.355666 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d098964-5b23-460e-bb88-42ed525b84ed-combined-ca-bundle\") pod \"1d098964-5b23-460e-bb88-42ed525b84ed\" (UID: \"1d098964-5b23-460e-bb88-42ed525b84ed\") "
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.356032 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d098964-5b23-460e-bb88-42ed525b84ed-config-data\") pod \"1d098964-5b23-460e-bb88-42ed525b84ed\" (UID: \"1d098964-5b23-460e-bb88-42ed525b84ed\") "
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.356088 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1d098964-5b23-460e-bb88-42ed525b84ed-credential-keys\") pod \"1d098964-5b23-460e-bb88-42ed525b84ed\" (UID: \"1d098964-5b23-460e-bb88-42ed525b84ed\") "
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.356137 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/649147ca-1dbd-4260-8d7c-8077186059f1-scripts\") pod \"649147ca-1dbd-4260-8d7c-8077186059f1\" (UID: \"649147ca-1dbd-4260-8d7c-8077186059f1\") "
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.356597 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/649147ca-1dbd-4260-8d7c-8077186059f1-logs\") pod \"649147ca-1dbd-4260-8d7c-8077186059f1\" (UID: \"649147ca-1dbd-4260-8d7c-8077186059f1\") "
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.356640 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/649147ca-1dbd-4260-8d7c-8077186059f1-config-data\") pod \"649147ca-1dbd-4260-8d7c-8077186059f1\" (UID: \"649147ca-1dbd-4260-8d7c-8077186059f1\") "
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.356669 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1d098964-5b23-460e-bb88-42ed525b84ed-fernet-keys\") pod \"1d098964-5b23-460e-bb88-42ed525b84ed\" (UID: \"1d098964-5b23-460e-bb88-42ed525b84ed\") "
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.356712 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b77sp\" (UniqueName: \"kubernetes.io/projected/1d098964-5b23-460e-bb88-42ed525b84ed-kube-api-access-b77sp\") pod \"1d098964-5b23-460e-bb88-42ed525b84ed\" (UID: \"1d098964-5b23-460e-bb88-42ed525b84ed\") "
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.356748 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/649147ca-1dbd-4260-8d7c-8077186059f1-combined-ca-bundle\") pod \"649147ca-1dbd-4260-8d7c-8077186059f1\" (UID: \"649147ca-1dbd-4260-8d7c-8077186059f1\") "
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.356769 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d098964-5b23-460e-bb88-42ed525b84ed-scripts\") pod \"1d098964-5b23-460e-bb88-42ed525b84ed\" (UID: \"1d098964-5b23-460e-bb88-42ed525b84ed\") "
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.356792 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqk9l\" (UniqueName: \"kubernetes.io/projected/649147ca-1dbd-4260-8d7c-8077186059f1-kube-api-access-gqk9l\") pod \"649147ca-1dbd-4260-8d7c-8077186059f1\" (UID: \"649147ca-1dbd-4260-8d7c-8077186059f1\") "
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.356927 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/649147ca-1dbd-4260-8d7c-8077186059f1-logs" (OuterVolumeSpecName: "logs") pod "649147ca-1dbd-4260-8d7c-8077186059f1" (UID: "649147ca-1dbd-4260-8d7c-8077186059f1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.357958 4672 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/649147ca-1dbd-4260-8d7c-8077186059f1-logs\") on node \"crc\" DevicePath \"\""
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.361699 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d098964-5b23-460e-bb88-42ed525b84ed-kube-api-access-b77sp" (OuterVolumeSpecName: "kube-api-access-b77sp") pod "1d098964-5b23-460e-bb88-42ed525b84ed" (UID: "1d098964-5b23-460e-bb88-42ed525b84ed"). InnerVolumeSpecName "kube-api-access-b77sp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.362113 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d098964-5b23-460e-bb88-42ed525b84ed-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "1d098964-5b23-460e-bb88-42ed525b84ed" (UID: "1d098964-5b23-460e-bb88-42ed525b84ed"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.362325 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/649147ca-1dbd-4260-8d7c-8077186059f1-scripts" (OuterVolumeSpecName: "scripts") pod "649147ca-1dbd-4260-8d7c-8077186059f1" (UID: "649147ca-1dbd-4260-8d7c-8077186059f1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.363752 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/649147ca-1dbd-4260-8d7c-8077186059f1-kube-api-access-gqk9l" (OuterVolumeSpecName: "kube-api-access-gqk9l") pod "649147ca-1dbd-4260-8d7c-8077186059f1" (UID: "649147ca-1dbd-4260-8d7c-8077186059f1"). InnerVolumeSpecName "kube-api-access-gqk9l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.364049 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d098964-5b23-460e-bb88-42ed525b84ed-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1d098964-5b23-460e-bb88-42ed525b84ed" (UID: "1d098964-5b23-460e-bb88-42ed525b84ed"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.364358 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d098964-5b23-460e-bb88-42ed525b84ed-scripts" (OuterVolumeSpecName: "scripts") pod "1d098964-5b23-460e-bb88-42ed525b84ed" (UID: "1d098964-5b23-460e-bb88-42ed525b84ed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.385179 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/649147ca-1dbd-4260-8d7c-8077186059f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "649147ca-1dbd-4260-8d7c-8077186059f1" (UID: "649147ca-1dbd-4260-8d7c-8077186059f1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.392489 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d098964-5b23-460e-bb88-42ed525b84ed-config-data" (OuterVolumeSpecName: "config-data") pod "1d098964-5b23-460e-bb88-42ed525b84ed" (UID: "1d098964-5b23-460e-bb88-42ed525b84ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.394615 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d098964-5b23-460e-bb88-42ed525b84ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d098964-5b23-460e-bb88-42ed525b84ed" (UID: "1d098964-5b23-460e-bb88-42ed525b84ed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.394821 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/649147ca-1dbd-4260-8d7c-8077186059f1-config-data" (OuterVolumeSpecName: "config-data") pod "649147ca-1dbd-4260-8d7c-8077186059f1" (UID: "649147ca-1dbd-4260-8d7c-8077186059f1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.460302 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d098964-5b23-460e-bb88-42ed525b84ed-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.460349 4672 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1d098964-5b23-460e-bb88-42ed525b84ed-credential-keys\") on node \"crc\" DevicePath \"\""
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.460363 4672 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/649147ca-1dbd-4260-8d7c-8077186059f1-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.460372 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/649147ca-1dbd-4260-8d7c-8077186059f1-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.460381 4672 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1d098964-5b23-460e-bb88-42ed525b84ed-fernet-keys\") on node \"crc\" DevicePath \"\""
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.460391 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b77sp\" (UniqueName: \"kubernetes.io/projected/1d098964-5b23-460e-bb88-42ed525b84ed-kube-api-access-b77sp\") on node \"crc\" DevicePath \"\""
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.460400 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/649147ca-1dbd-4260-8d7c-8077186059f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.460408 4672 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d098964-5b23-460e-bb88-42ed525b84ed-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.460417 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqk9l\" (UniqueName: \"kubernetes.io/projected/649147ca-1dbd-4260-8d7c-8077186059f1-kube-api-access-gqk9l\") on node \"crc\" DevicePath \"\""
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.460426 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d098964-5b23-460e-bb88-42ed525b84ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.570574 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4640eeb0-bf75-4e1b-a291-964288b3ecb1","Type":"ContainerStarted","Data":"3f782825d9bd6901043337bcedaaa165090660c76e21967f3120c4f6ef5ced19"}
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.572568 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-w8fzc"
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.572620 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-w8fzc" event={"ID":"649147ca-1dbd-4260-8d7c-8077186059f1","Type":"ContainerDied","Data":"b7dac8b3c065a1556a1deca5f7762ad45931a2b9753c71c66743afd96f2ae5a4"}
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.572658 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7dac8b3c065a1556a1deca5f7762ad45931a2b9753c71c66743afd96f2ae5a4"
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.581320 4672 generic.go:334] "Generic (PLEG): container finished" podID="4a68f7c0-293c-434c-8e63-c6855ba4d822" containerID="0a8ca4ed628b1d7249393b041a19d78fb3e129bea47af3a23f562581869139a0" exitCode=0
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.581417 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-5spsr" event={"ID":"4a68f7c0-293c-434c-8e63-c6855ba4d822","Type":"ContainerDied","Data":"0a8ca4ed628b1d7249393b041a19d78fb3e129bea47af3a23f562581869139a0"}
Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.591204 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-72t4g" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.591390 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-72t4g" event={"ID":"1d098964-5b23-460e-bb88-42ed525b84ed","Type":"ContainerDied","Data":"99084dc0ba32fb7505f9ab6f248d09d65ad9a5ee6235ba4dd9ab7c12fbe275ab"} Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.592669 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99084dc0ba32fb7505f9ab6f248d09d65ad9a5ee6235ba4dd9ab7c12fbe275ab" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.601842 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46707458-3c2e-4f29-bda9-dd5ebc8b60cb","Type":"ContainerStarted","Data":"6fb6a5a4b16113704b1f0309d5a7bfb303e1d71c6eb57e4be9ca0efebde8268e"} Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.687557 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6cc66b5c9b-dpjsg"] Feb 17 16:23:56 crc kubenswrapper[4672]: E0217 16:23:56.688018 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d098964-5b23-460e-bb88-42ed525b84ed" containerName="keystone-bootstrap" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.688040 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d098964-5b23-460e-bb88-42ed525b84ed" containerName="keystone-bootstrap" Feb 17 16:23:56 crc kubenswrapper[4672]: E0217 16:23:56.688065 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="649147ca-1dbd-4260-8d7c-8077186059f1" containerName="placement-db-sync" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.688074 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="649147ca-1dbd-4260-8d7c-8077186059f1" containerName="placement-db-sync" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.688330 4672 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1d098964-5b23-460e-bb88-42ed525b84ed" containerName="keystone-bootstrap" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.688363 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="649147ca-1dbd-4260-8d7c-8077186059f1" containerName="placement-db-sync" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.690995 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6cc66b5c9b-dpjsg" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.697994 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.698352 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-xn68c" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.706200 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.707902 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.721014 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.749874 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6cc66b5c9b-dpjsg"] Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.779772 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-865bd5d96d-f924s"] Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.781447 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-865bd5d96d-f924s" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.784982 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.785239 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.785437 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.785772 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.786202 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.788146 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2hf9g" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.806929 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-865bd5d96d-f924s"] Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.870134 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12d8802b-c666-49da-ac6f-cd885f46f9f0-combined-ca-bundle\") pod \"placement-6cc66b5c9b-dpjsg\" (UID: \"12d8802b-c666-49da-ac6f-cd885f46f9f0\") " pod="openstack/placement-6cc66b5c9b-dpjsg" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.870407 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6379a6-6265-4eed-8c5c-cc4f8991bf7a-combined-ca-bundle\") pod \"keystone-865bd5d96d-f924s\" (UID: \"4d6379a6-6265-4eed-8c5c-cc4f8991bf7a\") " 
pod="openstack/keystone-865bd5d96d-f924s" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.870572 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4d6379a6-6265-4eed-8c5c-cc4f8991bf7a-credential-keys\") pod \"keystone-865bd5d96d-f924s\" (UID: \"4d6379a6-6265-4eed-8c5c-cc4f8991bf7a\") " pod="openstack/keystone-865bd5d96d-f924s" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.870679 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12d8802b-c666-49da-ac6f-cd885f46f9f0-config-data\") pod \"placement-6cc66b5c9b-dpjsg\" (UID: \"12d8802b-c666-49da-ac6f-cd885f46f9f0\") " pod="openstack/placement-6cc66b5c9b-dpjsg" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.870773 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfkfn\" (UniqueName: \"kubernetes.io/projected/12d8802b-c666-49da-ac6f-cd885f46f9f0-kube-api-access-vfkfn\") pod \"placement-6cc66b5c9b-dpjsg\" (UID: \"12d8802b-c666-49da-ac6f-cd885f46f9f0\") " pod="openstack/placement-6cc66b5c9b-dpjsg" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.870869 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d6379a6-6265-4eed-8c5c-cc4f8991bf7a-internal-tls-certs\") pod \"keystone-865bd5d96d-f924s\" (UID: \"4d6379a6-6265-4eed-8c5c-cc4f8991bf7a\") " pod="openstack/keystone-865bd5d96d-f924s" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.870964 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/12d8802b-c666-49da-ac6f-cd885f46f9f0-internal-tls-certs\") pod \"placement-6cc66b5c9b-dpjsg\" (UID: 
\"12d8802b-c666-49da-ac6f-cd885f46f9f0\") " pod="openstack/placement-6cc66b5c9b-dpjsg" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.871057 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12d8802b-c666-49da-ac6f-cd885f46f9f0-logs\") pod \"placement-6cc66b5c9b-dpjsg\" (UID: \"12d8802b-c666-49da-ac6f-cd885f46f9f0\") " pod="openstack/placement-6cc66b5c9b-dpjsg" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.871145 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/12d8802b-c666-49da-ac6f-cd885f46f9f0-public-tls-certs\") pod \"placement-6cc66b5c9b-dpjsg\" (UID: \"12d8802b-c666-49da-ac6f-cd885f46f9f0\") " pod="openstack/placement-6cc66b5c9b-dpjsg" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.871235 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4d6379a6-6265-4eed-8c5c-cc4f8991bf7a-fernet-keys\") pod \"keystone-865bd5d96d-f924s\" (UID: \"4d6379a6-6265-4eed-8c5c-cc4f8991bf7a\") " pod="openstack/keystone-865bd5d96d-f924s" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.871317 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7jnl\" (UniqueName: \"kubernetes.io/projected/4d6379a6-6265-4eed-8c5c-cc4f8991bf7a-kube-api-access-h7jnl\") pod \"keystone-865bd5d96d-f924s\" (UID: \"4d6379a6-6265-4eed-8c5c-cc4f8991bf7a\") " pod="openstack/keystone-865bd5d96d-f924s" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.871437 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d6379a6-6265-4eed-8c5c-cc4f8991bf7a-scripts\") pod \"keystone-865bd5d96d-f924s\" (UID: 
\"4d6379a6-6265-4eed-8c5c-cc4f8991bf7a\") " pod="openstack/keystone-865bd5d96d-f924s" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.871553 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d6379a6-6265-4eed-8c5c-cc4f8991bf7a-public-tls-certs\") pod \"keystone-865bd5d96d-f924s\" (UID: \"4d6379a6-6265-4eed-8c5c-cc4f8991bf7a\") " pod="openstack/keystone-865bd5d96d-f924s" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.871661 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d6379a6-6265-4eed-8c5c-cc4f8991bf7a-config-data\") pod \"keystone-865bd5d96d-f924s\" (UID: \"4d6379a6-6265-4eed-8c5c-cc4f8991bf7a\") " pod="openstack/keystone-865bd5d96d-f924s" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.871761 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12d8802b-c666-49da-ac6f-cd885f46f9f0-scripts\") pod \"placement-6cc66b5c9b-dpjsg\" (UID: \"12d8802b-c666-49da-ac6f-cd885f46f9f0\") " pod="openstack/placement-6cc66b5c9b-dpjsg" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.977809 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4d6379a6-6265-4eed-8c5c-cc4f8991bf7a-credential-keys\") pod \"keystone-865bd5d96d-f924s\" (UID: \"4d6379a6-6265-4eed-8c5c-cc4f8991bf7a\") " pod="openstack/keystone-865bd5d96d-f924s" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.977864 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12d8802b-c666-49da-ac6f-cd885f46f9f0-config-data\") pod \"placement-6cc66b5c9b-dpjsg\" (UID: \"12d8802b-c666-49da-ac6f-cd885f46f9f0\") " 
pod="openstack/placement-6cc66b5c9b-dpjsg" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.977889 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfkfn\" (UniqueName: \"kubernetes.io/projected/12d8802b-c666-49da-ac6f-cd885f46f9f0-kube-api-access-vfkfn\") pod \"placement-6cc66b5c9b-dpjsg\" (UID: \"12d8802b-c666-49da-ac6f-cd885f46f9f0\") " pod="openstack/placement-6cc66b5c9b-dpjsg" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.977908 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d6379a6-6265-4eed-8c5c-cc4f8991bf7a-internal-tls-certs\") pod \"keystone-865bd5d96d-f924s\" (UID: \"4d6379a6-6265-4eed-8c5c-cc4f8991bf7a\") " pod="openstack/keystone-865bd5d96d-f924s" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.977933 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/12d8802b-c666-49da-ac6f-cd885f46f9f0-internal-tls-certs\") pod \"placement-6cc66b5c9b-dpjsg\" (UID: \"12d8802b-c666-49da-ac6f-cd885f46f9f0\") " pod="openstack/placement-6cc66b5c9b-dpjsg" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.977954 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12d8802b-c666-49da-ac6f-cd885f46f9f0-logs\") pod \"placement-6cc66b5c9b-dpjsg\" (UID: \"12d8802b-c666-49da-ac6f-cd885f46f9f0\") " pod="openstack/placement-6cc66b5c9b-dpjsg" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.977974 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/12d8802b-c666-49da-ac6f-cd885f46f9f0-public-tls-certs\") pod \"placement-6cc66b5c9b-dpjsg\" (UID: \"12d8802b-c666-49da-ac6f-cd885f46f9f0\") " pod="openstack/placement-6cc66b5c9b-dpjsg" Feb 17 16:23:56 crc 
kubenswrapper[4672]: I0217 16:23:56.977996 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4d6379a6-6265-4eed-8c5c-cc4f8991bf7a-fernet-keys\") pod \"keystone-865bd5d96d-f924s\" (UID: \"4d6379a6-6265-4eed-8c5c-cc4f8991bf7a\") " pod="openstack/keystone-865bd5d96d-f924s" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.978016 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7jnl\" (UniqueName: \"kubernetes.io/projected/4d6379a6-6265-4eed-8c5c-cc4f8991bf7a-kube-api-access-h7jnl\") pod \"keystone-865bd5d96d-f924s\" (UID: \"4d6379a6-6265-4eed-8c5c-cc4f8991bf7a\") " pod="openstack/keystone-865bd5d96d-f924s" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.978054 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d6379a6-6265-4eed-8c5c-cc4f8991bf7a-scripts\") pod \"keystone-865bd5d96d-f924s\" (UID: \"4d6379a6-6265-4eed-8c5c-cc4f8991bf7a\") " pod="openstack/keystone-865bd5d96d-f924s" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.978083 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d6379a6-6265-4eed-8c5c-cc4f8991bf7a-public-tls-certs\") pod \"keystone-865bd5d96d-f924s\" (UID: \"4d6379a6-6265-4eed-8c5c-cc4f8991bf7a\") " pod="openstack/keystone-865bd5d96d-f924s" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.978111 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d6379a6-6265-4eed-8c5c-cc4f8991bf7a-config-data\") pod \"keystone-865bd5d96d-f924s\" (UID: \"4d6379a6-6265-4eed-8c5c-cc4f8991bf7a\") " pod="openstack/keystone-865bd5d96d-f924s" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.978157 4672 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12d8802b-c666-49da-ac6f-cd885f46f9f0-scripts\") pod \"placement-6cc66b5c9b-dpjsg\" (UID: \"12d8802b-c666-49da-ac6f-cd885f46f9f0\") " pod="openstack/placement-6cc66b5c9b-dpjsg" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.978193 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12d8802b-c666-49da-ac6f-cd885f46f9f0-combined-ca-bundle\") pod \"placement-6cc66b5c9b-dpjsg\" (UID: \"12d8802b-c666-49da-ac6f-cd885f46f9f0\") " pod="openstack/placement-6cc66b5c9b-dpjsg" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.978212 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6379a6-6265-4eed-8c5c-cc4f8991bf7a-combined-ca-bundle\") pod \"keystone-865bd5d96d-f924s\" (UID: \"4d6379a6-6265-4eed-8c5c-cc4f8991bf7a\") " pod="openstack/keystone-865bd5d96d-f924s" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.980632 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12d8802b-c666-49da-ac6f-cd885f46f9f0-logs\") pod \"placement-6cc66b5c9b-dpjsg\" (UID: \"12d8802b-c666-49da-ac6f-cd885f46f9f0\") " pod="openstack/placement-6cc66b5c9b-dpjsg" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.984332 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d6379a6-6265-4eed-8c5c-cc4f8991bf7a-public-tls-certs\") pod \"keystone-865bd5d96d-f924s\" (UID: \"4d6379a6-6265-4eed-8c5c-cc4f8991bf7a\") " pod="openstack/keystone-865bd5d96d-f924s" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.984985 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4d6379a6-6265-4eed-8c5c-cc4f8991bf7a-combined-ca-bundle\") pod \"keystone-865bd5d96d-f924s\" (UID: \"4d6379a6-6265-4eed-8c5c-cc4f8991bf7a\") " pod="openstack/keystone-865bd5d96d-f924s" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.987329 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12d8802b-c666-49da-ac6f-cd885f46f9f0-combined-ca-bundle\") pod \"placement-6cc66b5c9b-dpjsg\" (UID: \"12d8802b-c666-49da-ac6f-cd885f46f9f0\") " pod="openstack/placement-6cc66b5c9b-dpjsg" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.987935 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d6379a6-6265-4eed-8c5c-cc4f8991bf7a-config-data\") pod \"keystone-865bd5d96d-f924s\" (UID: \"4d6379a6-6265-4eed-8c5c-cc4f8991bf7a\") " pod="openstack/keystone-865bd5d96d-f924s" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.992179 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/12d8802b-c666-49da-ac6f-cd885f46f9f0-internal-tls-certs\") pod \"placement-6cc66b5c9b-dpjsg\" (UID: \"12d8802b-c666-49da-ac6f-cd885f46f9f0\") " pod="openstack/placement-6cc66b5c9b-dpjsg" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.992331 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/12d8802b-c666-49da-ac6f-cd885f46f9f0-public-tls-certs\") pod \"placement-6cc66b5c9b-dpjsg\" (UID: \"12d8802b-c666-49da-ac6f-cd885f46f9f0\") " pod="openstack/placement-6cc66b5c9b-dpjsg" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.992379 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d6379a6-6265-4eed-8c5c-cc4f8991bf7a-internal-tls-certs\") pod 
\"keystone-865bd5d96d-f924s\" (UID: \"4d6379a6-6265-4eed-8c5c-cc4f8991bf7a\") " pod="openstack/keystone-865bd5d96d-f924s" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.994868 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12d8802b-c666-49da-ac6f-cd885f46f9f0-scripts\") pod \"placement-6cc66b5c9b-dpjsg\" (UID: \"12d8802b-c666-49da-ac6f-cd885f46f9f0\") " pod="openstack/placement-6cc66b5c9b-dpjsg" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.995205 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d6379a6-6265-4eed-8c5c-cc4f8991bf7a-scripts\") pod \"keystone-865bd5d96d-f924s\" (UID: \"4d6379a6-6265-4eed-8c5c-cc4f8991bf7a\") " pod="openstack/keystone-865bd5d96d-f924s" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.995258 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4d6379a6-6265-4eed-8c5c-cc4f8991bf7a-credential-keys\") pod \"keystone-865bd5d96d-f924s\" (UID: \"4d6379a6-6265-4eed-8c5c-cc4f8991bf7a\") " pod="openstack/keystone-865bd5d96d-f924s" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.995438 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12d8802b-c666-49da-ac6f-cd885f46f9f0-config-data\") pod \"placement-6cc66b5c9b-dpjsg\" (UID: \"12d8802b-c666-49da-ac6f-cd885f46f9f0\") " pod="openstack/placement-6cc66b5c9b-dpjsg" Feb 17 16:23:56 crc kubenswrapper[4672]: I0217 16:23:56.998681 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7jnl\" (UniqueName: \"kubernetes.io/projected/4d6379a6-6265-4eed-8c5c-cc4f8991bf7a-kube-api-access-h7jnl\") pod \"keystone-865bd5d96d-f924s\" (UID: \"4d6379a6-6265-4eed-8c5c-cc4f8991bf7a\") " pod="openstack/keystone-865bd5d96d-f924s" Feb 17 16:23:56 crc 
kubenswrapper[4672]: I0217 16:23:56.999488 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfkfn\" (UniqueName: \"kubernetes.io/projected/12d8802b-c666-49da-ac6f-cd885f46f9f0-kube-api-access-vfkfn\") pod \"placement-6cc66b5c9b-dpjsg\" (UID: \"12d8802b-c666-49da-ac6f-cd885f46f9f0\") " pod="openstack/placement-6cc66b5c9b-dpjsg" Feb 17 16:23:57 crc kubenswrapper[4672]: I0217 16:23:57.000941 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4d6379a6-6265-4eed-8c5c-cc4f8991bf7a-fernet-keys\") pod \"keystone-865bd5d96d-f924s\" (UID: \"4d6379a6-6265-4eed-8c5c-cc4f8991bf7a\") " pod="openstack/keystone-865bd5d96d-f924s" Feb 17 16:23:57 crc kubenswrapper[4672]: I0217 16:23:57.022600 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6cc66b5c9b-dpjsg" Feb 17 16:23:57 crc kubenswrapper[4672]: I0217 16:23:57.111669 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-865bd5d96d-f924s" Feb 17 16:23:57 crc kubenswrapper[4672]: I0217 16:23:57.579666 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6cc66b5c9b-dpjsg"] Feb 17 16:23:57 crc kubenswrapper[4672]: I0217 16:23:57.626820 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4640eeb0-bf75-4e1b-a291-964288b3ecb1","Type":"ContainerStarted","Data":"5d2d94cfbfd60ba5d23a98ac38e4f21d1c6147c03fd72322065afe51d172d515"} Feb 17 16:23:57 crc kubenswrapper[4672]: I0217 16:23:57.628857 4672 generic.go:334] "Generic (PLEG): container finished" podID="352f61db-51f9-425a-9ee2-78f681033626" containerID="30718458db68d0f429d661b7899a51b27814db48281672e25b55a8fceeeb4bc1" exitCode=0 Feb 17 16:23:57 crc kubenswrapper[4672]: I0217 16:23:57.628963 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4vtt8" event={"ID":"352f61db-51f9-425a-9ee2-78f681033626","Type":"ContainerDied","Data":"30718458db68d0f429d661b7899a51b27814db48281672e25b55a8fceeeb4bc1"} Feb 17 16:23:57 crc kubenswrapper[4672]: I0217 16:23:57.633885 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b6cb035c-108b-40c2-82fa-bc9db8599b1a","Type":"ContainerStarted","Data":"6b9470433ecb636a8d0409bc7e66e74931bba2d3c00b15e7ca5d37a5ed3f8849"} Feb 17 16:23:57 crc kubenswrapper[4672]: I0217 16:23:57.642467 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6cc66b5c9b-dpjsg" event={"ID":"12d8802b-c666-49da-ac6f-cd885f46f9f0","Type":"ContainerStarted","Data":"c20f78e17a84156701a25d44ee9b660d3f52157fce1cdd02d91e84d872e46a78"} Feb 17 16:23:57 crc kubenswrapper[4672]: I0217 16:23:57.690259 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-865bd5d96d-f924s"] Feb 17 16:23:57 crc kubenswrapper[4672]: I0217 16:23:57.692670 4672 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.692650444 podStartE2EDuration="7.692650444s" podCreationTimestamp="2026-02-17 16:23:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:23:57.66710535 +0000 UTC m=+1246.421194082" watchObservedRunningTime="2026-02-17 16:23:57.692650444 +0000 UTC m=+1246.446739176" Feb 17 16:23:58 crc kubenswrapper[4672]: I0217 16:23:58.122307 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-5spsr" Feb 17 16:23:58 crc kubenswrapper[4672]: I0217 16:23:58.207582 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4a68f7c0-293c-434c-8e63-c6855ba4d822-db-sync-config-data\") pod \"4a68f7c0-293c-434c-8e63-c6855ba4d822\" (UID: \"4a68f7c0-293c-434c-8e63-c6855ba4d822\") " Feb 17 16:23:58 crc kubenswrapper[4672]: I0217 16:23:58.207624 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a68f7c0-293c-434c-8e63-c6855ba4d822-combined-ca-bundle\") pod \"4a68f7c0-293c-434c-8e63-c6855ba4d822\" (UID: \"4a68f7c0-293c-434c-8e63-c6855ba4d822\") " Feb 17 16:23:58 crc kubenswrapper[4672]: I0217 16:23:58.207857 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5vsh\" (UniqueName: \"kubernetes.io/projected/4a68f7c0-293c-434c-8e63-c6855ba4d822-kube-api-access-l5vsh\") pod \"4a68f7c0-293c-434c-8e63-c6855ba4d822\" (UID: \"4a68f7c0-293c-434c-8e63-c6855ba4d822\") " Feb 17 16:23:58 crc kubenswrapper[4672]: I0217 16:23:58.212178 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a68f7c0-293c-434c-8e63-c6855ba4d822-db-sync-config-data" (OuterVolumeSpecName: 
"db-sync-config-data") pod "4a68f7c0-293c-434c-8e63-c6855ba4d822" (UID: "4a68f7c0-293c-434c-8e63-c6855ba4d822"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:23:58 crc kubenswrapper[4672]: I0217 16:23:58.225726 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a68f7c0-293c-434c-8e63-c6855ba4d822-kube-api-access-l5vsh" (OuterVolumeSpecName: "kube-api-access-l5vsh") pod "4a68f7c0-293c-434c-8e63-c6855ba4d822" (UID: "4a68f7c0-293c-434c-8e63-c6855ba4d822"). InnerVolumeSpecName "kube-api-access-l5vsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:58 crc kubenswrapper[4672]: I0217 16:23:58.233489 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a68f7c0-293c-434c-8e63-c6855ba4d822-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a68f7c0-293c-434c-8e63-c6855ba4d822" (UID: "4a68f7c0-293c-434c-8e63-c6855ba4d822"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:23:58 crc kubenswrapper[4672]: I0217 16:23:58.312716 4672 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4a68f7c0-293c-434c-8e63-c6855ba4d822-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:58 crc kubenswrapper[4672]: I0217 16:23:58.312872 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a68f7c0-293c-434c-8e63-c6855ba4d822-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:58 crc kubenswrapper[4672]: I0217 16:23:58.312890 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5vsh\" (UniqueName: \"kubernetes.io/projected/4a68f7c0-293c-434c-8e63-c6855ba4d822-kube-api-access-l5vsh\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:58 crc kubenswrapper[4672]: I0217 16:23:58.649374 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-5spsr" event={"ID":"4a68f7c0-293c-434c-8e63-c6855ba4d822","Type":"ContainerDied","Data":"d02223497d52a7f9ee5c5571b7b0fb08e9c3a5249684423458b897aa179f3739"} Feb 17 16:23:58 crc kubenswrapper[4672]: I0217 16:23:58.649431 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d02223497d52a7f9ee5c5571b7b0fb08e9c3a5249684423458b897aa179f3739" Feb 17 16:23:58 crc kubenswrapper[4672]: I0217 16:23:58.649506 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-5spsr" Feb 17 16:23:58 crc kubenswrapper[4672]: I0217 16:23:58.653284 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-865bd5d96d-f924s" event={"ID":"4d6379a6-6265-4eed-8c5c-cc4f8991bf7a","Type":"ContainerStarted","Data":"ccd0713097fa7170ba6c4eac7246ed33036626d560f1e0a5acfa42b78f15ab11"} Feb 17 16:23:58 crc kubenswrapper[4672]: I0217 16:23:58.658979 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4640eeb0-bf75-4e1b-a291-964288b3ecb1","Type":"ContainerStarted","Data":"dfdd97b715abd1e15945c0e67790e24f6002e4e322d35ac6e781335fb439eb6e"} Feb 17 16:23:58 crc kubenswrapper[4672]: I0217 16:23:58.883109 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5498949f87-rqtb7"] Feb 17 16:23:58 crc kubenswrapper[4672]: E0217 16:23:58.890320 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a68f7c0-293c-434c-8e63-c6855ba4d822" containerName="barbican-db-sync" Feb 17 16:23:58 crc kubenswrapper[4672]: I0217 16:23:58.890523 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a68f7c0-293c-434c-8e63-c6855ba4d822" containerName="barbican-db-sync" Feb 17 16:23:58 crc kubenswrapper[4672]: I0217 16:23:58.890845 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a68f7c0-293c-434c-8e63-c6855ba4d822" containerName="barbican-db-sync" Feb 17 16:23:58 crc kubenswrapper[4672]: I0217 16:23:58.896491 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5498949f87-rqtb7" Feb 17 16:23:58 crc kubenswrapper[4672]: I0217 16:23:58.898418 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-647bb874b-c9skw"] Feb 17 16:23:58 crc kubenswrapper[4672]: I0217 16:23:58.900571 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-647bb874b-c9skw" Feb 17 16:23:58 crc kubenswrapper[4672]: I0217 16:23:58.902886 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-865d7" Feb 17 16:23:58 crc kubenswrapper[4672]: I0217 16:23:58.903268 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 17 16:23:58 crc kubenswrapper[4672]: I0217 16:23:58.912319 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 17 16:23:58 crc kubenswrapper[4672]: I0217 16:23:58.912555 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 17 16:23:58 crc kubenswrapper[4672]: I0217 16:23:58.961563 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-647bb874b-c9skw"] Feb 17 16:23:58 crc kubenswrapper[4672]: I0217 16:23:58.968212 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5498949f87-rqtb7"] Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.004746 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-txp6k"] Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.005145 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84b966f6c9-txp6k" podUID="31275847-8cb0-4fe6-9a21-68c3f99727ed" containerName="dnsmasq-dns" containerID="cri-o://fdc9d579a10bd973eca9655d89287299c27ef0a941e3b099bd6219004d0850ab" gracePeriod=10 Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.034225 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-84b966f6c9-txp6k" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.036080 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-6ckzb\" (UniqueName: \"kubernetes.io/projected/8359abf8-58ae-423d-9de3-b4488cffe247-kube-api-access-6ckzb\") pod \"barbican-worker-5498949f87-rqtb7\" (UID: \"8359abf8-58ae-423d-9de3-b4488cffe247\") " pod="openstack/barbican-worker-5498949f87-rqtb7" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.036171 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vskg\" (UniqueName: \"kubernetes.io/projected/9e060962-a1f6-47fd-af63-b9b7f8bfd863-kube-api-access-4vskg\") pod \"barbican-keystone-listener-647bb874b-c9skw\" (UID: \"9e060962-a1f6-47fd-af63-b9b7f8bfd863\") " pod="openstack/barbican-keystone-listener-647bb874b-c9skw" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.036229 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8359abf8-58ae-423d-9de3-b4488cffe247-logs\") pod \"barbican-worker-5498949f87-rqtb7\" (UID: \"8359abf8-58ae-423d-9de3-b4488cffe247\") " pod="openstack/barbican-worker-5498949f87-rqtb7" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.036254 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9e060962-a1f6-47fd-af63-b9b7f8bfd863-config-data-custom\") pod \"barbican-keystone-listener-647bb874b-c9skw\" (UID: \"9e060962-a1f6-47fd-af63-b9b7f8bfd863\") " pod="openstack/barbican-keystone-listener-647bb874b-c9skw" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.036283 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e060962-a1f6-47fd-af63-b9b7f8bfd863-combined-ca-bundle\") pod \"barbican-keystone-listener-647bb874b-c9skw\" (UID: \"9e060962-a1f6-47fd-af63-b9b7f8bfd863\") " pod="openstack/barbican-keystone-listener-647bb874b-c9skw" Feb 
17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.036315 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8359abf8-58ae-423d-9de3-b4488cffe247-combined-ca-bundle\") pod \"barbican-worker-5498949f87-rqtb7\" (UID: \"8359abf8-58ae-423d-9de3-b4488cffe247\") " pod="openstack/barbican-worker-5498949f87-rqtb7" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.036335 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e060962-a1f6-47fd-af63-b9b7f8bfd863-config-data\") pod \"barbican-keystone-listener-647bb874b-c9skw\" (UID: \"9e060962-a1f6-47fd-af63-b9b7f8bfd863\") " pod="openstack/barbican-keystone-listener-647bb874b-c9skw" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.036373 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e060962-a1f6-47fd-af63-b9b7f8bfd863-logs\") pod \"barbican-keystone-listener-647bb874b-c9skw\" (UID: \"9e060962-a1f6-47fd-af63-b9b7f8bfd863\") " pod="openstack/barbican-keystone-listener-647bb874b-c9skw" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.036433 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8359abf8-58ae-423d-9de3-b4488cffe247-config-data\") pod \"barbican-worker-5498949f87-rqtb7\" (UID: \"8359abf8-58ae-423d-9de3-b4488cffe247\") " pod="openstack/barbican-worker-5498949f87-rqtb7" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.036522 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8359abf8-58ae-423d-9de3-b4488cffe247-config-data-custom\") pod \"barbican-worker-5498949f87-rqtb7\" (UID: 
\"8359abf8-58ae-423d-9de3-b4488cffe247\") " pod="openstack/barbican-worker-5498949f87-rqtb7" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.126981 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-pg4m8"] Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.129190 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-pg4m8" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.142312 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vskg\" (UniqueName: \"kubernetes.io/projected/9e060962-a1f6-47fd-af63-b9b7f8bfd863-kube-api-access-4vskg\") pod \"barbican-keystone-listener-647bb874b-c9skw\" (UID: \"9e060962-a1f6-47fd-af63-b9b7f8bfd863\") " pod="openstack/barbican-keystone-listener-647bb874b-c9skw" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.142370 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8359abf8-58ae-423d-9de3-b4488cffe247-logs\") pod \"barbican-worker-5498949f87-rqtb7\" (UID: \"8359abf8-58ae-423d-9de3-b4488cffe247\") " pod="openstack/barbican-worker-5498949f87-rqtb7" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.142421 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9e060962-a1f6-47fd-af63-b9b7f8bfd863-config-data-custom\") pod \"barbican-keystone-listener-647bb874b-c9skw\" (UID: \"9e060962-a1f6-47fd-af63-b9b7f8bfd863\") " pod="openstack/barbican-keystone-listener-647bb874b-c9skw" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.142440 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e060962-a1f6-47fd-af63-b9b7f8bfd863-combined-ca-bundle\") pod \"barbican-keystone-listener-647bb874b-c9skw\" (UID: 
\"9e060962-a1f6-47fd-af63-b9b7f8bfd863\") " pod="openstack/barbican-keystone-listener-647bb874b-c9skw" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.142477 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8359abf8-58ae-423d-9de3-b4488cffe247-combined-ca-bundle\") pod \"barbican-worker-5498949f87-rqtb7\" (UID: \"8359abf8-58ae-423d-9de3-b4488cffe247\") " pod="openstack/barbican-worker-5498949f87-rqtb7" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.142528 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e060962-a1f6-47fd-af63-b9b7f8bfd863-config-data\") pod \"barbican-keystone-listener-647bb874b-c9skw\" (UID: \"9e060962-a1f6-47fd-af63-b9b7f8bfd863\") " pod="openstack/barbican-keystone-listener-647bb874b-c9skw" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.142582 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e060962-a1f6-47fd-af63-b9b7f8bfd863-logs\") pod \"barbican-keystone-listener-647bb874b-c9skw\" (UID: \"9e060962-a1f6-47fd-af63-b9b7f8bfd863\") " pod="openstack/barbican-keystone-listener-647bb874b-c9skw" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.142645 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8359abf8-58ae-423d-9de3-b4488cffe247-config-data\") pod \"barbican-worker-5498949f87-rqtb7\" (UID: \"8359abf8-58ae-423d-9de3-b4488cffe247\") " pod="openstack/barbican-worker-5498949f87-rqtb7" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.142718 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8359abf8-58ae-423d-9de3-b4488cffe247-config-data-custom\") pod 
\"barbican-worker-5498949f87-rqtb7\" (UID: \"8359abf8-58ae-423d-9de3-b4488cffe247\") " pod="openstack/barbican-worker-5498949f87-rqtb7" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.142766 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ckzb\" (UniqueName: \"kubernetes.io/projected/8359abf8-58ae-423d-9de3-b4488cffe247-kube-api-access-6ckzb\") pod \"barbican-worker-5498949f87-rqtb7\" (UID: \"8359abf8-58ae-423d-9de3-b4488cffe247\") " pod="openstack/barbican-worker-5498949f87-rqtb7" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.143552 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8359abf8-58ae-423d-9de3-b4488cffe247-logs\") pod \"barbican-worker-5498949f87-rqtb7\" (UID: \"8359abf8-58ae-423d-9de3-b4488cffe247\") " pod="openstack/barbican-worker-5498949f87-rqtb7" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.144733 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-pg4m8"] Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.157268 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8359abf8-58ae-423d-9de3-b4488cffe247-config-data\") pod \"barbican-worker-5498949f87-rqtb7\" (UID: \"8359abf8-58ae-423d-9de3-b4488cffe247\") " pod="openstack/barbican-worker-5498949f87-rqtb7" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.157572 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e060962-a1f6-47fd-af63-b9b7f8bfd863-logs\") pod \"barbican-keystone-listener-647bb874b-c9skw\" (UID: \"9e060962-a1f6-47fd-af63-b9b7f8bfd863\") " pod="openstack/barbican-keystone-listener-647bb874b-c9skw" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.161082 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8359abf8-58ae-423d-9de3-b4488cffe247-config-data-custom\") pod \"barbican-worker-5498949f87-rqtb7\" (UID: \"8359abf8-58ae-423d-9de3-b4488cffe247\") " pod="openstack/barbican-worker-5498949f87-rqtb7" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.164007 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9e060962-a1f6-47fd-af63-b9b7f8bfd863-config-data-custom\") pod \"barbican-keystone-listener-647bb874b-c9skw\" (UID: \"9e060962-a1f6-47fd-af63-b9b7f8bfd863\") " pod="openstack/barbican-keystone-listener-647bb874b-c9skw" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.164322 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e060962-a1f6-47fd-af63-b9b7f8bfd863-combined-ca-bundle\") pod \"barbican-keystone-listener-647bb874b-c9skw\" (UID: \"9e060962-a1f6-47fd-af63-b9b7f8bfd863\") " pod="openstack/barbican-keystone-listener-647bb874b-c9skw" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.173120 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8359abf8-58ae-423d-9de3-b4488cffe247-combined-ca-bundle\") pod \"barbican-worker-5498949f87-rqtb7\" (UID: \"8359abf8-58ae-423d-9de3-b4488cffe247\") " pod="openstack/barbican-worker-5498949f87-rqtb7" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.181804 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e060962-a1f6-47fd-af63-b9b7f8bfd863-config-data\") pod \"barbican-keystone-listener-647bb874b-c9skw\" (UID: \"9e060962-a1f6-47fd-af63-b9b7f8bfd863\") " pod="openstack/barbican-keystone-listener-647bb874b-c9skw" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.188633 4672 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/barbican-api-6fd59c5bf8-d6vtf"] Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.190345 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6fd59c5bf8-d6vtf" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.195749 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vskg\" (UniqueName: \"kubernetes.io/projected/9e060962-a1f6-47fd-af63-b9b7f8bfd863-kube-api-access-4vskg\") pod \"barbican-keystone-listener-647bb874b-c9skw\" (UID: \"9e060962-a1f6-47fd-af63-b9b7f8bfd863\") " pod="openstack/barbican-keystone-listener-647bb874b-c9skw" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.212421 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ckzb\" (UniqueName: \"kubernetes.io/projected/8359abf8-58ae-423d-9de3-b4488cffe247-kube-api-access-6ckzb\") pod \"barbican-worker-5498949f87-rqtb7\" (UID: \"8359abf8-58ae-423d-9de3-b4488cffe247\") " pod="openstack/barbican-worker-5498949f87-rqtb7" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.220474 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.222124 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6fd59c5bf8-d6vtf"] Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.254406 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ef4f1cd-9771-44fb-aa56-42fcb86e51fb-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-pg4m8\" (UID: \"9ef4f1cd-9771-44fb-aa56-42fcb86e51fb\") " pod="openstack/dnsmasq-dns-75c8ddd69c-pg4m8" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.254492 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/9ef4f1cd-9771-44fb-aa56-42fcb86e51fb-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-pg4m8\" (UID: \"9ef4f1cd-9771-44fb-aa56-42fcb86e51fb\") " pod="openstack/dnsmasq-dns-75c8ddd69c-pg4m8" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.254526 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95nxm\" (UniqueName: \"kubernetes.io/projected/9ef4f1cd-9771-44fb-aa56-42fcb86e51fb-kube-api-access-95nxm\") pod \"dnsmasq-dns-75c8ddd69c-pg4m8\" (UID: \"9ef4f1cd-9771-44fb-aa56-42fcb86e51fb\") " pod="openstack/dnsmasq-dns-75c8ddd69c-pg4m8" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.254598 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ef4f1cd-9771-44fb-aa56-42fcb86e51fb-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-pg4m8\" (UID: \"9ef4f1cd-9771-44fb-aa56-42fcb86e51fb\") " pod="openstack/dnsmasq-dns-75c8ddd69c-pg4m8" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.254647 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ef4f1cd-9771-44fb-aa56-42fcb86e51fb-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-pg4m8\" (UID: \"9ef4f1cd-9771-44fb-aa56-42fcb86e51fb\") " pod="openstack/dnsmasq-dns-75c8ddd69c-pg4m8" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.254832 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ef4f1cd-9771-44fb-aa56-42fcb86e51fb-config\") pod \"dnsmasq-dns-75c8ddd69c-pg4m8\" (UID: \"9ef4f1cd-9771-44fb-aa56-42fcb86e51fb\") " pod="openstack/dnsmasq-dns-75c8ddd69c-pg4m8" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.255072 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5498949f87-rqtb7" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.305627 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-647bb874b-c9skw" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.356645 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ef4f1cd-9771-44fb-aa56-42fcb86e51fb-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-pg4m8\" (UID: \"9ef4f1cd-9771-44fb-aa56-42fcb86e51fb\") " pod="openstack/dnsmasq-dns-75c8ddd69c-pg4m8" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.357029 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87bfbbe6-1b72-4f9b-bb5e-ef6560acab76-config-data-custom\") pod \"barbican-api-6fd59c5bf8-d6vtf\" (UID: \"87bfbbe6-1b72-4f9b-bb5e-ef6560acab76\") " pod="openstack/barbican-api-6fd59c5bf8-d6vtf" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.357097 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ef4f1cd-9771-44fb-aa56-42fcb86e51fb-config\") pod \"dnsmasq-dns-75c8ddd69c-pg4m8\" (UID: \"9ef4f1cd-9771-44fb-aa56-42fcb86e51fb\") " pod="openstack/dnsmasq-dns-75c8ddd69c-pg4m8" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.357169 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87bfbbe6-1b72-4f9b-bb5e-ef6560acab76-config-data\") pod \"barbican-api-6fd59c5bf8-d6vtf\" (UID: \"87bfbbe6-1b72-4f9b-bb5e-ef6560acab76\") " pod="openstack/barbican-api-6fd59c5bf8-d6vtf" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.357203 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ef4f1cd-9771-44fb-aa56-42fcb86e51fb-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-pg4m8\" (UID: \"9ef4f1cd-9771-44fb-aa56-42fcb86e51fb\") " pod="openstack/dnsmasq-dns-75c8ddd69c-pg4m8" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.357235 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ef4f1cd-9771-44fb-aa56-42fcb86e51fb-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-pg4m8\" (UID: \"9ef4f1cd-9771-44fb-aa56-42fcb86e51fb\") " pod="openstack/dnsmasq-dns-75c8ddd69c-pg4m8" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.357252 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95nxm\" (UniqueName: \"kubernetes.io/projected/9ef4f1cd-9771-44fb-aa56-42fcb86e51fb-kube-api-access-95nxm\") pod \"dnsmasq-dns-75c8ddd69c-pg4m8\" (UID: \"9ef4f1cd-9771-44fb-aa56-42fcb86e51fb\") " pod="openstack/dnsmasq-dns-75c8ddd69c-pg4m8" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.357283 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ef4f1cd-9771-44fb-aa56-42fcb86e51fb-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-pg4m8\" (UID: \"9ef4f1cd-9771-44fb-aa56-42fcb86e51fb\") " pod="openstack/dnsmasq-dns-75c8ddd69c-pg4m8" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.357309 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87bfbbe6-1b72-4f9b-bb5e-ef6560acab76-combined-ca-bundle\") pod \"barbican-api-6fd59c5bf8-d6vtf\" (UID: \"87bfbbe6-1b72-4f9b-bb5e-ef6560acab76\") " pod="openstack/barbican-api-6fd59c5bf8-d6vtf" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.357329 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/87bfbbe6-1b72-4f9b-bb5e-ef6560acab76-logs\") pod \"barbican-api-6fd59c5bf8-d6vtf\" (UID: \"87bfbbe6-1b72-4f9b-bb5e-ef6560acab76\") " pod="openstack/barbican-api-6fd59c5bf8-d6vtf" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.357350 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pflrn\" (UniqueName: \"kubernetes.io/projected/87bfbbe6-1b72-4f9b-bb5e-ef6560acab76-kube-api-access-pflrn\") pod \"barbican-api-6fd59c5bf8-d6vtf\" (UID: \"87bfbbe6-1b72-4f9b-bb5e-ef6560acab76\") " pod="openstack/barbican-api-6fd59c5bf8-d6vtf" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.358201 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ef4f1cd-9771-44fb-aa56-42fcb86e51fb-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-pg4m8\" (UID: \"9ef4f1cd-9771-44fb-aa56-42fcb86e51fb\") " pod="openstack/dnsmasq-dns-75c8ddd69c-pg4m8" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.358829 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ef4f1cd-9771-44fb-aa56-42fcb86e51fb-config\") pod \"dnsmasq-dns-75c8ddd69c-pg4m8\" (UID: \"9ef4f1cd-9771-44fb-aa56-42fcb86e51fb\") " pod="openstack/dnsmasq-dns-75c8ddd69c-pg4m8" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.359665 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ef4f1cd-9771-44fb-aa56-42fcb86e51fb-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-pg4m8\" (UID: \"9ef4f1cd-9771-44fb-aa56-42fcb86e51fb\") " pod="openstack/dnsmasq-dns-75c8ddd69c-pg4m8" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.360089 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/9ef4f1cd-9771-44fb-aa56-42fcb86e51fb-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-pg4m8\" (UID: \"9ef4f1cd-9771-44fb-aa56-42fcb86e51fb\") " pod="openstack/dnsmasq-dns-75c8ddd69c-pg4m8" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.362777 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ef4f1cd-9771-44fb-aa56-42fcb86e51fb-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-pg4m8\" (UID: \"9ef4f1cd-9771-44fb-aa56-42fcb86e51fb\") " pod="openstack/dnsmasq-dns-75c8ddd69c-pg4m8" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.383883 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95nxm\" (UniqueName: \"kubernetes.io/projected/9ef4f1cd-9771-44fb-aa56-42fcb86e51fb-kube-api-access-95nxm\") pod \"dnsmasq-dns-75c8ddd69c-pg4m8\" (UID: \"9ef4f1cd-9771-44fb-aa56-42fcb86e51fb\") " pod="openstack/dnsmasq-dns-75c8ddd69c-pg4m8" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.388906 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-4vtt8" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.459621 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87bfbbe6-1b72-4f9b-bb5e-ef6560acab76-config-data\") pod \"barbican-api-6fd59c5bf8-d6vtf\" (UID: \"87bfbbe6-1b72-4f9b-bb5e-ef6560acab76\") " pod="openstack/barbican-api-6fd59c5bf8-d6vtf" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.459746 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87bfbbe6-1b72-4f9b-bb5e-ef6560acab76-combined-ca-bundle\") pod \"barbican-api-6fd59c5bf8-d6vtf\" (UID: \"87bfbbe6-1b72-4f9b-bb5e-ef6560acab76\") " pod="openstack/barbican-api-6fd59c5bf8-d6vtf" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.459767 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87bfbbe6-1b72-4f9b-bb5e-ef6560acab76-logs\") pod \"barbican-api-6fd59c5bf8-d6vtf\" (UID: \"87bfbbe6-1b72-4f9b-bb5e-ef6560acab76\") " pod="openstack/barbican-api-6fd59c5bf8-d6vtf" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.459784 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pflrn\" (UniqueName: \"kubernetes.io/projected/87bfbbe6-1b72-4f9b-bb5e-ef6560acab76-kube-api-access-pflrn\") pod \"barbican-api-6fd59c5bf8-d6vtf\" (UID: \"87bfbbe6-1b72-4f9b-bb5e-ef6560acab76\") " pod="openstack/barbican-api-6fd59c5bf8-d6vtf" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.459836 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87bfbbe6-1b72-4f9b-bb5e-ef6560acab76-config-data-custom\") pod \"barbican-api-6fd59c5bf8-d6vtf\" (UID: \"87bfbbe6-1b72-4f9b-bb5e-ef6560acab76\") " 
pod="openstack/barbican-api-6fd59c5bf8-d6vtf" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.462286 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87bfbbe6-1b72-4f9b-bb5e-ef6560acab76-logs\") pod \"barbican-api-6fd59c5bf8-d6vtf\" (UID: \"87bfbbe6-1b72-4f9b-bb5e-ef6560acab76\") " pod="openstack/barbican-api-6fd59c5bf8-d6vtf" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.466052 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87bfbbe6-1b72-4f9b-bb5e-ef6560acab76-combined-ca-bundle\") pod \"barbican-api-6fd59c5bf8-d6vtf\" (UID: \"87bfbbe6-1b72-4f9b-bb5e-ef6560acab76\") " pod="openstack/barbican-api-6fd59c5bf8-d6vtf" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.468613 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87bfbbe6-1b72-4f9b-bb5e-ef6560acab76-config-data-custom\") pod \"barbican-api-6fd59c5bf8-d6vtf\" (UID: \"87bfbbe6-1b72-4f9b-bb5e-ef6560acab76\") " pod="openstack/barbican-api-6fd59c5bf8-d6vtf" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.480180 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87bfbbe6-1b72-4f9b-bb5e-ef6560acab76-config-data\") pod \"barbican-api-6fd59c5bf8-d6vtf\" (UID: \"87bfbbe6-1b72-4f9b-bb5e-ef6560acab76\") " pod="openstack/barbican-api-6fd59c5bf8-d6vtf" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.481429 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pflrn\" (UniqueName: \"kubernetes.io/projected/87bfbbe6-1b72-4f9b-bb5e-ef6560acab76-kube-api-access-pflrn\") pod \"barbican-api-6fd59c5bf8-d6vtf\" (UID: \"87bfbbe6-1b72-4f9b-bb5e-ef6560acab76\") " pod="openstack/barbican-api-6fd59c5bf8-d6vtf" Feb 17 16:23:59 crc kubenswrapper[4672]: 
I0217 16:23:59.564139 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/352f61db-51f9-425a-9ee2-78f681033626-combined-ca-bundle\") pod \"352f61db-51f9-425a-9ee2-78f681033626\" (UID: \"352f61db-51f9-425a-9ee2-78f681033626\") " Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.564226 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/352f61db-51f9-425a-9ee2-78f681033626-etc-machine-id\") pod \"352f61db-51f9-425a-9ee2-78f681033626\" (UID: \"352f61db-51f9-425a-9ee2-78f681033626\") " Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.564271 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjvkc\" (UniqueName: \"kubernetes.io/projected/352f61db-51f9-425a-9ee2-78f681033626-kube-api-access-wjvkc\") pod \"352f61db-51f9-425a-9ee2-78f681033626\" (UID: \"352f61db-51f9-425a-9ee2-78f681033626\") " Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.564336 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/352f61db-51f9-425a-9ee2-78f681033626-scripts\") pod \"352f61db-51f9-425a-9ee2-78f681033626\" (UID: \"352f61db-51f9-425a-9ee2-78f681033626\") " Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.564388 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/352f61db-51f9-425a-9ee2-78f681033626-config-data\") pod \"352f61db-51f9-425a-9ee2-78f681033626\" (UID: \"352f61db-51f9-425a-9ee2-78f681033626\") " Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.564427 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/352f61db-51f9-425a-9ee2-78f681033626-db-sync-config-data\") pod 
\"352f61db-51f9-425a-9ee2-78f681033626\" (UID: \"352f61db-51f9-425a-9ee2-78f681033626\") " Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.574400 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/352f61db-51f9-425a-9ee2-78f681033626-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "352f61db-51f9-425a-9ee2-78f681033626" (UID: "352f61db-51f9-425a-9ee2-78f681033626"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.594330 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/352f61db-51f9-425a-9ee2-78f681033626-kube-api-access-wjvkc" (OuterVolumeSpecName: "kube-api-access-wjvkc") pod "352f61db-51f9-425a-9ee2-78f681033626" (UID: "352f61db-51f9-425a-9ee2-78f681033626"). InnerVolumeSpecName "kube-api-access-wjvkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.605211 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/352f61db-51f9-425a-9ee2-78f681033626-scripts" (OuterVolumeSpecName: "scripts") pod "352f61db-51f9-425a-9ee2-78f681033626" (UID: "352f61db-51f9-425a-9ee2-78f681033626"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.605388 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/352f61db-51f9-425a-9ee2-78f681033626-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "352f61db-51f9-425a-9ee2-78f681033626" (UID: "352f61db-51f9-425a-9ee2-78f681033626"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.605908 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-pg4m8" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.668930 4672 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/352f61db-51f9-425a-9ee2-78f681033626-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.668959 4672 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/352f61db-51f9-425a-9ee2-78f681033626-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.668970 4672 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/352f61db-51f9-425a-9ee2-78f681033626-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.668980 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjvkc\" (UniqueName: \"kubernetes.io/projected/352f61db-51f9-425a-9ee2-78f681033626-kube-api-access-wjvkc\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.675718 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/352f61db-51f9-425a-9ee2-78f681033626-config-data" (OuterVolumeSpecName: "config-data") pod "352f61db-51f9-425a-9ee2-78f681033626" (UID: "352f61db-51f9-425a-9ee2-78f681033626"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.677097 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6fd59c5bf8-d6vtf" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.724826 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-865bd5d96d-f924s" event={"ID":"4d6379a6-6265-4eed-8c5c-cc4f8991bf7a","Type":"ContainerStarted","Data":"a2bdb8aa52496cfec56c4ff7c5850fee803a76a4d4d7413ca9cb93daf8ef7eb3"} Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.725962 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-865bd5d96d-f924s" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.737588 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/352f61db-51f9-425a-9ee2-78f681033626-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "352f61db-51f9-425a-9ee2-78f681033626" (UID: "352f61db-51f9-425a-9ee2-78f681033626"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.738193 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4vtt8" event={"ID":"352f61db-51f9-425a-9ee2-78f681033626","Type":"ContainerDied","Data":"0d61b937fd288a2a36bbc9e01bcbc763358acd480681b440550eaa5a1657e184"} Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.738216 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d61b937fd288a2a36bbc9e01bcbc763358acd480681b440550eaa5a1657e184" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.738261 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-4vtt8" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.761083 4672 generic.go:334] "Generic (PLEG): container finished" podID="31275847-8cb0-4fe6-9a21-68c3f99727ed" containerID="fdc9d579a10bd973eca9655d89287299c27ef0a941e3b099bd6219004d0850ab" exitCode=0 Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.761178 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-txp6k" event={"ID":"31275847-8cb0-4fe6-9a21-68c3f99727ed","Type":"ContainerDied","Data":"fdc9d579a10bd973eca9655d89287299c27ef0a941e3b099bd6219004d0850ab"} Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.770171 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/352f61db-51f9-425a-9ee2-78f681033626-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.770196 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/352f61db-51f9-425a-9ee2-78f681033626-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.776999 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6cc66b5c9b-dpjsg" event={"ID":"12d8802b-c666-49da-ac6f-cd885f46f9f0","Type":"ContainerStarted","Data":"5f2b62ca220bf20e868e16ffc57ed2e7cb6589839f91d9a7b7d458cb26f03373"} Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.777023 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6cc66b5c9b-dpjsg" event={"ID":"12d8802b-c666-49da-ac6f-cd885f46f9f0","Type":"ContainerStarted","Data":"12b2de443d40f1eb4dd2fd2516396777577d97a923427a4beab259f2e173d6bf"} Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.777036 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6cc66b5c9b-dpjsg" Feb 17 16:23:59 crc 
kubenswrapper[4672]: I0217 16:23:59.777045 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6cc66b5c9b-dpjsg" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.803180 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-txp6k" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.812731 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.812711754 podStartE2EDuration="8.812711754s" podCreationTimestamp="2026-02-17 16:23:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:23:59.803616784 +0000 UTC m=+1248.557705516" watchObservedRunningTime="2026-02-17 16:23:59.812711754 +0000 UTC m=+1248.566800486" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.813295 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-865bd5d96d-f924s" podStartSLOduration=3.813289649 podStartE2EDuration="3.813289649s" podCreationTimestamp="2026-02-17 16:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:23:59.755964138 +0000 UTC m=+1248.510052870" watchObservedRunningTime="2026-02-17 16:23:59.813289649 +0000 UTC m=+1248.567378391" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.873563 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/31275847-8cb0-4fe6-9a21-68c3f99727ed-ovsdbserver-nb\") pod \"31275847-8cb0-4fe6-9a21-68c3f99727ed\" (UID: \"31275847-8cb0-4fe6-9a21-68c3f99727ed\") " Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.873661 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/31275847-8cb0-4fe6-9a21-68c3f99727ed-ovsdbserver-sb\") pod \"31275847-8cb0-4fe6-9a21-68c3f99727ed\" (UID: \"31275847-8cb0-4fe6-9a21-68c3f99727ed\") " Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.873680 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/31275847-8cb0-4fe6-9a21-68c3f99727ed-dns-swift-storage-0\") pod \"31275847-8cb0-4fe6-9a21-68c3f99727ed\" (UID: \"31275847-8cb0-4fe6-9a21-68c3f99727ed\") " Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.873713 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31275847-8cb0-4fe6-9a21-68c3f99727ed-dns-svc\") pod \"31275847-8cb0-4fe6-9a21-68c3f99727ed\" (UID: \"31275847-8cb0-4fe6-9a21-68c3f99727ed\") " Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.873772 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31275847-8cb0-4fe6-9a21-68c3f99727ed-config\") pod \"31275847-8cb0-4fe6-9a21-68c3f99727ed\" (UID: \"31275847-8cb0-4fe6-9a21-68c3f99727ed\") " Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.873812 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gt8r\" (UniqueName: \"kubernetes.io/projected/31275847-8cb0-4fe6-9a21-68c3f99727ed-kube-api-access-2gt8r\") pod \"31275847-8cb0-4fe6-9a21-68c3f99727ed\" (UID: \"31275847-8cb0-4fe6-9a21-68c3f99727ed\") " Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.925665 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6cc66b5c9b-dpjsg" podStartSLOduration=3.925643281 podStartE2EDuration="3.925643281s" podCreationTimestamp="2026-02-17 16:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:23:59.849170025 +0000 UTC m=+1248.603258757" watchObservedRunningTime="2026-02-17 16:23:59.925643281 +0000 UTC m=+1248.679732013" Feb 17 16:23:59 crc kubenswrapper[4672]: I0217 16:23:59.983144 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31275847-8cb0-4fe6-9a21-68c3f99727ed-kube-api-access-2gt8r" (OuterVolumeSpecName: "kube-api-access-2gt8r") pod "31275847-8cb0-4fe6-9a21-68c3f99727ed" (UID: "31275847-8cb0-4fe6-9a21-68c3f99727ed"). InnerVolumeSpecName "kube-api-access-2gt8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.006234 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31275847-8cb0-4fe6-9a21-68c3f99727ed-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "31275847-8cb0-4fe6-9a21-68c3f99727ed" (UID: "31275847-8cb0-4fe6-9a21-68c3f99727ed"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.007309 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31275847-8cb0-4fe6-9a21-68c3f99727ed-config" (OuterVolumeSpecName: "config") pod "31275847-8cb0-4fe6-9a21-68c3f99727ed" (UID: "31275847-8cb0-4fe6-9a21-68c3f99727ed"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.026762 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31275847-8cb0-4fe6-9a21-68c3f99727ed-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "31275847-8cb0-4fe6-9a21-68c3f99727ed" (UID: "31275847-8cb0-4fe6-9a21-68c3f99727ed"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.036075 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31275847-8cb0-4fe6-9a21-68c3f99727ed-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "31275847-8cb0-4fe6-9a21-68c3f99727ed" (UID: "31275847-8cb0-4fe6-9a21-68c3f99727ed"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.042705 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31275847-8cb0-4fe6-9a21-68c3f99727ed-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "31275847-8cb0-4fe6-9a21-68c3f99727ed" (UID: "31275847-8cb0-4fe6-9a21-68c3f99727ed"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.086329 4672 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/31275847-8cb0-4fe6-9a21-68c3f99727ed-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.086616 4672 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/31275847-8cb0-4fe6-9a21-68c3f99727ed-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.086837 4672 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/31275847-8cb0-4fe6-9a21-68c3f99727ed-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.086918 4672 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31275847-8cb0-4fe6-9a21-68c3f99727ed-dns-svc\") on node 
\"crc\" DevicePath \"\"" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.087032 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31275847-8cb0-4fe6-9a21-68c3f99727ed-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.087119 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gt8r\" (UniqueName: \"kubernetes.io/projected/31275847-8cb0-4fe6-9a21-68c3f99727ed-kube-api-access-2gt8r\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.148433 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 16:24:00 crc kubenswrapper[4672]: E0217 16:24:00.148959 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31275847-8cb0-4fe6-9a21-68c3f99727ed" containerName="dnsmasq-dns" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.149318 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="31275847-8cb0-4fe6-9a21-68c3f99727ed" containerName="dnsmasq-dns" Feb 17 16:24:00 crc kubenswrapper[4672]: E0217 16:24:00.149387 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="352f61db-51f9-425a-9ee2-78f681033626" containerName="cinder-db-sync" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.149436 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="352f61db-51f9-425a-9ee2-78f681033626" containerName="cinder-db-sync" Feb 17 16:24:00 crc kubenswrapper[4672]: E0217 16:24:00.149499 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31275847-8cb0-4fe6-9a21-68c3f99727ed" containerName="init" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.149580 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="31275847-8cb0-4fe6-9a21-68c3f99727ed" containerName="init" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.149859 4672 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="352f61db-51f9-425a-9ee2-78f681033626" containerName="cinder-db-sync" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.149922 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="31275847-8cb0-4fe6-9a21-68c3f99727ed" containerName="dnsmasq-dns" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.150937 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.151013 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-647bb874b-c9skw"] Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.151252 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5498949f87-rqtb7"] Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.151390 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.159770 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-hfmxf" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.170148 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.170435 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.170435 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.183138 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-pg4m8"] Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.193266 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-mlrbg"] Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 
16:24:00.195139 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-mlrbg" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.222134 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-mlrbg"] Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.257361 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.258873 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.263143 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.276192 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.301747 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nq6j\" (UniqueName: \"kubernetes.io/projected/4141331e-2369-48a5-9f23-15b35887e53b-kube-api-access-5nq6j\") pod \"dnsmasq-dns-5784cf869f-mlrbg\" (UID: \"4141331e-2369-48a5-9f23-15b35887e53b\") " pod="openstack/dnsmasq-dns-5784cf869f-mlrbg" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.301810 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ttq2\" (UniqueName: \"kubernetes.io/projected/00e5bb32-954a-444f-b6d5-74e4c519d0c1-kube-api-access-9ttq2\") pod \"cinder-scheduler-0\" (UID: \"00e5bb32-954a-444f-b6d5-74e4c519d0c1\") " pod="openstack/cinder-scheduler-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.301834 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4141331e-2369-48a5-9f23-15b35887e53b-config\") pod \"dnsmasq-dns-5784cf869f-mlrbg\" (UID: \"4141331e-2369-48a5-9f23-15b35887e53b\") " pod="openstack/dnsmasq-dns-5784cf869f-mlrbg" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.301872 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/00e5bb32-954a-444f-b6d5-74e4c519d0c1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"00e5bb32-954a-444f-b6d5-74e4c519d0c1\") " pod="openstack/cinder-scheduler-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.301906 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4141331e-2369-48a5-9f23-15b35887e53b-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-mlrbg\" (UID: \"4141331e-2369-48a5-9f23-15b35887e53b\") " pod="openstack/dnsmasq-dns-5784cf869f-mlrbg" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.301952 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00e5bb32-954a-444f-b6d5-74e4c519d0c1-config-data\") pod \"cinder-scheduler-0\" (UID: \"00e5bb32-954a-444f-b6d5-74e4c519d0c1\") " pod="openstack/cinder-scheduler-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.301997 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4141331e-2369-48a5-9f23-15b35887e53b-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-mlrbg\" (UID: \"4141331e-2369-48a5-9f23-15b35887e53b\") " pod="openstack/dnsmasq-dns-5784cf869f-mlrbg" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.302042 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/4141331e-2369-48a5-9f23-15b35887e53b-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-mlrbg\" (UID: \"4141331e-2369-48a5-9f23-15b35887e53b\") " pod="openstack/dnsmasq-dns-5784cf869f-mlrbg" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.302075 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00e5bb32-954a-444f-b6d5-74e4c519d0c1-scripts\") pod \"cinder-scheduler-0\" (UID: \"00e5bb32-954a-444f-b6d5-74e4c519d0c1\") " pod="openstack/cinder-scheduler-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.302101 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00e5bb32-954a-444f-b6d5-74e4c519d0c1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"00e5bb32-954a-444f-b6d5-74e4c519d0c1\") " pod="openstack/cinder-scheduler-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.302125 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/00e5bb32-954a-444f-b6d5-74e4c519d0c1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"00e5bb32-954a-444f-b6d5-74e4c519d0c1\") " pod="openstack/cinder-scheduler-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.302148 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4141331e-2369-48a5-9f23-15b35887e53b-dns-svc\") pod \"dnsmasq-dns-5784cf869f-mlrbg\" (UID: \"4141331e-2369-48a5-9f23-15b35887e53b\") " pod="openstack/dnsmasq-dns-5784cf869f-mlrbg" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.373527 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6fd59c5bf8-d6vtf"] Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.403649 4672 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00e5bb32-954a-444f-b6d5-74e4c519d0c1-scripts\") pod \"cinder-scheduler-0\" (UID: \"00e5bb32-954a-444f-b6d5-74e4c519d0c1\") " pod="openstack/cinder-scheduler-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.403698 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00e5bb32-954a-444f-b6d5-74e4c519d0c1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"00e5bb32-954a-444f-b6d5-74e4c519d0c1\") " pod="openstack/cinder-scheduler-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.403719 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/00e5bb32-954a-444f-b6d5-74e4c519d0c1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"00e5bb32-954a-444f-b6d5-74e4c519d0c1\") " pod="openstack/cinder-scheduler-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.403739 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4141331e-2369-48a5-9f23-15b35887e53b-dns-svc\") pod \"dnsmasq-dns-5784cf869f-mlrbg\" (UID: \"4141331e-2369-48a5-9f23-15b35887e53b\") " pod="openstack/dnsmasq-dns-5784cf869f-mlrbg" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.403762 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54fe368a-64c0-447e-969b-06cc444a1bd8-scripts\") pod \"cinder-api-0\" (UID: \"54fe368a-64c0-447e-969b-06cc444a1bd8\") " pod="openstack/cinder-api-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.403801 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nq6j\" (UniqueName: 
\"kubernetes.io/projected/4141331e-2369-48a5-9f23-15b35887e53b-kube-api-access-5nq6j\") pod \"dnsmasq-dns-5784cf869f-mlrbg\" (UID: \"4141331e-2369-48a5-9f23-15b35887e53b\") " pod="openstack/dnsmasq-dns-5784cf869f-mlrbg" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.403826 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ttq2\" (UniqueName: \"kubernetes.io/projected/00e5bb32-954a-444f-b6d5-74e4c519d0c1-kube-api-access-9ttq2\") pod \"cinder-scheduler-0\" (UID: \"00e5bb32-954a-444f-b6d5-74e4c519d0c1\") " pod="openstack/cinder-scheduler-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.403841 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4141331e-2369-48a5-9f23-15b35887e53b-config\") pod \"dnsmasq-dns-5784cf869f-mlrbg\" (UID: \"4141331e-2369-48a5-9f23-15b35887e53b\") " pod="openstack/dnsmasq-dns-5784cf869f-mlrbg" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.403868 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/00e5bb32-954a-444f-b6d5-74e4c519d0c1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"00e5bb32-954a-444f-b6d5-74e4c519d0c1\") " pod="openstack/cinder-scheduler-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.403885 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54fe368a-64c0-447e-969b-06cc444a1bd8-logs\") pod \"cinder-api-0\" (UID: \"54fe368a-64c0-447e-969b-06cc444a1bd8\") " pod="openstack/cinder-api-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.403904 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54fe368a-64c0-447e-969b-06cc444a1bd8-config-data\") pod \"cinder-api-0\" (UID: 
\"54fe368a-64c0-447e-969b-06cc444a1bd8\") " pod="openstack/cinder-api-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.403919 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4141331e-2369-48a5-9f23-15b35887e53b-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-mlrbg\" (UID: \"4141331e-2369-48a5-9f23-15b35887e53b\") " pod="openstack/dnsmasq-dns-5784cf869f-mlrbg" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.403958 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00e5bb32-954a-444f-b6d5-74e4c519d0c1-config-data\") pod \"cinder-scheduler-0\" (UID: \"00e5bb32-954a-444f-b6d5-74e4c519d0c1\") " pod="openstack/cinder-scheduler-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.403975 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54fe368a-64c0-447e-969b-06cc444a1bd8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"54fe368a-64c0-447e-969b-06cc444a1bd8\") " pod="openstack/cinder-api-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.404000 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtrf8\" (UniqueName: \"kubernetes.io/projected/54fe368a-64c0-447e-969b-06cc444a1bd8-kube-api-access-gtrf8\") pod \"cinder-api-0\" (UID: \"54fe368a-64c0-447e-969b-06cc444a1bd8\") " pod="openstack/cinder-api-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.404016 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4141331e-2369-48a5-9f23-15b35887e53b-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-mlrbg\" (UID: \"4141331e-2369-48a5-9f23-15b35887e53b\") " pod="openstack/dnsmasq-dns-5784cf869f-mlrbg" Feb 17 
16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.404032 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54fe368a-64c0-447e-969b-06cc444a1bd8-config-data-custom\") pod \"cinder-api-0\" (UID: \"54fe368a-64c0-447e-969b-06cc444a1bd8\") " pod="openstack/cinder-api-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.404050 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/54fe368a-64c0-447e-969b-06cc444a1bd8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"54fe368a-64c0-447e-969b-06cc444a1bd8\") " pod="openstack/cinder-api-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.404080 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4141331e-2369-48a5-9f23-15b35887e53b-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-mlrbg\" (UID: \"4141331e-2369-48a5-9f23-15b35887e53b\") " pod="openstack/dnsmasq-dns-5784cf869f-mlrbg" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.404809 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4141331e-2369-48a5-9f23-15b35887e53b-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-mlrbg\" (UID: \"4141331e-2369-48a5-9f23-15b35887e53b\") " pod="openstack/dnsmasq-dns-5784cf869f-mlrbg" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.407406 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/00e5bb32-954a-444f-b6d5-74e4c519d0c1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"00e5bb32-954a-444f-b6d5-74e4c519d0c1\") " pod="openstack/cinder-scheduler-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.408225 4672 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4141331e-2369-48a5-9f23-15b35887e53b-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-mlrbg\" (UID: \"4141331e-2369-48a5-9f23-15b35887e53b\") " pod="openstack/dnsmasq-dns-5784cf869f-mlrbg" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.409355 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4141331e-2369-48a5-9f23-15b35887e53b-config\") pod \"dnsmasq-dns-5784cf869f-mlrbg\" (UID: \"4141331e-2369-48a5-9f23-15b35887e53b\") " pod="openstack/dnsmasq-dns-5784cf869f-mlrbg" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.409594 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4141331e-2369-48a5-9f23-15b35887e53b-dns-svc\") pod \"dnsmasq-dns-5784cf869f-mlrbg\" (UID: \"4141331e-2369-48a5-9f23-15b35887e53b\") " pod="openstack/dnsmasq-dns-5784cf869f-mlrbg" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.409616 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00e5bb32-954a-444f-b6d5-74e4c519d0c1-scripts\") pod \"cinder-scheduler-0\" (UID: \"00e5bb32-954a-444f-b6d5-74e4c519d0c1\") " pod="openstack/cinder-scheduler-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.411059 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/00e5bb32-954a-444f-b6d5-74e4c519d0c1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"00e5bb32-954a-444f-b6d5-74e4c519d0c1\") " pod="openstack/cinder-scheduler-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.420939 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4141331e-2369-48a5-9f23-15b35887e53b-ovsdbserver-sb\") pod 
\"dnsmasq-dns-5784cf869f-mlrbg\" (UID: \"4141331e-2369-48a5-9f23-15b35887e53b\") " pod="openstack/dnsmasq-dns-5784cf869f-mlrbg" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.421978 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00e5bb32-954a-444f-b6d5-74e4c519d0c1-config-data\") pod \"cinder-scheduler-0\" (UID: \"00e5bb32-954a-444f-b6d5-74e4c519d0c1\") " pod="openstack/cinder-scheduler-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.422491 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00e5bb32-954a-444f-b6d5-74e4c519d0c1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"00e5bb32-954a-444f-b6d5-74e4c519d0c1\") " pod="openstack/cinder-scheduler-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.425126 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nq6j\" (UniqueName: \"kubernetes.io/projected/4141331e-2369-48a5-9f23-15b35887e53b-kube-api-access-5nq6j\") pod \"dnsmasq-dns-5784cf869f-mlrbg\" (UID: \"4141331e-2369-48a5-9f23-15b35887e53b\") " pod="openstack/dnsmasq-dns-5784cf869f-mlrbg" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.437894 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ttq2\" (UniqueName: \"kubernetes.io/projected/00e5bb32-954a-444f-b6d5-74e4c519d0c1-kube-api-access-9ttq2\") pod \"cinder-scheduler-0\" (UID: \"00e5bb32-954a-444f-b6d5-74e4c519d0c1\") " pod="openstack/cinder-scheduler-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.495052 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.505440 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/54fe368a-64c0-447e-969b-06cc444a1bd8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"54fe368a-64c0-447e-969b-06cc444a1bd8\") " pod="openstack/cinder-api-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.505574 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54fe368a-64c0-447e-969b-06cc444a1bd8-scripts\") pod \"cinder-api-0\" (UID: \"54fe368a-64c0-447e-969b-06cc444a1bd8\") " pod="openstack/cinder-api-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.505612 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/54fe368a-64c0-447e-969b-06cc444a1bd8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"54fe368a-64c0-447e-969b-06cc444a1bd8\") " pod="openstack/cinder-api-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.505675 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54fe368a-64c0-447e-969b-06cc444a1bd8-logs\") pod \"cinder-api-0\" (UID: \"54fe368a-64c0-447e-969b-06cc444a1bd8\") " pod="openstack/cinder-api-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.505696 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54fe368a-64c0-447e-969b-06cc444a1bd8-config-data\") pod \"cinder-api-0\" (UID: \"54fe368a-64c0-447e-969b-06cc444a1bd8\") " pod="openstack/cinder-api-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.505741 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/54fe368a-64c0-447e-969b-06cc444a1bd8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"54fe368a-64c0-447e-969b-06cc444a1bd8\") " pod="openstack/cinder-api-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.505771 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtrf8\" (UniqueName: \"kubernetes.io/projected/54fe368a-64c0-447e-969b-06cc444a1bd8-kube-api-access-gtrf8\") pod \"cinder-api-0\" (UID: \"54fe368a-64c0-447e-969b-06cc444a1bd8\") " pod="openstack/cinder-api-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.505796 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54fe368a-64c0-447e-969b-06cc444a1bd8-config-data-custom\") pod \"cinder-api-0\" (UID: \"54fe368a-64c0-447e-969b-06cc444a1bd8\") " pod="openstack/cinder-api-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.507763 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54fe368a-64c0-447e-969b-06cc444a1bd8-logs\") pod \"cinder-api-0\" (UID: \"54fe368a-64c0-447e-969b-06cc444a1bd8\") " pod="openstack/cinder-api-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.509370 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54fe368a-64c0-447e-969b-06cc444a1bd8-config-data-custom\") pod \"cinder-api-0\" (UID: \"54fe368a-64c0-447e-969b-06cc444a1bd8\") " pod="openstack/cinder-api-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.510159 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54fe368a-64c0-447e-969b-06cc444a1bd8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"54fe368a-64c0-447e-969b-06cc444a1bd8\") " pod="openstack/cinder-api-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 
16:24:00.517933 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54fe368a-64c0-447e-969b-06cc444a1bd8-scripts\") pod \"cinder-api-0\" (UID: \"54fe368a-64c0-447e-969b-06cc444a1bd8\") " pod="openstack/cinder-api-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.523096 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54fe368a-64c0-447e-969b-06cc444a1bd8-config-data\") pod \"cinder-api-0\" (UID: \"54fe368a-64c0-447e-969b-06cc444a1bd8\") " pod="openstack/cinder-api-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.523837 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtrf8\" (UniqueName: \"kubernetes.io/projected/54fe368a-64c0-447e-969b-06cc444a1bd8-kube-api-access-gtrf8\") pod \"cinder-api-0\" (UID: \"54fe368a-64c0-447e-969b-06cc444a1bd8\") " pod="openstack/cinder-api-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.532345 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-mlrbg" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.582600 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.624122 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-pg4m8"] Feb 17 16:24:00 crc kubenswrapper[4672]: W0217 16:24:00.649406 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ef4f1cd_9771_44fb_aa56_42fcb86e51fb.slice/crio-1ffc7693d18611ab40543416e64d3b8938ebefe74606d5eb2e1bfd6c59f14f4d WatchSource:0}: Error finding container 1ffc7693d18611ab40543416e64d3b8938ebefe74606d5eb2e1bfd6c59f14f4d: Status 404 returned error can't find the container with id 1ffc7693d18611ab40543416e64d3b8938ebefe74606d5eb2e1bfd6c59f14f4d Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.802282 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5498949f87-rqtb7" event={"ID":"8359abf8-58ae-423d-9de3-b4488cffe247","Type":"ContainerStarted","Data":"ac9bfe8ffd41e1181b843d242008ee5e46c8580eb7a8a31d3fb5b8932b9c5146"} Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.804543 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6fd59c5bf8-d6vtf" event={"ID":"87bfbbe6-1b72-4f9b-bb5e-ef6560acab76","Type":"ContainerStarted","Data":"85f8023b11ad72082696a55d706231bbf1a9c87c41050c966d87a3c4fc183133"} Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.804565 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6fd59c5bf8-d6vtf" event={"ID":"87bfbbe6-1b72-4f9b-bb5e-ef6560acab76","Type":"ContainerStarted","Data":"c80ab85df46ec477c1793e6b52e4d0e63d7919c1ee14a5fa4cf19a146f84222d"} Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.812407 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-txp6k" 
event={"ID":"31275847-8cb0-4fe6-9a21-68c3f99727ed","Type":"ContainerDied","Data":"2e3bc9de5a4e43a9146b67dbf198c3d20804a79e7d1e4ac14ce612e57092c4df"} Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.812444 4672 scope.go:117] "RemoveContainer" containerID="fdc9d579a10bd973eca9655d89287299c27ef0a941e3b099bd6219004d0850ab" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.812591 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-txp6k" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.819717 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-647bb874b-c9skw" event={"ID":"9e060962-a1f6-47fd-af63-b9b7f8bfd863","Type":"ContainerStarted","Data":"0695995cc106e5d2a4b7ca6ce5c7499bd3459ad3b5156b94817bc475a4585f61"} Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.828422 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-pg4m8" event={"ID":"9ef4f1cd-9771-44fb-aa56-42fcb86e51fb","Type":"ContainerStarted","Data":"1ffc7693d18611ab40543416e64d3b8938ebefe74606d5eb2e1bfd6c59f14f4d"} Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.828996 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.829176 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.879482 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-txp6k"] Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.886156 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.898072 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-84b966f6c9-txp6k"] Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.899292 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.921311 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6fd764cdf6-q8qss"] Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.923095 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6fd764cdf6-q8qss" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.929192 4672 scope.go:117] "RemoveContainer" containerID="b2b6ac28007988e899e118d5a61c650e0e43fa9761908deda1189f6f603b53db" Feb 17 16:24:00 crc kubenswrapper[4672]: I0217 16:24:00.932460 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6fd764cdf6-q8qss"] Feb 17 16:24:01 crc kubenswrapper[4672]: I0217 16:24:01.028101 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d47f2556-58c0-4d11-b435-a08e06a11c76-logs\") pod \"placement-6fd764cdf6-q8qss\" (UID: \"d47f2556-58c0-4d11-b435-a08e06a11c76\") " pod="openstack/placement-6fd764cdf6-q8qss" Feb 17 16:24:01 crc kubenswrapper[4672]: I0217 16:24:01.028380 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d47f2556-58c0-4d11-b435-a08e06a11c76-internal-tls-certs\") pod \"placement-6fd764cdf6-q8qss\" (UID: \"d47f2556-58c0-4d11-b435-a08e06a11c76\") " pod="openstack/placement-6fd764cdf6-q8qss" Feb 17 16:24:01 crc kubenswrapper[4672]: I0217 16:24:01.028565 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d47f2556-58c0-4d11-b435-a08e06a11c76-config-data\") pod 
\"placement-6fd764cdf6-q8qss\" (UID: \"d47f2556-58c0-4d11-b435-a08e06a11c76\") " pod="openstack/placement-6fd764cdf6-q8qss" Feb 17 16:24:01 crc kubenswrapper[4672]: I0217 16:24:01.028716 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d47f2556-58c0-4d11-b435-a08e06a11c76-combined-ca-bundle\") pod \"placement-6fd764cdf6-q8qss\" (UID: \"d47f2556-58c0-4d11-b435-a08e06a11c76\") " pod="openstack/placement-6fd764cdf6-q8qss" Feb 17 16:24:01 crc kubenswrapper[4672]: I0217 16:24:01.029838 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d47f2556-58c0-4d11-b435-a08e06a11c76-public-tls-certs\") pod \"placement-6fd764cdf6-q8qss\" (UID: \"d47f2556-58c0-4d11-b435-a08e06a11c76\") " pod="openstack/placement-6fd764cdf6-q8qss" Feb 17 16:24:01 crc kubenswrapper[4672]: I0217 16:24:01.031717 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvfm4\" (UniqueName: \"kubernetes.io/projected/d47f2556-58c0-4d11-b435-a08e06a11c76-kube-api-access-kvfm4\") pod \"placement-6fd764cdf6-q8qss\" (UID: \"d47f2556-58c0-4d11-b435-a08e06a11c76\") " pod="openstack/placement-6fd764cdf6-q8qss" Feb 17 16:24:01 crc kubenswrapper[4672]: I0217 16:24:01.031805 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d47f2556-58c0-4d11-b435-a08e06a11c76-scripts\") pod \"placement-6fd764cdf6-q8qss\" (UID: \"d47f2556-58c0-4d11-b435-a08e06a11c76\") " pod="openstack/placement-6fd764cdf6-q8qss" Feb 17 16:24:01 crc kubenswrapper[4672]: I0217 16:24:01.118334 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 16:24:01 crc kubenswrapper[4672]: I0217 16:24:01.135030 4672 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d47f2556-58c0-4d11-b435-a08e06a11c76-combined-ca-bundle\") pod \"placement-6fd764cdf6-q8qss\" (UID: \"d47f2556-58c0-4d11-b435-a08e06a11c76\") " pod="openstack/placement-6fd764cdf6-q8qss" Feb 17 16:24:01 crc kubenswrapper[4672]: I0217 16:24:01.135135 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d47f2556-58c0-4d11-b435-a08e06a11c76-public-tls-certs\") pod \"placement-6fd764cdf6-q8qss\" (UID: \"d47f2556-58c0-4d11-b435-a08e06a11c76\") " pod="openstack/placement-6fd764cdf6-q8qss" Feb 17 16:24:01 crc kubenswrapper[4672]: I0217 16:24:01.135167 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvfm4\" (UniqueName: \"kubernetes.io/projected/d47f2556-58c0-4d11-b435-a08e06a11c76-kube-api-access-kvfm4\") pod \"placement-6fd764cdf6-q8qss\" (UID: \"d47f2556-58c0-4d11-b435-a08e06a11c76\") " pod="openstack/placement-6fd764cdf6-q8qss" Feb 17 16:24:01 crc kubenswrapper[4672]: I0217 16:24:01.135187 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d47f2556-58c0-4d11-b435-a08e06a11c76-scripts\") pod \"placement-6fd764cdf6-q8qss\" (UID: \"d47f2556-58c0-4d11-b435-a08e06a11c76\") " pod="openstack/placement-6fd764cdf6-q8qss" Feb 17 16:24:01 crc kubenswrapper[4672]: I0217 16:24:01.135217 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d47f2556-58c0-4d11-b435-a08e06a11c76-logs\") pod \"placement-6fd764cdf6-q8qss\" (UID: \"d47f2556-58c0-4d11-b435-a08e06a11c76\") " pod="openstack/placement-6fd764cdf6-q8qss" Feb 17 16:24:01 crc kubenswrapper[4672]: I0217 16:24:01.135245 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d47f2556-58c0-4d11-b435-a08e06a11c76-internal-tls-certs\") pod \"placement-6fd764cdf6-q8qss\" (UID: \"d47f2556-58c0-4d11-b435-a08e06a11c76\") " pod="openstack/placement-6fd764cdf6-q8qss" Feb 17 16:24:01 crc kubenswrapper[4672]: I0217 16:24:01.135266 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d47f2556-58c0-4d11-b435-a08e06a11c76-config-data\") pod \"placement-6fd764cdf6-q8qss\" (UID: \"d47f2556-58c0-4d11-b435-a08e06a11c76\") " pod="openstack/placement-6fd764cdf6-q8qss" Feb 17 16:24:01 crc kubenswrapper[4672]: I0217 16:24:01.138453 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d47f2556-58c0-4d11-b435-a08e06a11c76-logs\") pod \"placement-6fd764cdf6-q8qss\" (UID: \"d47f2556-58c0-4d11-b435-a08e06a11c76\") " pod="openstack/placement-6fd764cdf6-q8qss" Feb 17 16:24:01 crc kubenswrapper[4672]: I0217 16:24:01.141886 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d47f2556-58c0-4d11-b435-a08e06a11c76-internal-tls-certs\") pod \"placement-6fd764cdf6-q8qss\" (UID: \"d47f2556-58c0-4d11-b435-a08e06a11c76\") " pod="openstack/placement-6fd764cdf6-q8qss" Feb 17 16:24:01 crc kubenswrapper[4672]: I0217 16:24:01.142891 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d47f2556-58c0-4d11-b435-a08e06a11c76-combined-ca-bundle\") pod \"placement-6fd764cdf6-q8qss\" (UID: \"d47f2556-58c0-4d11-b435-a08e06a11c76\") " pod="openstack/placement-6fd764cdf6-q8qss" Feb 17 16:24:01 crc kubenswrapper[4672]: I0217 16:24:01.142904 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d47f2556-58c0-4d11-b435-a08e06a11c76-scripts\") pod \"placement-6fd764cdf6-q8qss\" (UID: 
\"d47f2556-58c0-4d11-b435-a08e06a11c76\") " pod="openstack/placement-6fd764cdf6-q8qss" Feb 17 16:24:01 crc kubenswrapper[4672]: I0217 16:24:01.146467 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d47f2556-58c0-4d11-b435-a08e06a11c76-config-data\") pod \"placement-6fd764cdf6-q8qss\" (UID: \"d47f2556-58c0-4d11-b435-a08e06a11c76\") " pod="openstack/placement-6fd764cdf6-q8qss" Feb 17 16:24:01 crc kubenswrapper[4672]: I0217 16:24:01.147080 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d47f2556-58c0-4d11-b435-a08e06a11c76-public-tls-certs\") pod \"placement-6fd764cdf6-q8qss\" (UID: \"d47f2556-58c0-4d11-b435-a08e06a11c76\") " pod="openstack/placement-6fd764cdf6-q8qss" Feb 17 16:24:01 crc kubenswrapper[4672]: I0217 16:24:01.179131 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvfm4\" (UniqueName: \"kubernetes.io/projected/d47f2556-58c0-4d11-b435-a08e06a11c76-kube-api-access-kvfm4\") pod \"placement-6fd764cdf6-q8qss\" (UID: \"d47f2556-58c0-4d11-b435-a08e06a11c76\") " pod="openstack/placement-6fd764cdf6-q8qss" Feb 17 16:24:01 crc kubenswrapper[4672]: I0217 16:24:01.250899 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6fd764cdf6-q8qss" Feb 17 16:24:01 crc kubenswrapper[4672]: I0217 16:24:01.257834 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-mlrbg"] Feb 17 16:24:01 crc kubenswrapper[4672]: I0217 16:24:01.269350 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 17 16:24:01 crc kubenswrapper[4672]: I0217 16:24:01.834037 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"00e5bb32-954a-444f-b6d5-74e4c519d0c1","Type":"ContainerStarted","Data":"8f8efdf21ba7a73b13577ce02a6e2af615bace8e64d1dde9800ae96c271d995f"} Feb 17 16:24:01 crc kubenswrapper[4672]: I0217 16:24:01.835616 4672 generic.go:334] "Generic (PLEG): container finished" podID="4141331e-2369-48a5-9f23-15b35887e53b" containerID="84da35121780a7f67dc2d7e9383f9689bebfd9af7cb3001dba721f23323ad680" exitCode=0 Feb 17 16:24:01 crc kubenswrapper[4672]: I0217 16:24:01.835657 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-mlrbg" event={"ID":"4141331e-2369-48a5-9f23-15b35887e53b","Type":"ContainerDied","Data":"84da35121780a7f67dc2d7e9383f9689bebfd9af7cb3001dba721f23323ad680"} Feb 17 16:24:01 crc kubenswrapper[4672]: I0217 16:24:01.835675 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-mlrbg" event={"ID":"4141331e-2369-48a5-9f23-15b35887e53b","Type":"ContainerStarted","Data":"b052e7a7a0321bb968c5b819aa99326f68c1c4421ba3015ee5028d49b5ccae44"} Feb 17 16:24:01 crc kubenswrapper[4672]: I0217 16:24:01.839628 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"54fe368a-64c0-447e-969b-06cc444a1bd8","Type":"ContainerStarted","Data":"a0a2bcab6637a0686543ca1aad0384f7079e8a3f1e1edd4b776609093c98b26d"} Feb 17 16:24:01 crc kubenswrapper[4672]: I0217 16:24:01.845314 4672 generic.go:334] "Generic (PLEG): container finished" 
podID="9ef4f1cd-9771-44fb-aa56-42fcb86e51fb" containerID="0dbc91b75e005b3566b447b932579105104b278d3893d8550d0c6f64b2756ef9" exitCode=0 Feb 17 16:24:01 crc kubenswrapper[4672]: I0217 16:24:01.845395 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-pg4m8" event={"ID":"9ef4f1cd-9771-44fb-aa56-42fcb86e51fb","Type":"ContainerDied","Data":"0dbc91b75e005b3566b447b932579105104b278d3893d8550d0c6f64b2756ef9"} Feb 17 16:24:01 crc kubenswrapper[4672]: I0217 16:24:01.849778 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6fd59c5bf8-d6vtf" event={"ID":"87bfbbe6-1b72-4f9b-bb5e-ef6560acab76","Type":"ContainerStarted","Data":"6bc9df9fc3da8cf3b3195fbd8b1f5ea498fd9699337929a0fcc8e0ac1158d24c"} Feb 17 16:24:01 crc kubenswrapper[4672]: I0217 16:24:01.849848 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6fd59c5bf8-d6vtf" Feb 17 16:24:01 crc kubenswrapper[4672]: I0217 16:24:01.849936 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6fd59c5bf8-d6vtf" Feb 17 16:24:01 crc kubenswrapper[4672]: I0217 16:24:01.866004 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 17 16:24:01 crc kubenswrapper[4672]: I0217 16:24:01.866275 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 17 16:24:01 crc kubenswrapper[4672]: I0217 16:24:01.893375 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6fd59c5bf8-d6vtf" podStartSLOduration=2.893359297 podStartE2EDuration="2.893359297s" podCreationTimestamp="2026-02-17 16:23:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:24:01.892455503 +0000 UTC m=+1250.646544235" watchObservedRunningTime="2026-02-17 
16:24:01.893359297 +0000 UTC m=+1250.647448029" Feb 17 16:24:01 crc kubenswrapper[4672]: I0217 16:24:01.989368 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31275847-8cb0-4fe6-9a21-68c3f99727ed" path="/var/lib/kubelet/pods/31275847-8cb0-4fe6-9a21-68c3f99727ed/volumes" Feb 17 16:24:02 crc kubenswrapper[4672]: I0217 16:24:02.048923 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6fd764cdf6-q8qss"] Feb 17 16:24:02 crc kubenswrapper[4672]: I0217 16:24:02.260020 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 17 16:24:02 crc kubenswrapper[4672]: I0217 16:24:02.261677 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 17 16:24:02 crc kubenswrapper[4672]: I0217 16:24:02.322798 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 17 16:24:02 crc kubenswrapper[4672]: I0217 16:24:02.323196 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 17 16:24:02 crc kubenswrapper[4672]: I0217 16:24:02.709355 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 17 16:24:02 crc kubenswrapper[4672]: I0217 16:24:02.877882 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"54fe368a-64c0-447e-969b-06cc444a1bd8","Type":"ContainerStarted","Data":"b8e8eb412fa2c12dcdafdf2863a3b3efcae5847a6c678cf55b8b039bae580a01"} Feb 17 16:24:02 crc kubenswrapper[4672]: I0217 16:24:02.878395 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 17 16:24:02 crc kubenswrapper[4672]: I0217 16:24:02.878412 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" 
Feb 17 16:24:02 crc kubenswrapper[4672]: W0217 16:24:02.944472 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd47f2556_58c0_4d11_b435_a08e06a11c76.slice/crio-7ce6004370095a5a53f9f966152d401b5c8fd576049b0bcad7e3672d9da37b80 WatchSource:0}: Error finding container 7ce6004370095a5a53f9f966152d401b5c8fd576049b0bcad7e3672d9da37b80: Status 404 returned error can't find the container with id 7ce6004370095a5a53f9f966152d401b5c8fd576049b0bcad7e3672d9da37b80 Feb 17 16:24:03 crc kubenswrapper[4672]: I0217 16:24:03.367434 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-pg4m8" Feb 17 16:24:03 crc kubenswrapper[4672]: I0217 16:24:03.417194 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95nxm\" (UniqueName: \"kubernetes.io/projected/9ef4f1cd-9771-44fb-aa56-42fcb86e51fb-kube-api-access-95nxm\") pod \"9ef4f1cd-9771-44fb-aa56-42fcb86e51fb\" (UID: \"9ef4f1cd-9771-44fb-aa56-42fcb86e51fb\") " Feb 17 16:24:03 crc kubenswrapper[4672]: I0217 16:24:03.417254 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ef4f1cd-9771-44fb-aa56-42fcb86e51fb-ovsdbserver-nb\") pod \"9ef4f1cd-9771-44fb-aa56-42fcb86e51fb\" (UID: \"9ef4f1cd-9771-44fb-aa56-42fcb86e51fb\") " Feb 17 16:24:03 crc kubenswrapper[4672]: I0217 16:24:03.417283 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ef4f1cd-9771-44fb-aa56-42fcb86e51fb-config\") pod \"9ef4f1cd-9771-44fb-aa56-42fcb86e51fb\" (UID: \"9ef4f1cd-9771-44fb-aa56-42fcb86e51fb\") " Feb 17 16:24:03 crc kubenswrapper[4672]: I0217 16:24:03.417301 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/9ef4f1cd-9771-44fb-aa56-42fcb86e51fb-ovsdbserver-sb\") pod \"9ef4f1cd-9771-44fb-aa56-42fcb86e51fb\" (UID: \"9ef4f1cd-9771-44fb-aa56-42fcb86e51fb\") " Feb 17 16:24:03 crc kubenswrapper[4672]: I0217 16:24:03.417387 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ef4f1cd-9771-44fb-aa56-42fcb86e51fb-dns-svc\") pod \"9ef4f1cd-9771-44fb-aa56-42fcb86e51fb\" (UID: \"9ef4f1cd-9771-44fb-aa56-42fcb86e51fb\") " Feb 17 16:24:03 crc kubenswrapper[4672]: I0217 16:24:03.417407 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ef4f1cd-9771-44fb-aa56-42fcb86e51fb-dns-swift-storage-0\") pod \"9ef4f1cd-9771-44fb-aa56-42fcb86e51fb\" (UID: \"9ef4f1cd-9771-44fb-aa56-42fcb86e51fb\") " Feb 17 16:24:03 crc kubenswrapper[4672]: I0217 16:24:03.434605 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ef4f1cd-9771-44fb-aa56-42fcb86e51fb-kube-api-access-95nxm" (OuterVolumeSpecName: "kube-api-access-95nxm") pod "9ef4f1cd-9771-44fb-aa56-42fcb86e51fb" (UID: "9ef4f1cd-9771-44fb-aa56-42fcb86e51fb"). InnerVolumeSpecName "kube-api-access-95nxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:24:03 crc kubenswrapper[4672]: I0217 16:24:03.466044 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ef4f1cd-9771-44fb-aa56-42fcb86e51fb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9ef4f1cd-9771-44fb-aa56-42fcb86e51fb" (UID: "9ef4f1cd-9771-44fb-aa56-42fcb86e51fb"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:24:03 crc kubenswrapper[4672]: I0217 16:24:03.468470 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ef4f1cd-9771-44fb-aa56-42fcb86e51fb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9ef4f1cd-9771-44fb-aa56-42fcb86e51fb" (UID: "9ef4f1cd-9771-44fb-aa56-42fcb86e51fb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:24:03 crc kubenswrapper[4672]: I0217 16:24:03.496489 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ef4f1cd-9771-44fb-aa56-42fcb86e51fb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9ef4f1cd-9771-44fb-aa56-42fcb86e51fb" (UID: "9ef4f1cd-9771-44fb-aa56-42fcb86e51fb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:24:03 crc kubenswrapper[4672]: I0217 16:24:03.519923 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95nxm\" (UniqueName: \"kubernetes.io/projected/9ef4f1cd-9771-44fb-aa56-42fcb86e51fb-kube-api-access-95nxm\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:03 crc kubenswrapper[4672]: I0217 16:24:03.519950 4672 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ef4f1cd-9771-44fb-aa56-42fcb86e51fb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:03 crc kubenswrapper[4672]: I0217 16:24:03.519959 4672 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ef4f1cd-9771-44fb-aa56-42fcb86e51fb-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:03 crc kubenswrapper[4672]: I0217 16:24:03.519968 4672 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ef4f1cd-9771-44fb-aa56-42fcb86e51fb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:03 crc kubenswrapper[4672]: I0217 16:24:03.547182 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ef4f1cd-9771-44fb-aa56-42fcb86e51fb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9ef4f1cd-9771-44fb-aa56-42fcb86e51fb" (UID: "9ef4f1cd-9771-44fb-aa56-42fcb86e51fb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:24:03 crc kubenswrapper[4672]: I0217 16:24:03.549456 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ef4f1cd-9771-44fb-aa56-42fcb86e51fb-config" (OuterVolumeSpecName: "config") pod "9ef4f1cd-9771-44fb-aa56-42fcb86e51fb" (UID: "9ef4f1cd-9771-44fb-aa56-42fcb86e51fb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:24:03 crc kubenswrapper[4672]: I0217 16:24:03.621306 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ef4f1cd-9771-44fb-aa56-42fcb86e51fb-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:03 crc kubenswrapper[4672]: I0217 16:24:03.621577 4672 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ef4f1cd-9771-44fb-aa56-42fcb86e51fb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:03 crc kubenswrapper[4672]: I0217 16:24:03.895627 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-mlrbg" event={"ID":"4141331e-2369-48a5-9f23-15b35887e53b","Type":"ContainerStarted","Data":"5de0f4fbb7d27105885f6d589fd071b134695174d91ec2848d522fa7b7395b1c"}
Feb 17 16:24:03 crc kubenswrapper[4672]: I0217 16:24:03.897385 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5784cf869f-mlrbg"
Feb 17 16:24:03 crc kubenswrapper[4672]: I0217 16:24:03.910054 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-647bb874b-c9skw" event={"ID":"9e060962-a1f6-47fd-af63-b9b7f8bfd863","Type":"ContainerStarted","Data":"482f52e89b4cc4506bdb4b38720014481982c87bf875f5c95294477b446de7cc"}
Feb 17 16:24:03 crc kubenswrapper[4672]: I0217 16:24:03.910102 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-647bb874b-c9skw" event={"ID":"9e060962-a1f6-47fd-af63-b9b7f8bfd863","Type":"ContainerStarted","Data":"1ac1bdb64ec118553bad7dc6d75d5e038921e17979cc47a616566f4b3d61e232"}
Feb 17 16:24:03 crc kubenswrapper[4672]: I0217 16:24:03.921378 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"54fe368a-64c0-447e-969b-06cc444a1bd8","Type":"ContainerStarted","Data":"b4602447459674aa4f8ee42f5265444575c54ef38969fde41a64ebae307376a5"}
Feb 17 16:24:03 crc kubenswrapper[4672]: I0217 16:24:03.921491 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="54fe368a-64c0-447e-969b-06cc444a1bd8" containerName="cinder-api-log" containerID="cri-o://b8e8eb412fa2c12dcdafdf2863a3b3efcae5847a6c678cf55b8b039bae580a01" gracePeriod=30
Feb 17 16:24:03 crc kubenswrapper[4672]: I0217 16:24:03.921555 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Feb 17 16:24:03 crc kubenswrapper[4672]: I0217 16:24:03.921595 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="54fe368a-64c0-447e-969b-06cc444a1bd8" containerName="cinder-api" containerID="cri-o://b4602447459674aa4f8ee42f5265444575c54ef38969fde41a64ebae307376a5" gracePeriod=30
Feb 17 16:24:03 crc kubenswrapper[4672]: I0217 16:24:03.941885 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-pg4m8"
Feb 17 16:24:03 crc kubenswrapper[4672]: I0217 16:24:03.942630 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-pg4m8" event={"ID":"9ef4f1cd-9771-44fb-aa56-42fcb86e51fb","Type":"ContainerDied","Data":"1ffc7693d18611ab40543416e64d3b8938ebefe74606d5eb2e1bfd6c59f14f4d"}
Feb 17 16:24:03 crc kubenswrapper[4672]: I0217 16:24:03.943143 4672 scope.go:117] "RemoveContainer" containerID="0dbc91b75e005b3566b447b932579105104b278d3893d8550d0c6f64b2756ef9"
Feb 17 16:24:03 crc kubenswrapper[4672]: I0217 16:24:03.945310 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5784cf869f-mlrbg" podStartSLOduration=3.945297991 podStartE2EDuration="3.945297991s" podCreationTimestamp="2026-02-17 16:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:24:03.917672293 +0000 UTC m=+1252.671761025" watchObservedRunningTime="2026-02-17 16:24:03.945297991 +0000 UTC m=+1252.699386723"
Feb 17 16:24:03 crc kubenswrapper[4672]: I0217 16:24:03.969208 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-647bb874b-c9skw" podStartSLOduration=2.999996544 podStartE2EDuration="5.969185751s" podCreationTimestamp="2026-02-17 16:23:58 +0000 UTC" firstStartedPulling="2026-02-17 16:24:00.141275906 +0000 UTC m=+1248.895364638" lastFinishedPulling="2026-02-17 16:24:03.110465123 +0000 UTC m=+1251.864553845" observedRunningTime="2026-02-17 16:24:03.937405643 +0000 UTC m=+1252.691494375" watchObservedRunningTime="2026-02-17 16:24:03.969185751 +0000 UTC m=+1252.723274483"
Feb 17 16:24:03 crc kubenswrapper[4672]: I0217 16:24:03.977170 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.9771416410000002 podStartE2EDuration="3.977141641s" podCreationTimestamp="2026-02-17 16:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:24:03.973090074 +0000 UTC m=+1252.727178816" watchObservedRunningTime="2026-02-17 16:24:03.977141641 +0000 UTC m=+1252.731230373"
Feb 17 16:24:03 crc kubenswrapper[4672]: I0217 16:24:03.991030 4672 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 17 16:24:03 crc kubenswrapper[4672]: I0217 16:24:03.991062 4672 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 17 16:24:04 crc kubenswrapper[4672]: I0217 16:24:04.004191 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6fd764cdf6-q8qss"
Feb 17 16:24:04 crc kubenswrapper[4672]: I0217 16:24:04.004241 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6fd764cdf6-q8qss"
Feb 17 16:24:04 crc kubenswrapper[4672]: I0217 16:24:04.004268 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5498949f87-rqtb7" event={"ID":"8359abf8-58ae-423d-9de3-b4488cffe247","Type":"ContainerStarted","Data":"25737e0bba455e95543f4018139251ce84fe4deb2dc0373f2c02a91324e30be3"}
Feb 17 16:24:04 crc kubenswrapper[4672]: I0217 16:24:04.004283 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5498949f87-rqtb7" event={"ID":"8359abf8-58ae-423d-9de3-b4488cffe247","Type":"ContainerStarted","Data":"df6736dcab57c1f3d9b994c7b7277fc54a44234a397e70a8f2ce1f75a791b1bd"}
Feb 17 16:24:04 crc kubenswrapper[4672]: I0217 16:24:04.004294 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"00e5bb32-954a-444f-b6d5-74e4c519d0c1","Type":"ContainerStarted","Data":"4abc94d493e2c27214b2484b2cb764aa476dddfbb0b6c883f8b6f849067b0cf0"}
Feb 17 16:24:04 crc kubenswrapper[4672]: I0217 16:24:04.004304 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6fd764cdf6-q8qss" event={"ID":"d47f2556-58c0-4d11-b435-a08e06a11c76","Type":"ContainerStarted","Data":"f869bf11b4c5df36323a669c27a9f2a65961db1b3041c878c5a1708f764f7294"}
Feb 17 16:24:04 crc kubenswrapper[4672]: I0217 16:24:04.004314 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6fd764cdf6-q8qss" event={"ID":"d47f2556-58c0-4d11-b435-a08e06a11c76","Type":"ContainerStarted","Data":"7ce6004370095a5a53f9f966152d401b5c8fd576049b0bcad7e3672d9da37b80"}
Feb 17 16:24:04 crc kubenswrapper[4672]: I0217 16:24:04.016503 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5498949f87-rqtb7" podStartSLOduration=3.058916458 podStartE2EDuration="6.016484598s" podCreationTimestamp="2026-02-17 16:23:58 +0000 UTC" firstStartedPulling="2026-02-17 16:24:00.134373844 +0000 UTC m=+1248.888462576" lastFinishedPulling="2026-02-17 16:24:03.091941984 +0000 UTC m=+1251.846030716" observedRunningTime="2026-02-17 16:24:04.015341298 +0000 UTC m=+1252.769430040" watchObservedRunningTime="2026-02-17 16:24:04.016484598 +0000 UTC m=+1252.770573330"
Feb 17 16:24:04 crc kubenswrapper[4672]: I0217 16:24:04.084850 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6fd764cdf6-q8qss" podStartSLOduration=4.08482473 podStartE2EDuration="4.08482473s" podCreationTimestamp="2026-02-17 16:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:24:04.053087463 +0000 UTC m=+1252.807176195" watchObservedRunningTime="2026-02-17 16:24:04.08482473 +0000 UTC m=+1252.838913482"
Feb 17 16:24:04 crc kubenswrapper[4672]: I0217 16:24:04.104709 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 17 16:24:04 crc kubenswrapper[4672]: I0217 16:24:04.113203 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 17 16:24:04 crc kubenswrapper[4672]: I0217 16:24:04.116215 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-pg4m8"]
Feb 17 16:24:04 crc kubenswrapper[4672]: I0217 16:24:04.124944 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-pg4m8"]
Feb 17 16:24:05 crc kubenswrapper[4672]: I0217 16:24:05.002498 4672 generic.go:334] "Generic (PLEG): container finished" podID="54fe368a-64c0-447e-969b-06cc444a1bd8" containerID="b8e8eb412fa2c12dcdafdf2863a3b3efcae5847a6c678cf55b8b039bae580a01" exitCode=143
Feb 17 16:24:05 crc kubenswrapper[4672]: I0217 16:24:05.002598 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"54fe368a-64c0-447e-969b-06cc444a1bd8","Type":"ContainerDied","Data":"b8e8eb412fa2c12dcdafdf2863a3b3efcae5847a6c678cf55b8b039bae580a01"}
Feb 17 16:24:05 crc kubenswrapper[4672]: I0217 16:24:05.004712 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-scpk5" event={"ID":"fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2","Type":"ContainerStarted","Data":"b4eeadfb9ece5de10f49b2da19997621f6375e0b1e4f58923e6410083e99843a"}
Feb 17 16:24:05 crc kubenswrapper[4672]: I0217 16:24:05.010989 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"00e5bb32-954a-444f-b6d5-74e4c519d0c1","Type":"ContainerStarted","Data":"08e071a47eb7d41846924aa64ef729d8aced44bb499e6f9f1f7d5f854fc8530b"}
Feb 17 16:24:05 crc kubenswrapper[4672]: I0217 16:24:05.013235 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6fd764cdf6-q8qss" event={"ID":"d47f2556-58c0-4d11-b435-a08e06a11c76","Type":"ContainerStarted","Data":"99acb529ff0a659aa5ec54eba234da115809aa42e955890f3b923106acf19d1b"}
Feb 17 16:24:05 crc kubenswrapper[4672]: I0217 16:24:05.029327 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-db-sync-scpk5" podStartSLOduration=2.440618515 podStartE2EDuration="55.02931069s" podCreationTimestamp="2026-02-17 16:23:10 +0000 UTC" firstStartedPulling="2026-02-17 16:23:11.691417233 +0000 UTC m=+1200.445505965" lastFinishedPulling="2026-02-17 16:24:04.280109408 +0000 UTC m=+1253.034198140" observedRunningTime="2026-02-17 16:24:05.022994503 +0000 UTC m=+1253.777083225" watchObservedRunningTime="2026-02-17 16:24:05.02931069 +0000 UTC m=+1253.783399422"
Feb 17 16:24:05 crc kubenswrapper[4672]: I0217 16:24:05.042800 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.085416934 podStartE2EDuration="6.042783165s" podCreationTimestamp="2026-02-17 16:23:59 +0000 UTC" firstStartedPulling="2026-02-17 16:24:01.134397938 +0000 UTC m=+1249.888486660" lastFinishedPulling="2026-02-17 16:24:03.091764159 +0000 UTC m=+1251.845852891" observedRunningTime="2026-02-17 16:24:05.037758002 +0000 UTC m=+1253.791846724" watchObservedRunningTime="2026-02-17 16:24:05.042783165 +0000 UTC m=+1253.796871897"
Feb 17 16:24:05 crc kubenswrapper[4672]: I0217 16:24:05.299394 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 17 16:24:05 crc kubenswrapper[4672]: I0217 16:24:05.496041 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 17 16:24:05 crc kubenswrapper[4672]: I0217 16:24:05.879287 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5ff45d6f4b-l6mqf"]
Feb 17 16:24:05 crc kubenswrapper[4672]: E0217 16:24:05.879670 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ef4f1cd-9771-44fb-aa56-42fcb86e51fb" containerName="init"
Feb 17 16:24:05 crc kubenswrapper[4672]: I0217 16:24:05.879684 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ef4f1cd-9771-44fb-aa56-42fcb86e51fb" containerName="init"
Feb 17 16:24:05 crc kubenswrapper[4672]: I0217 16:24:05.879885 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ef4f1cd-9771-44fb-aa56-42fcb86e51fb" containerName="init"
Feb 17 16:24:05 crc kubenswrapper[4672]: I0217 16:24:05.900455 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5ff45d6f4b-l6mqf"
Feb 17 16:24:05 crc kubenswrapper[4672]: I0217 16:24:05.904524 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Feb 17 16:24:05 crc kubenswrapper[4672]: I0217 16:24:05.913260 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5ff45d6f4b-l6mqf"]
Feb 17 16:24:05 crc kubenswrapper[4672]: I0217 16:24:05.919757 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Feb 17 16:24:05 crc kubenswrapper[4672]: I0217 16:24:05.986585 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wwgd\" (UniqueName: \"kubernetes.io/projected/e2bd5a6e-90e9-487c-bc75-ee390f1f97c9-kube-api-access-8wwgd\") pod \"barbican-api-5ff45d6f4b-l6mqf\" (UID: \"e2bd5a6e-90e9-487c-bc75-ee390f1f97c9\") " pod="openstack/barbican-api-5ff45d6f4b-l6mqf"
Feb 17 16:24:05 crc kubenswrapper[4672]: I0217 16:24:05.986637 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2bd5a6e-90e9-487c-bc75-ee390f1f97c9-combined-ca-bundle\") pod \"barbican-api-5ff45d6f4b-l6mqf\" (UID: \"e2bd5a6e-90e9-487c-bc75-ee390f1f97c9\") " pod="openstack/barbican-api-5ff45d6f4b-l6mqf"
Feb 17 16:24:05 crc kubenswrapper[4672]: I0217 16:24:05.986717 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2bd5a6e-90e9-487c-bc75-ee390f1f97c9-logs\") pod \"barbican-api-5ff45d6f4b-l6mqf\" (UID: \"e2bd5a6e-90e9-487c-bc75-ee390f1f97c9\") " pod="openstack/barbican-api-5ff45d6f4b-l6mqf"
Feb 17 16:24:05 crc kubenswrapper[4672]: I0217 16:24:05.986763 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2bd5a6e-90e9-487c-bc75-ee390f1f97c9-config-data\") pod \"barbican-api-5ff45d6f4b-l6mqf\" (UID: \"e2bd5a6e-90e9-487c-bc75-ee390f1f97c9\") " pod="openstack/barbican-api-5ff45d6f4b-l6mqf"
Feb 17 16:24:05 crc kubenswrapper[4672]: I0217 16:24:05.986790 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2bd5a6e-90e9-487c-bc75-ee390f1f97c9-config-data-custom\") pod \"barbican-api-5ff45d6f4b-l6mqf\" (UID: \"e2bd5a6e-90e9-487c-bc75-ee390f1f97c9\") " pod="openstack/barbican-api-5ff45d6f4b-l6mqf"
Feb 17 16:24:05 crc kubenswrapper[4672]: I0217 16:24:05.986811 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2bd5a6e-90e9-487c-bc75-ee390f1f97c9-internal-tls-certs\") pod \"barbican-api-5ff45d6f4b-l6mqf\" (UID: \"e2bd5a6e-90e9-487c-bc75-ee390f1f97c9\") " pod="openstack/barbican-api-5ff45d6f4b-l6mqf"
Feb 17 16:24:05 crc kubenswrapper[4672]: I0217 16:24:05.986826 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2bd5a6e-90e9-487c-bc75-ee390f1f97c9-public-tls-certs\") pod \"barbican-api-5ff45d6f4b-l6mqf\" (UID: \"e2bd5a6e-90e9-487c-bc75-ee390f1f97c9\") " pod="openstack/barbican-api-5ff45d6f4b-l6mqf"
Feb 17 16:24:06 crc kubenswrapper[4672]: I0217 16:24:06.012751 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ef4f1cd-9771-44fb-aa56-42fcb86e51fb" path="/var/lib/kubelet/pods/9ef4f1cd-9771-44fb-aa56-42fcb86e51fb/volumes"
Feb 17 16:24:06 crc kubenswrapper[4672]: I0217 16:24:06.088790 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2bd5a6e-90e9-487c-bc75-ee390f1f97c9-logs\") pod \"barbican-api-5ff45d6f4b-l6mqf\" (UID: \"e2bd5a6e-90e9-487c-bc75-ee390f1f97c9\") " pod="openstack/barbican-api-5ff45d6f4b-l6mqf"
Feb 17 16:24:06 crc kubenswrapper[4672]: I0217 16:24:06.088906 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2bd5a6e-90e9-487c-bc75-ee390f1f97c9-config-data\") pod \"barbican-api-5ff45d6f4b-l6mqf\" (UID: \"e2bd5a6e-90e9-487c-bc75-ee390f1f97c9\") " pod="openstack/barbican-api-5ff45d6f4b-l6mqf"
Feb 17 16:24:06 crc kubenswrapper[4672]: I0217 16:24:06.088959 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2bd5a6e-90e9-487c-bc75-ee390f1f97c9-config-data-custom\") pod \"barbican-api-5ff45d6f4b-l6mqf\" (UID: \"e2bd5a6e-90e9-487c-bc75-ee390f1f97c9\") " pod="openstack/barbican-api-5ff45d6f4b-l6mqf"
Feb 17 16:24:06 crc kubenswrapper[4672]: I0217 16:24:06.088979 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2bd5a6e-90e9-487c-bc75-ee390f1f97c9-internal-tls-certs\") pod \"barbican-api-5ff45d6f4b-l6mqf\" (UID: \"e2bd5a6e-90e9-487c-bc75-ee390f1f97c9\") " pod="openstack/barbican-api-5ff45d6f4b-l6mqf"
Feb 17 16:24:06 crc kubenswrapper[4672]: I0217 16:24:06.088997 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2bd5a6e-90e9-487c-bc75-ee390f1f97c9-public-tls-certs\") pod \"barbican-api-5ff45d6f4b-l6mqf\" (UID: \"e2bd5a6e-90e9-487c-bc75-ee390f1f97c9\") " pod="openstack/barbican-api-5ff45d6f4b-l6mqf"
Feb 17 16:24:06 crc kubenswrapper[4672]: I0217 16:24:06.089103 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wwgd\" (UniqueName: \"kubernetes.io/projected/e2bd5a6e-90e9-487c-bc75-ee390f1f97c9-kube-api-access-8wwgd\") pod \"barbican-api-5ff45d6f4b-l6mqf\" (UID: \"e2bd5a6e-90e9-487c-bc75-ee390f1f97c9\") " pod="openstack/barbican-api-5ff45d6f4b-l6mqf"
Feb 17 16:24:06 crc kubenswrapper[4672]: I0217 16:24:06.089122 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2bd5a6e-90e9-487c-bc75-ee390f1f97c9-combined-ca-bundle\") pod \"barbican-api-5ff45d6f4b-l6mqf\" (UID: \"e2bd5a6e-90e9-487c-bc75-ee390f1f97c9\") " pod="openstack/barbican-api-5ff45d6f4b-l6mqf"
Feb 17 16:24:06 crc kubenswrapper[4672]: I0217 16:24:06.091056 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2bd5a6e-90e9-487c-bc75-ee390f1f97c9-logs\") pod \"barbican-api-5ff45d6f4b-l6mqf\" (UID: \"e2bd5a6e-90e9-487c-bc75-ee390f1f97c9\") " pod="openstack/barbican-api-5ff45d6f4b-l6mqf"
Feb 17 16:24:06 crc kubenswrapper[4672]: I0217 16:24:06.095874 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2bd5a6e-90e9-487c-bc75-ee390f1f97c9-internal-tls-certs\") pod \"barbican-api-5ff45d6f4b-l6mqf\" (UID: \"e2bd5a6e-90e9-487c-bc75-ee390f1f97c9\") " pod="openstack/barbican-api-5ff45d6f4b-l6mqf"
Feb 17 16:24:06 crc kubenswrapper[4672]: I0217 16:24:06.096348 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2bd5a6e-90e9-487c-bc75-ee390f1f97c9-config-data-custom\") pod \"barbican-api-5ff45d6f4b-l6mqf\" (UID: \"e2bd5a6e-90e9-487c-bc75-ee390f1f97c9\") " pod="openstack/barbican-api-5ff45d6f4b-l6mqf"
Feb 17 16:24:06 crc kubenswrapper[4672]: I0217 16:24:06.101537 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2bd5a6e-90e9-487c-bc75-ee390f1f97c9-public-tls-certs\") pod \"barbican-api-5ff45d6f4b-l6mqf\" (UID: \"e2bd5a6e-90e9-487c-bc75-ee390f1f97c9\") " pod="openstack/barbican-api-5ff45d6f4b-l6mqf"
Feb 17 16:24:06 crc kubenswrapper[4672]: I0217 16:24:06.104972 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2bd5a6e-90e9-487c-bc75-ee390f1f97c9-combined-ca-bundle\") pod \"barbican-api-5ff45d6f4b-l6mqf\" (UID: \"e2bd5a6e-90e9-487c-bc75-ee390f1f97c9\") " pod="openstack/barbican-api-5ff45d6f4b-l6mqf"
Feb 17 16:24:06 crc kubenswrapper[4672]: I0217 16:24:06.109312 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2bd5a6e-90e9-487c-bc75-ee390f1f97c9-config-data\") pod \"barbican-api-5ff45d6f4b-l6mqf\" (UID: \"e2bd5a6e-90e9-487c-bc75-ee390f1f97c9\") " pod="openstack/barbican-api-5ff45d6f4b-l6mqf"
Feb 17 16:24:06 crc kubenswrapper[4672]: I0217 16:24:06.113033 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wwgd\" (UniqueName: \"kubernetes.io/projected/e2bd5a6e-90e9-487c-bc75-ee390f1f97c9-kube-api-access-8wwgd\") pod \"barbican-api-5ff45d6f4b-l6mqf\" (UID: \"e2bd5a6e-90e9-487c-bc75-ee390f1f97c9\") " pod="openstack/barbican-api-5ff45d6f4b-l6mqf"
Feb 17 16:24:06 crc kubenswrapper[4672]: I0217 16:24:06.241215 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5ff45d6f4b-l6mqf"
Feb 17 16:24:06 crc kubenswrapper[4672]: I0217 16:24:06.623031 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 17 16:24:10 crc kubenswrapper[4672]: I0217 16:24:10.081606 4672 generic.go:334] "Generic (PLEG): container finished" podID="fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2" containerID="b4eeadfb9ece5de10f49b2da19997621f6375e0b1e4f58923e6410083e99843a" exitCode=0
Feb 17 16:24:10 crc kubenswrapper[4672]: I0217 16:24:10.082159 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-scpk5" event={"ID":"fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2","Type":"ContainerDied","Data":"b4eeadfb9ece5de10f49b2da19997621f6375e0b1e4f58923e6410083e99843a"}
Feb 17 16:24:10 crc kubenswrapper[4672]: E0217 16:24:10.418149 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"ceilometer-notification-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="46707458-3c2e-4f29-bda9-dd5ebc8b60cb"
Feb 17 16:24:10 crc kubenswrapper[4672]: I0217 16:24:10.537700 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5784cf869f-mlrbg"
Feb 17 16:24:10 crc kubenswrapper[4672]: I0217 16:24:10.614451 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-txwzd"]
Feb 17 16:24:10 crc kubenswrapper[4672]: I0217 16:24:10.614751 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b5c85b87-txwzd" podUID="c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6" containerName="dnsmasq-dns" containerID="cri-o://bec6e8802ffbb46e4ea68ddc569cbf88eceab6369527a15a8405f0eb1391cd4e" gracePeriod=10
Feb 17 16:24:10 crc kubenswrapper[4672]: I0217 16:24:10.657495 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5ff45d6f4b-l6mqf"]
Feb 17 16:24:10 crc kubenswrapper[4672]: W0217 16:24:10.659451 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2bd5a6e_90e9_487c_bc75_ee390f1f97c9.slice/crio-277d6a3602db5fe0d15850a0134f8ff7c1d4f045a01cc1349af0ca84bba19d42 WatchSource:0}: Error finding container 277d6a3602db5fe0d15850a0134f8ff7c1d4f045a01cc1349af0ca84bba19d42: Status 404 returned error can't find the container with id 277d6a3602db5fe0d15850a0134f8ff7c1d4f045a01cc1349af0ca84bba19d42
Feb 17 16:24:10 crc kubenswrapper[4672]: I0217 16:24:10.745189 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Feb 17 16:24:10 crc kubenswrapper[4672]: I0217 16:24:10.751003 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-b745f78d8-8tmpn"
Feb 17 16:24:10 crc kubenswrapper[4672]: I0217 16:24:10.793503 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 17 16:24:10 crc kubenswrapper[4672]: I0217 16:24:10.911060 4672 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8b5c85b87-txwzd" podUID="c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.167:5353: connect: connection refused"
Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.123713 4672 generic.go:334] "Generic (PLEG): container finished" podID="c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6" containerID="bec6e8802ffbb46e4ea68ddc569cbf88eceab6369527a15a8405f0eb1391cd4e" exitCode=0
Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.123773 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-txwzd" event={"ID":"c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6","Type":"ContainerDied","Data":"bec6e8802ffbb46e4ea68ddc569cbf88eceab6369527a15a8405f0eb1391cd4e"}
Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.128283 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6fd59c5bf8-d6vtf"
Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.151775 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46707458-3c2e-4f29-bda9-dd5ebc8b60cb","Type":"ContainerStarted","Data":"6bab3d177e51b67df76fdc579f3463bb33a49b1499ac8dcbc5471e00e63afb5e"}
Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.151935 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="46707458-3c2e-4f29-bda9-dd5ebc8b60cb" containerName="sg-core" containerID="cri-o://6fb6a5a4b16113704b1f0309d5a7bfb303e1d71c6eb57e4be9ca0efebde8268e" gracePeriod=30
Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.152144 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.152194 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="46707458-3c2e-4f29-bda9-dd5ebc8b60cb" containerName="proxy-httpd" containerID="cri-o://6bab3d177e51b67df76fdc579f3463bb33a49b1499ac8dcbc5471e00e63afb5e" gracePeriod=30
Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.178610 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6fdb669fcc-nckw2"]
Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.179083 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6fdb669fcc-nckw2" podUID="4ec72261-e568-4e6d-83e7-aee39c008aab" containerName="neutron-api" containerID="cri-o://0154bd3b20823bfae5abe9a8d2be4113635ea2b75020d704b5e87c5b4dda3d5b" gracePeriod=30
Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.179459 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6fdb669fcc-nckw2" podUID="4ec72261-e568-4e6d-83e7-aee39c008aab" containerName="neutron-httpd" containerID="cri-o://6b9b058ffb60c57f7cf20d1ae869e53314675588f1f076a940daf371457e0622" gracePeriod=30
Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.186528 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5ff45d6f4b-l6mqf" event={"ID":"e2bd5a6e-90e9-487c-bc75-ee390f1f97c9","Type":"ContainerStarted","Data":"277d6a3602db5fe0d15850a0134f8ff7c1d4f045a01cc1349af0ca84bba19d42"}
Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.186670 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="00e5bb32-954a-444f-b6d5-74e4c519d0c1" containerName="cinder-scheduler" containerID="cri-o://4abc94d493e2c27214b2484b2cb764aa476dddfbb0b6c883f8b6f849067b0cf0" gracePeriod=30
Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.186689 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="00e5bb32-954a-444f-b6d5-74e4c519d0c1" containerName="probe" containerID="cri-o://08e071a47eb7d41846924aa64ef729d8aced44bb499e6f9f1f7d5f854fc8530b" gracePeriod=30
Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.217579 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-78b4dc5857-f54l5"]
Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.219568 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-78b4dc5857-f54l5"
Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.236348 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6fd59c5bf8-d6vtf"
Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.249451 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-78b4dc5857-f54l5"]
Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.316366 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ddee357-262e-497b-aa02-4a2604fadc41-public-tls-certs\") pod \"neutron-78b4dc5857-f54l5\" (UID: \"4ddee357-262e-497b-aa02-4a2604fadc41\") " pod="openstack/neutron-78b4dc5857-f54l5"
Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.316522 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ddee357-262e-497b-aa02-4a2604fadc41-internal-tls-certs\") pod \"neutron-78b4dc5857-f54l5\" (UID: \"4ddee357-262e-497b-aa02-4a2604fadc41\") " pod="openstack/neutron-78b4dc5857-f54l5"
Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.316665 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ddee357-262e-497b-aa02-4a2604fadc41-combined-ca-bundle\") pod \"neutron-78b4dc5857-f54l5\" (UID: \"4ddee357-262e-497b-aa02-4a2604fadc41\") " pod="openstack/neutron-78b4dc5857-f54l5"
Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.316828 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4ddee357-262e-497b-aa02-4a2604fadc41-httpd-config\") pod \"neutron-78b4dc5857-f54l5\" (UID: \"4ddee357-262e-497b-aa02-4a2604fadc41\") " pod="openstack/neutron-78b4dc5857-f54l5"
Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.316868 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd8s7\" (UniqueName: \"kubernetes.io/projected/4ddee357-262e-497b-aa02-4a2604fadc41-kube-api-access-sd8s7\") pod \"neutron-78b4dc5857-f54l5\" (UID: \"4ddee357-262e-497b-aa02-4a2604fadc41\") " pod="openstack/neutron-78b4dc5857-f54l5"
Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.316957 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ddee357-262e-497b-aa02-4a2604fadc41-ovndb-tls-certs\") pod \"neutron-78b4dc5857-f54l5\" (UID: \"4ddee357-262e-497b-aa02-4a2604fadc41\") " pod="openstack/neutron-78b4dc5857-f54l5"
Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.316975 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4ddee357-262e-497b-aa02-4a2604fadc41-config\") pod \"neutron-78b4dc5857-f54l5\" (UID: \"4ddee357-262e-497b-aa02-4a2604fadc41\") " pod="openstack/neutron-78b4dc5857-f54l5"
Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.394765 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-txwzd"
Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.418743 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ddee357-262e-497b-aa02-4a2604fadc41-internal-tls-certs\") pod \"neutron-78b4dc5857-f54l5\" (UID: \"4ddee357-262e-497b-aa02-4a2604fadc41\") " pod="openstack/neutron-78b4dc5857-f54l5"
Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.418873 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ddee357-262e-497b-aa02-4a2604fadc41-combined-ca-bundle\") pod \"neutron-78b4dc5857-f54l5\" (UID: \"4ddee357-262e-497b-aa02-4a2604fadc41\") " pod="openstack/neutron-78b4dc5857-f54l5"
Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.418916 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4ddee357-262e-497b-aa02-4a2604fadc41-httpd-config\") pod \"neutron-78b4dc5857-f54l5\" (UID: \"4ddee357-262e-497b-aa02-4a2604fadc41\") " pod="openstack/neutron-78b4dc5857-f54l5"
Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.418946 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd8s7\" (UniqueName: \"kubernetes.io/projected/4ddee357-262e-497b-aa02-4a2604fadc41-kube-api-access-sd8s7\") pod \"neutron-78b4dc5857-f54l5\" (UID: \"4ddee357-262e-497b-aa02-4a2604fadc41\") " pod="openstack/neutron-78b4dc5857-f54l5"
Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.420373 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ddee357-262e-497b-aa02-4a2604fadc41-ovndb-tls-certs\") pod \"neutron-78b4dc5857-f54l5\" (UID: \"4ddee357-262e-497b-aa02-4a2604fadc41\") " pod="openstack/neutron-78b4dc5857-f54l5"
Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.420438 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4ddee357-262e-497b-aa02-4a2604fadc41-config\") pod \"neutron-78b4dc5857-f54l5\" (UID: \"4ddee357-262e-497b-aa02-4a2604fadc41\") " pod="openstack/neutron-78b4dc5857-f54l5"
Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.420635 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ddee357-262e-497b-aa02-4a2604fadc41-public-tls-certs\") pod \"neutron-78b4dc5857-f54l5\" (UID: \"4ddee357-262e-497b-aa02-4a2604fadc41\") " pod="openstack/neutron-78b4dc5857-f54l5"
Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.425062 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ddee357-262e-497b-aa02-4a2604fadc41-public-tls-certs\") pod \"neutron-78b4dc5857-f54l5\" (UID: \"4ddee357-262e-497b-aa02-4a2604fadc41\") " pod="openstack/neutron-78b4dc5857-f54l5"
Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.436101 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ddee357-262e-497b-aa02-4a2604fadc41-ovndb-tls-certs\") pod \"neutron-78b4dc5857-f54l5\" (UID: \"4ddee357-262e-497b-aa02-4a2604fadc41\") " pod="openstack/neutron-78b4dc5857-f54l5"
Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.436227 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ddee357-262e-497b-aa02-4a2604fadc41-combined-ca-bundle\") pod \"neutron-78b4dc5857-f54l5\" (UID: \"4ddee357-262e-497b-aa02-4a2604fadc41\") " pod="openstack/neutron-78b4dc5857-f54l5"
Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.436409 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"httpd-config\" (UniqueName: \"kubernetes.io/secret/4ddee357-262e-497b-aa02-4a2604fadc41-httpd-config\") pod \"neutron-78b4dc5857-f54l5\" (UID: \"4ddee357-262e-497b-aa02-4a2604fadc41\") " pod="openstack/neutron-78b4dc5857-f54l5" Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.436826 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4ddee357-262e-497b-aa02-4a2604fadc41-config\") pod \"neutron-78b4dc5857-f54l5\" (UID: \"4ddee357-262e-497b-aa02-4a2604fadc41\") " pod="openstack/neutron-78b4dc5857-f54l5" Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.442805 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ddee357-262e-497b-aa02-4a2604fadc41-internal-tls-certs\") pod \"neutron-78b4dc5857-f54l5\" (UID: \"4ddee357-262e-497b-aa02-4a2604fadc41\") " pod="openstack/neutron-78b4dc5857-f54l5" Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.448102 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd8s7\" (UniqueName: \"kubernetes.io/projected/4ddee357-262e-497b-aa02-4a2604fadc41-kube-api-access-sd8s7\") pod \"neutron-78b4dc5857-f54l5\" (UID: \"4ddee357-262e-497b-aa02-4a2604fadc41\") " pod="openstack/neutron-78b4dc5857-f54l5" Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.522469 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zn85\" (UniqueName: \"kubernetes.io/projected/c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6-kube-api-access-2zn85\") pod \"c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6\" (UID: \"c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6\") " Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.522581 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6-ovsdbserver-nb\") pod 
\"c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6\" (UID: \"c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6\") " Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.522607 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6-config\") pod \"c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6\" (UID: \"c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6\") " Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.522632 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6-ovsdbserver-sb\") pod \"c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6\" (UID: \"c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6\") " Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.522671 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6-dns-svc\") pod \"c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6\" (UID: \"c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6\") " Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.522883 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6-dns-swift-storage-0\") pod \"c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6\" (UID: \"c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6\") " Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.530824 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6-kube-api-access-2zn85" (OuterVolumeSpecName: "kube-api-access-2zn85") pod "c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6" (UID: "c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6"). InnerVolumeSpecName "kube-api-access-2zn85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.554870 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-78b4dc5857-f54l5" Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.567131 4672 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6fdb669fcc-nckw2" podUID="4ec72261-e568-4e6d-83e7-aee39c008aab" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.173:9696/\": read tcp 10.217.0.2:52640->10.217.0.173:9696: read: connection reset by peer" Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.585807 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6" (UID: "c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.589472 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6" (UID: "c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.590000 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6-config" (OuterVolumeSpecName: "config") pod "c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6" (UID: "c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.624813 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.624854 4672 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.624869 4672 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.624878 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zn85\" (UniqueName: \"kubernetes.io/projected/c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6-kube-api-access-2zn85\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.635670 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6" (UID: "c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.637331 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6" (UID: "c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.726322 4672 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:11 crc kubenswrapper[4672]: I0217 16:24:11.726355 4672 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.032002 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-scpk5" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.138342 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2-config-data\") pod \"fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2\" (UID: \"fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2\") " Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.138652 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2-certs\") pod \"fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2\" (UID: \"fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2\") " Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.138816 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2-scripts\") pod \"fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2\" (UID: \"fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2\") " Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.138897 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k62j9\" (UniqueName: 
\"kubernetes.io/projected/fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2-kube-api-access-k62j9\") pod \"fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2\" (UID: \"fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2\") " Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.145111 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2-combined-ca-bundle\") pod \"fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2\" (UID: \"fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2\") " Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.152342 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2-scripts" (OuterVolumeSpecName: "scripts") pod "fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2" (UID: "fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.166824 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2-kube-api-access-k62j9" (OuterVolumeSpecName: "kube-api-access-k62j9") pod "fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2" (UID: "fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2"). InnerVolumeSpecName "kube-api-access-k62j9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.176156 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2-certs" (OuterVolumeSpecName: "certs") pod "fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2" (UID: "fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.215845 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2-config-data" (OuterVolumeSpecName: "config-data") pod "fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2" (UID: "fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.242858 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-storageinit-45n2r"] Feb 17 16:24:12 crc kubenswrapper[4672]: E0217 16:24:12.243739 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2" containerName="cloudkitty-db-sync" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.243752 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2" containerName="cloudkitty-db-sync" Feb 17 16:24:12 crc kubenswrapper[4672]: E0217 16:24:12.243765 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6" containerName="init" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.243771 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6" containerName="init" Feb 17 16:24:12 crc kubenswrapper[4672]: E0217 16:24:12.243776 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6" containerName="dnsmasq-dns" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.243783 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6" containerName="dnsmasq-dns" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.244004 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2" 
containerName="cloudkitty-db-sync" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.244021 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6" containerName="dnsmasq-dns" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.244800 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-45n2r" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.250260 4672 generic.go:334] "Generic (PLEG): container finished" podID="4ec72261-e568-4e6d-83e7-aee39c008aab" containerID="6b9b058ffb60c57f7cf20d1ae869e53314675588f1f076a940daf371457e0622" exitCode=0 Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.250334 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fdb669fcc-nckw2" event={"ID":"4ec72261-e568-4e6d-83e7-aee39c008aab","Type":"ContainerDied","Data":"6b9b058ffb60c57f7cf20d1ae869e53314675588f1f076a940daf371457e0622"} Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.252729 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-45n2r"] Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.252988 4672 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.253017 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k62j9\" (UniqueName: \"kubernetes.io/projected/fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2-kube-api-access-k62j9\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.253029 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 
16:24:12.253038 4672 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.254795 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2" (UID: "fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.293753 4672 generic.go:334] "Generic (PLEG): container finished" podID="00e5bb32-954a-444f-b6d5-74e4c519d0c1" containerID="08e071a47eb7d41846924aa64ef729d8aced44bb499e6f9f1f7d5f854fc8530b" exitCode=0 Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.293834 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"00e5bb32-954a-444f-b6d5-74e4c519d0c1","Type":"ContainerDied","Data":"08e071a47eb7d41846924aa64ef729d8aced44bb499e6f9f1f7d5f854fc8530b"} Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.324020 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-txwzd" event={"ID":"c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6","Type":"ContainerDied","Data":"4fede41a6b3d442704cc0b64a71cfcde9ecee5251694f4b5e0c64343367e5adb"} Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.324073 4672 scope.go:117] "RemoveContainer" containerID="bec6e8802ffbb46e4ea68ddc569cbf88eceab6369527a15a8405f0eb1391cd4e" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.324212 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-txwzd" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.337176 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-scpk5" event={"ID":"fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2","Type":"ContainerDied","Data":"923bfe0ff7b74505e673b35dec55ff8b807af5db0da4ef4593a1f23f433cd321"} Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.337209 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="923bfe0ff7b74505e673b35dec55ff8b807af5db0da4ef4593a1f23f433cd321" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.337300 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-scpk5" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.350914 4672 generic.go:334] "Generic (PLEG): container finished" podID="46707458-3c2e-4f29-bda9-dd5ebc8b60cb" containerID="6bab3d177e51b67df76fdc579f3463bb33a49b1499ac8dcbc5471e00e63afb5e" exitCode=0 Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.350948 4672 generic.go:334] "Generic (PLEG): container finished" podID="46707458-3c2e-4f29-bda9-dd5ebc8b60cb" containerID="6fb6a5a4b16113704b1f0309d5a7bfb303e1d71c6eb57e4be9ca0efebde8268e" exitCode=2 Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.350996 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46707458-3c2e-4f29-bda9-dd5ebc8b60cb","Type":"ContainerDied","Data":"6bab3d177e51b67df76fdc579f3463bb33a49b1499ac8dcbc5471e00e63afb5e"} Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.351024 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46707458-3c2e-4f29-bda9-dd5ebc8b60cb","Type":"ContainerDied","Data":"6fb6a5a4b16113704b1f0309d5a7bfb303e1d71c6eb57e4be9ca0efebde8268e"} Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.353766 4672 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/barbican-api-5ff45d6f4b-l6mqf" event={"ID":"e2bd5a6e-90e9-487c-bc75-ee390f1f97c9","Type":"ContainerStarted","Data":"3cf7873fd4abada7ecf8267d3d8b48e19da30aa56f7a68d4fc5fdea4112ee514"} Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.354845 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f4819e1-9f5d-4b90-9a97-97c8ac76cc77-combined-ca-bundle\") pod \"cloudkitty-storageinit-45n2r\" (UID: \"9f4819e1-9f5d-4b90-9a97-97c8ac76cc77\") " pod="openstack/cloudkitty-storageinit-45n2r" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.354919 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f4819e1-9f5d-4b90-9a97-97c8ac76cc77-scripts\") pod \"cloudkitty-storageinit-45n2r\" (UID: \"9f4819e1-9f5d-4b90-9a97-97c8ac76cc77\") " pod="openstack/cloudkitty-storageinit-45n2r" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.354959 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f4819e1-9f5d-4b90-9a97-97c8ac76cc77-config-data\") pod \"cloudkitty-storageinit-45n2r\" (UID: \"9f4819e1-9f5d-4b90-9a97-97c8ac76cc77\") " pod="openstack/cloudkitty-storageinit-45n2r" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.354999 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqj4z\" (UniqueName: \"kubernetes.io/projected/9f4819e1-9f5d-4b90-9a97-97c8ac76cc77-kube-api-access-lqj4z\") pod \"cloudkitty-storageinit-45n2r\" (UID: \"9f4819e1-9f5d-4b90-9a97-97c8ac76cc77\") " pod="openstack/cloudkitty-storageinit-45n2r" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.355042 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"certs\" (UniqueName: \"kubernetes.io/projected/9f4819e1-9f5d-4b90-9a97-97c8ac76cc77-certs\") pod \"cloudkitty-storageinit-45n2r\" (UID: \"9f4819e1-9f5d-4b90-9a97-97c8ac76cc77\") " pod="openstack/cloudkitty-storageinit-45n2r" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.355151 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.367614 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-txwzd"] Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.374646 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-txwzd"] Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.384630 4672 scope.go:117] "RemoveContainer" containerID="72ec42bda12a322ab0d79d2b2bdb335b6a491e27fa6bf7c3927a0fbf39b7de2e" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.456777 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f4819e1-9f5d-4b90-9a97-97c8ac76cc77-combined-ca-bundle\") pod \"cloudkitty-storageinit-45n2r\" (UID: \"9f4819e1-9f5d-4b90-9a97-97c8ac76cc77\") " pod="openstack/cloudkitty-storageinit-45n2r" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.457039 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f4819e1-9f5d-4b90-9a97-97c8ac76cc77-scripts\") pod \"cloudkitty-storageinit-45n2r\" (UID: \"9f4819e1-9f5d-4b90-9a97-97c8ac76cc77\") " pod="openstack/cloudkitty-storageinit-45n2r" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.457180 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9f4819e1-9f5d-4b90-9a97-97c8ac76cc77-config-data\") pod \"cloudkitty-storageinit-45n2r\" (UID: \"9f4819e1-9f5d-4b90-9a97-97c8ac76cc77\") " pod="openstack/cloudkitty-storageinit-45n2r" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.457328 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqj4z\" (UniqueName: \"kubernetes.io/projected/9f4819e1-9f5d-4b90-9a97-97c8ac76cc77-kube-api-access-lqj4z\") pod \"cloudkitty-storageinit-45n2r\" (UID: \"9f4819e1-9f5d-4b90-9a97-97c8ac76cc77\") " pod="openstack/cloudkitty-storageinit-45n2r" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.457460 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/9f4819e1-9f5d-4b90-9a97-97c8ac76cc77-certs\") pod \"cloudkitty-storageinit-45n2r\" (UID: \"9f4819e1-9f5d-4b90-9a97-97c8ac76cc77\") " pod="openstack/cloudkitty-storageinit-45n2r" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.464532 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f4819e1-9f5d-4b90-9a97-97c8ac76cc77-combined-ca-bundle\") pod \"cloudkitty-storageinit-45n2r\" (UID: \"9f4819e1-9f5d-4b90-9a97-97c8ac76cc77\") " pod="openstack/cloudkitty-storageinit-45n2r" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.477209 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f4819e1-9f5d-4b90-9a97-97c8ac76cc77-scripts\") pod \"cloudkitty-storageinit-45n2r\" (UID: \"9f4819e1-9f5d-4b90-9a97-97c8ac76cc77\") " pod="openstack/cloudkitty-storageinit-45n2r" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.483809 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/9f4819e1-9f5d-4b90-9a97-97c8ac76cc77-certs\") pod \"cloudkitty-storageinit-45n2r\" 
(UID: \"9f4819e1-9f5d-4b90-9a97-97c8ac76cc77\") " pod="openstack/cloudkitty-storageinit-45n2r" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.483827 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f4819e1-9f5d-4b90-9a97-97c8ac76cc77-config-data\") pod \"cloudkitty-storageinit-45n2r\" (UID: \"9f4819e1-9f5d-4b90-9a97-97c8ac76cc77\") " pod="openstack/cloudkitty-storageinit-45n2r" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.488005 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqj4z\" (UniqueName: \"kubernetes.io/projected/9f4819e1-9f5d-4b90-9a97-97c8ac76cc77-kube-api-access-lqj4z\") pod \"cloudkitty-storageinit-45n2r\" (UID: \"9f4819e1-9f5d-4b90-9a97-97c8ac76cc77\") " pod="openstack/cloudkitty-storageinit-45n2r" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.618023 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-45n2r" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.656429 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-78b4dc5857-f54l5"] Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.675305 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.764625 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-scripts\") pod \"46707458-3c2e-4f29-bda9-dd5ebc8b60cb\" (UID: \"46707458-3c2e-4f29-bda9-dd5ebc8b60cb\") " Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.764866 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-log-httpd\") pod \"46707458-3c2e-4f29-bda9-dd5ebc8b60cb\" (UID: \"46707458-3c2e-4f29-bda9-dd5ebc8b60cb\") " Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.764890 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-combined-ca-bundle\") pod \"46707458-3c2e-4f29-bda9-dd5ebc8b60cb\" (UID: \"46707458-3c2e-4f29-bda9-dd5ebc8b60cb\") " Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.764909 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bk6bj\" (UniqueName: \"kubernetes.io/projected/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-kube-api-access-bk6bj\") pod \"46707458-3c2e-4f29-bda9-dd5ebc8b60cb\" (UID: \"46707458-3c2e-4f29-bda9-dd5ebc8b60cb\") " Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.764950 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-run-httpd\") pod \"46707458-3c2e-4f29-bda9-dd5ebc8b60cb\" (UID: \"46707458-3c2e-4f29-bda9-dd5ebc8b60cb\") " Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.765078 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-sg-core-conf-yaml\") pod \"46707458-3c2e-4f29-bda9-dd5ebc8b60cb\" (UID: \"46707458-3c2e-4f29-bda9-dd5ebc8b60cb\") " Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.765144 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-config-data\") pod \"46707458-3c2e-4f29-bda9-dd5ebc8b60cb\" (UID: \"46707458-3c2e-4f29-bda9-dd5ebc8b60cb\") " Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.770792 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-scripts" (OuterVolumeSpecName: "scripts") pod "46707458-3c2e-4f29-bda9-dd5ebc8b60cb" (UID: "46707458-3c2e-4f29-bda9-dd5ebc8b60cb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.771142 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "46707458-3c2e-4f29-bda9-dd5ebc8b60cb" (UID: "46707458-3c2e-4f29-bda9-dd5ebc8b60cb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.777781 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-kube-api-access-bk6bj" (OuterVolumeSpecName: "kube-api-access-bk6bj") pod "46707458-3c2e-4f29-bda9-dd5ebc8b60cb" (UID: "46707458-3c2e-4f29-bda9-dd5ebc8b60cb"). InnerVolumeSpecName "kube-api-access-bk6bj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.778109 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "46707458-3c2e-4f29-bda9-dd5ebc8b60cb" (UID: "46707458-3c2e-4f29-bda9-dd5ebc8b60cb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.809943 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46707458-3c2e-4f29-bda9-dd5ebc8b60cb" (UID: "46707458-3c2e-4f29-bda9-dd5ebc8b60cb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.814471 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "46707458-3c2e-4f29-bda9-dd5ebc8b60cb" (UID: "46707458-3c2e-4f29-bda9-dd5ebc8b60cb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.856800 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-config-data" (OuterVolumeSpecName: "config-data") pod "46707458-3c2e-4f29-bda9-dd5ebc8b60cb" (UID: "46707458-3c2e-4f29-bda9-dd5ebc8b60cb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.857656 4672 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6fdb669fcc-nckw2" podUID="4ec72261-e568-4e6d-83e7-aee39c008aab" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.173:9696/\": dial tcp 10.217.0.173:9696: connect: connection refused" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.867320 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.867360 4672 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.867372 4672 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.867384 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.867396 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bk6bj\" (UniqueName: \"kubernetes.io/projected/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-kube-api-access-bk6bj\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.867407 4672 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 
17 16:24:12 crc kubenswrapper[4672]: I0217 16:24:12.867417 4672 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46707458-3c2e-4f29-bda9-dd5ebc8b60cb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:12 crc kubenswrapper[4672]: E0217 16:24:12.881168 4672 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6ad1f8c_ef18_4bd8_ac43_b8f1151277f6.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb223fa0_5bed_4291_bc2d_3e1f6c90e6f2.slice/crio-923bfe0ff7b74505e673b35dec55ff8b807af5db0da4ef4593a1f23f433cd321\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb223fa0_5bed_4291_bc2d_3e1f6c90e6f2.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6ad1f8c_ef18_4bd8_ac43_b8f1151277f6.slice/crio-4fede41a6b3d442704cc0b64a71cfcde9ecee5251694f4b5e0c64343367e5adb\": RecentStats: unable to find data in memory cache]" Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.155653 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-45n2r"] Feb 17 16:24:13 crc kubenswrapper[4672]: W0217 16:24:13.163408 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f4819e1_9f5d_4b90_9a97_97c8ac76cc77.slice/crio-05da2ffe62044ee48d5f7c578a3db8a5f0194a8d09c6f3d6bb00962dceebdc3c WatchSource:0}: Error finding container 05da2ffe62044ee48d5f7c578a3db8a5f0194a8d09c6f3d6bb00962dceebdc3c: Status 404 returned error can't find the container with id 05da2ffe62044ee48d5f7c578a3db8a5f0194a8d09c6f3d6bb00962dceebdc3c Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 
16:24:13.225911 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.374360 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46707458-3c2e-4f29-bda9-dd5ebc8b60cb","Type":"ContainerDied","Data":"2d5defdc7740fe73a7d51e814c1b8c52984eafa390fafd11864047877b4412c0"} Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.374713 4672 scope.go:117] "RemoveContainer" containerID="6bab3d177e51b67df76fdc579f3463bb33a49b1499ac8dcbc5471e00e63afb5e" Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.374402 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.385173 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5ff45d6f4b-l6mqf" event={"ID":"e2bd5a6e-90e9-487c-bc75-ee390f1f97c9","Type":"ContainerStarted","Data":"43313546be1c5b9917806335eda81ed270953d62655b78cef34c19527c960e8c"} Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.386066 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5ff45d6f4b-l6mqf" Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.386108 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5ff45d6f4b-l6mqf" Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.388171 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-45n2r" event={"ID":"9f4819e1-9f5d-4b90-9a97-97c8ac76cc77","Type":"ContainerStarted","Data":"05da2ffe62044ee48d5f7c578a3db8a5f0194a8d09c6f3d6bb00962dceebdc3c"} Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.390821 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78b4dc5857-f54l5" 
event={"ID":"4ddee357-262e-497b-aa02-4a2604fadc41","Type":"ContainerStarted","Data":"3f9122b707fb59f4607825e9b397c5c38ce5ffa4aba7608d3625ff117fe2d0bf"} Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.390857 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78b4dc5857-f54l5" event={"ID":"4ddee357-262e-497b-aa02-4a2604fadc41","Type":"ContainerStarted","Data":"7838bdcd830f1332755f273ac552fb56c22180e1beae879284bfbf3673138efd"} Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.390871 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78b4dc5857-f54l5" event={"ID":"4ddee357-262e-497b-aa02-4a2604fadc41","Type":"ContainerStarted","Data":"662cf161dfe948b2d3d2ecffd00528138394f716dda067462281bb8f4109b317"} Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.391971 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-78b4dc5857-f54l5" Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.408762 4672 scope.go:117] "RemoveContainer" containerID="6fb6a5a4b16113704b1f0309d5a7bfb303e1d71c6eb57e4be9ca0efebde8268e" Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.424108 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5ff45d6f4b-l6mqf" podStartSLOduration=8.424091681 podStartE2EDuration="8.424091681s" podCreationTimestamp="2026-02-17 16:24:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:24:13.407794092 +0000 UTC m=+1262.161882844" watchObservedRunningTime="2026-02-17 16:24:13.424091681 +0000 UTC m=+1262.178180413" Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.436058 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-78b4dc5857-f54l5" podStartSLOduration=2.436041246 podStartE2EDuration="2.436041246s" podCreationTimestamp="2026-02-17 16:24:11 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:24:13.432115593 +0000 UTC m=+1262.186204335" watchObservedRunningTime="2026-02-17 16:24:13.436041246 +0000 UTC m=+1262.190129988" Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.552623 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.570539 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.581918 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:24:13 crc kubenswrapper[4672]: E0217 16:24:13.582395 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46707458-3c2e-4f29-bda9-dd5ebc8b60cb" containerName="proxy-httpd" Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.582416 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="46707458-3c2e-4f29-bda9-dd5ebc8b60cb" containerName="proxy-httpd" Feb 17 16:24:13 crc kubenswrapper[4672]: E0217 16:24:13.582455 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46707458-3c2e-4f29-bda9-dd5ebc8b60cb" containerName="sg-core" Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.582463 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="46707458-3c2e-4f29-bda9-dd5ebc8b60cb" containerName="sg-core" Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.582895 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="46707458-3c2e-4f29-bda9-dd5ebc8b60cb" containerName="proxy-httpd" Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.582921 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="46707458-3c2e-4f29-bda9-dd5ebc8b60cb" containerName="sg-core" Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.584886 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.589049 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.589665 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.592135 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.710942 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/261e82ba-d901-48e9-9890-768595c3e9df-config-data\") pod \"ceilometer-0\" (UID: \"261e82ba-d901-48e9-9890-768595c3e9df\") " pod="openstack/ceilometer-0" Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.711029 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261e82ba-d901-48e9-9890-768595c3e9df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"261e82ba-d901-48e9-9890-768595c3e9df\") " pod="openstack/ceilometer-0" Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.711070 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/261e82ba-d901-48e9-9890-768595c3e9df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"261e82ba-d901-48e9-9890-768595c3e9df\") " pod="openstack/ceilometer-0" Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.711311 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/261e82ba-d901-48e9-9890-768595c3e9df-scripts\") pod \"ceilometer-0\" (UID: \"261e82ba-d901-48e9-9890-768595c3e9df\") " 
pod="openstack/ceilometer-0" Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.711402 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df6q8\" (UniqueName: \"kubernetes.io/projected/261e82ba-d901-48e9-9890-768595c3e9df-kube-api-access-df6q8\") pod \"ceilometer-0\" (UID: \"261e82ba-d901-48e9-9890-768595c3e9df\") " pod="openstack/ceilometer-0" Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.711435 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/261e82ba-d901-48e9-9890-768595c3e9df-run-httpd\") pod \"ceilometer-0\" (UID: \"261e82ba-d901-48e9-9890-768595c3e9df\") " pod="openstack/ceilometer-0" Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.711491 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/261e82ba-d901-48e9-9890-768595c3e9df-log-httpd\") pod \"ceilometer-0\" (UID: \"261e82ba-d901-48e9-9890-768595c3e9df\") " pod="openstack/ceilometer-0" Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.812883 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/261e82ba-d901-48e9-9890-768595c3e9df-config-data\") pod \"ceilometer-0\" (UID: \"261e82ba-d901-48e9-9890-768595c3e9df\") " pod="openstack/ceilometer-0" Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.812997 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261e82ba-d901-48e9-9890-768595c3e9df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"261e82ba-d901-48e9-9890-768595c3e9df\") " pod="openstack/ceilometer-0" Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.813046 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/261e82ba-d901-48e9-9890-768595c3e9df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"261e82ba-d901-48e9-9890-768595c3e9df\") " pod="openstack/ceilometer-0" Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.813114 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/261e82ba-d901-48e9-9890-768595c3e9df-scripts\") pod \"ceilometer-0\" (UID: \"261e82ba-d901-48e9-9890-768595c3e9df\") " pod="openstack/ceilometer-0" Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.813150 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df6q8\" (UniqueName: \"kubernetes.io/projected/261e82ba-d901-48e9-9890-768595c3e9df-kube-api-access-df6q8\") pod \"ceilometer-0\" (UID: \"261e82ba-d901-48e9-9890-768595c3e9df\") " pod="openstack/ceilometer-0" Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.813176 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/261e82ba-d901-48e9-9890-768595c3e9df-run-httpd\") pod \"ceilometer-0\" (UID: \"261e82ba-d901-48e9-9890-768595c3e9df\") " pod="openstack/ceilometer-0" Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.813210 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/261e82ba-d901-48e9-9890-768595c3e9df-log-httpd\") pod \"ceilometer-0\" (UID: \"261e82ba-d901-48e9-9890-768595c3e9df\") " pod="openstack/ceilometer-0" Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.814187 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/261e82ba-d901-48e9-9890-768595c3e9df-log-httpd\") pod \"ceilometer-0\" (UID: \"261e82ba-d901-48e9-9890-768595c3e9df\") " pod="openstack/ceilometer-0" Feb 17 16:24:13 crc 
kubenswrapper[4672]: I0217 16:24:13.814486 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/261e82ba-d901-48e9-9890-768595c3e9df-run-httpd\") pod \"ceilometer-0\" (UID: \"261e82ba-d901-48e9-9890-768595c3e9df\") " pod="openstack/ceilometer-0" Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.818140 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/261e82ba-d901-48e9-9890-768595c3e9df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"261e82ba-d901-48e9-9890-768595c3e9df\") " pod="openstack/ceilometer-0" Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.818828 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/261e82ba-d901-48e9-9890-768595c3e9df-scripts\") pod \"ceilometer-0\" (UID: \"261e82ba-d901-48e9-9890-768595c3e9df\") " pod="openstack/ceilometer-0" Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.819686 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/261e82ba-d901-48e9-9890-768595c3e9df-config-data\") pod \"ceilometer-0\" (UID: \"261e82ba-d901-48e9-9890-768595c3e9df\") " pod="openstack/ceilometer-0" Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.821467 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261e82ba-d901-48e9-9890-768595c3e9df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"261e82ba-d901-48e9-9890-768595c3e9df\") " pod="openstack/ceilometer-0" Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.829899 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df6q8\" (UniqueName: \"kubernetes.io/projected/261e82ba-d901-48e9-9890-768595c3e9df-kube-api-access-df6q8\") pod \"ceilometer-0\" (UID: 
\"261e82ba-d901-48e9-9890-768595c3e9df\") " pod="openstack/ceilometer-0" Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.912760 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.964881 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46707458-3c2e-4f29-bda9-dd5ebc8b60cb" path="/var/lib/kubelet/pods/46707458-3c2e-4f29-bda9-dd5ebc8b60cb/volumes" Feb 17 16:24:13 crc kubenswrapper[4672]: I0217 16:24:13.965661 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6" path="/var/lib/kubelet/pods/c6ad1f8c-ef18-4bd8-ac43-b8f1151277f6/volumes" Feb 17 16:24:14 crc kubenswrapper[4672]: I0217 16:24:14.383039 4672 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:24:14 crc kubenswrapper[4672]: I0217 16:24:14.397353 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:24:14 crc kubenswrapper[4672]: I0217 16:24:14.415991 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-45n2r" event={"ID":"9f4819e1-9f5d-4b90-9a97-97c8ac76cc77","Type":"ContainerStarted","Data":"fe2741f0ffcaa8b2beeaef5d27ce0186dfdbe811b0f5e893708f048dbc9d5d99"} Feb 17 16:24:14 crc kubenswrapper[4672]: I0217 16:24:14.421346 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"261e82ba-d901-48e9-9890-768595c3e9df","Type":"ContainerStarted","Data":"5e405dcc8193a94eb51442f9ee3b1638d6fb4e55cafec7184caf4cc99e7e71ee"} Feb 17 16:24:14 crc kubenswrapper[4672]: I0217 16:24:14.456058 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-storageinit-45n2r" podStartSLOduration=2.456035786 podStartE2EDuration="2.456035786s" podCreationTimestamp="2026-02-17 16:24:12 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:24:14.439316435 +0000 UTC m=+1263.193405177" watchObservedRunningTime="2026-02-17 16:24:14.456035786 +0000 UTC m=+1263.210124528" Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.436642 4672 generic.go:334] "Generic (PLEG): container finished" podID="9f4819e1-9f5d-4b90-9a97-97c8ac76cc77" containerID="fe2741f0ffcaa8b2beeaef5d27ce0186dfdbe811b0f5e893708f048dbc9d5d99" exitCode=0 Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.436695 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-45n2r" event={"ID":"9f4819e1-9f5d-4b90-9a97-97c8ac76cc77","Type":"ContainerDied","Data":"fe2741f0ffcaa8b2beeaef5d27ce0186dfdbe811b0f5e893708f048dbc9d5d99"} Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.440264 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6fdb669fcc-nckw2" Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.441498 4672 generic.go:334] "Generic (PLEG): container finished" podID="4ec72261-e568-4e6d-83e7-aee39c008aab" containerID="0154bd3b20823bfae5abe9a8d2be4113635ea2b75020d704b5e87c5b4dda3d5b" exitCode=0 Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.441620 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fdb669fcc-nckw2" event={"ID":"4ec72261-e568-4e6d-83e7-aee39c008aab","Type":"ContainerDied","Data":"0154bd3b20823bfae5abe9a8d2be4113635ea2b75020d704b5e87c5b4dda3d5b"} Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.441787 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fdb669fcc-nckw2" event={"ID":"4ec72261-e568-4e6d-83e7-aee39c008aab","Type":"ContainerDied","Data":"65df6360f4f34229e2dbc12ac319a1b0e7a26e3a8668d97175d0226e6983e55b"} Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.441825 4672 scope.go:117] "RemoveContainer" 
containerID="6b9b058ffb60c57f7cf20d1ae869e53314675588f1f076a940daf371457e0622" Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.453861 4672 generic.go:334] "Generic (PLEG): container finished" podID="00e5bb32-954a-444f-b6d5-74e4c519d0c1" containerID="4abc94d493e2c27214b2484b2cb764aa476dddfbb0b6c883f8b6f849067b0cf0" exitCode=0 Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.455774 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"00e5bb32-954a-444f-b6d5-74e4c519d0c1","Type":"ContainerDied","Data":"4abc94d493e2c27214b2484b2cb764aa476dddfbb0b6c883f8b6f849067b0cf0"} Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.508663 4672 scope.go:117] "RemoveContainer" containerID="0154bd3b20823bfae5abe9a8d2be4113635ea2b75020d704b5e87c5b4dda3d5b" Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.535238 4672 scope.go:117] "RemoveContainer" containerID="6b9b058ffb60c57f7cf20d1ae869e53314675588f1f076a940daf371457e0622" Feb 17 16:24:15 crc kubenswrapper[4672]: E0217 16:24:15.535763 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b9b058ffb60c57f7cf20d1ae869e53314675588f1f076a940daf371457e0622\": container with ID starting with 6b9b058ffb60c57f7cf20d1ae869e53314675588f1f076a940daf371457e0622 not found: ID does not exist" containerID="6b9b058ffb60c57f7cf20d1ae869e53314675588f1f076a940daf371457e0622" Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.535822 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b9b058ffb60c57f7cf20d1ae869e53314675588f1f076a940daf371457e0622"} err="failed to get container status \"6b9b058ffb60c57f7cf20d1ae869e53314675588f1f076a940daf371457e0622\": rpc error: code = NotFound desc = could not find container \"6b9b058ffb60c57f7cf20d1ae869e53314675588f1f076a940daf371457e0622\": container with ID starting with 
6b9b058ffb60c57f7cf20d1ae869e53314675588f1f076a940daf371457e0622 not found: ID does not exist" Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.535854 4672 scope.go:117] "RemoveContainer" containerID="0154bd3b20823bfae5abe9a8d2be4113635ea2b75020d704b5e87c5b4dda3d5b" Feb 17 16:24:15 crc kubenswrapper[4672]: E0217 16:24:15.536263 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0154bd3b20823bfae5abe9a8d2be4113635ea2b75020d704b5e87c5b4dda3d5b\": container with ID starting with 0154bd3b20823bfae5abe9a8d2be4113635ea2b75020d704b5e87c5b4dda3d5b not found: ID does not exist" containerID="0154bd3b20823bfae5abe9a8d2be4113635ea2b75020d704b5e87c5b4dda3d5b" Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.536310 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0154bd3b20823bfae5abe9a8d2be4113635ea2b75020d704b5e87c5b4dda3d5b"} err="failed to get container status \"0154bd3b20823bfae5abe9a8d2be4113635ea2b75020d704b5e87c5b4dda3d5b\": rpc error: code = NotFound desc = could not find container \"0154bd3b20823bfae5abe9a8d2be4113635ea2b75020d704b5e87c5b4dda3d5b\": container with ID starting with 0154bd3b20823bfae5abe9a8d2be4113635ea2b75020d704b5e87c5b4dda3d5b not found: ID does not exist" Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.552589 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ec72261-e568-4e6d-83e7-aee39c008aab-ovndb-tls-certs\") pod \"4ec72261-e568-4e6d-83e7-aee39c008aab\" (UID: \"4ec72261-e568-4e6d-83e7-aee39c008aab\") " Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.552684 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ec72261-e568-4e6d-83e7-aee39c008aab-public-tls-certs\") pod \"4ec72261-e568-4e6d-83e7-aee39c008aab\" (UID: 
\"4ec72261-e568-4e6d-83e7-aee39c008aab\") "
Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.552759 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lp86x\" (UniqueName: \"kubernetes.io/projected/4ec72261-e568-4e6d-83e7-aee39c008aab-kube-api-access-lp86x\") pod \"4ec72261-e568-4e6d-83e7-aee39c008aab\" (UID: \"4ec72261-e568-4e6d-83e7-aee39c008aab\") "
Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.552822 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4ec72261-e568-4e6d-83e7-aee39c008aab-httpd-config\") pod \"4ec72261-e568-4e6d-83e7-aee39c008aab\" (UID: \"4ec72261-e568-4e6d-83e7-aee39c008aab\") "
Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.552888 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ec72261-e568-4e6d-83e7-aee39c008aab-internal-tls-certs\") pod \"4ec72261-e568-4e6d-83e7-aee39c008aab\" (UID: \"4ec72261-e568-4e6d-83e7-aee39c008aab\") "
Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.552932 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4ec72261-e568-4e6d-83e7-aee39c008aab-config\") pod \"4ec72261-e568-4e6d-83e7-aee39c008aab\" (UID: \"4ec72261-e568-4e6d-83e7-aee39c008aab\") "
Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.552960 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ec72261-e568-4e6d-83e7-aee39c008aab-combined-ca-bundle\") pod \"4ec72261-e568-4e6d-83e7-aee39c008aab\" (UID: \"4ec72261-e568-4e6d-83e7-aee39c008aab\") "
Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.562182 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ec72261-e568-4e6d-83e7-aee39c008aab-kube-api-access-lp86x" (OuterVolumeSpecName: "kube-api-access-lp86x") pod "4ec72261-e568-4e6d-83e7-aee39c008aab" (UID: "4ec72261-e568-4e6d-83e7-aee39c008aab"). InnerVolumeSpecName "kube-api-access-lp86x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.564443 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ec72261-e568-4e6d-83e7-aee39c008aab-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "4ec72261-e568-4e6d-83e7-aee39c008aab" (UID: "4ec72261-e568-4e6d-83e7-aee39c008aab"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.645773 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ec72261-e568-4e6d-83e7-aee39c008aab-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4ec72261-e568-4e6d-83e7-aee39c008aab" (UID: "4ec72261-e568-4e6d-83e7-aee39c008aab"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.657162 4672 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4ec72261-e568-4e6d-83e7-aee39c008aab-httpd-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.657199 4672 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ec72261-e568-4e6d-83e7-aee39c008aab-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.657212 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lp86x\" (UniqueName: \"kubernetes.io/projected/4ec72261-e568-4e6d-83e7-aee39c008aab-kube-api-access-lp86x\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.662059 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ec72261-e568-4e6d-83e7-aee39c008aab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ec72261-e568-4e6d-83e7-aee39c008aab" (UID: "4ec72261-e568-4e6d-83e7-aee39c008aab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.662531 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ec72261-e568-4e6d-83e7-aee39c008aab-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4ec72261-e568-4e6d-83e7-aee39c008aab" (UID: "4ec72261-e568-4e6d-83e7-aee39c008aab"). InnerVolumeSpecName "public-tls-certs".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.668788 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ec72261-e568-4e6d-83e7-aee39c008aab-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "4ec72261-e568-4e6d-83e7-aee39c008aab" (UID: "4ec72261-e568-4e6d-83e7-aee39c008aab"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.682429 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ec72261-e568-4e6d-83e7-aee39c008aab-config" (OuterVolumeSpecName: "config") pod "4ec72261-e568-4e6d-83e7-aee39c008aab" (UID: "4ec72261-e568-4e6d-83e7-aee39c008aab"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.705002 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.769425 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00e5bb32-954a-444f-b6d5-74e4c519d0c1-scripts\") pod \"00e5bb32-954a-444f-b6d5-74e4c519d0c1\" (UID: \"00e5bb32-954a-444f-b6d5-74e4c519d0c1\") "
Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.769827 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/00e5bb32-954a-444f-b6d5-74e4c519d0c1-config-data-custom\") pod \"00e5bb32-954a-444f-b6d5-74e4c519d0c1\" (UID: \"00e5bb32-954a-444f-b6d5-74e4c519d0c1\") "
Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.769874 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/00e5bb32-954a-444f-b6d5-74e4c519d0c1-etc-machine-id\") pod \"00e5bb32-954a-444f-b6d5-74e4c519d0c1\" (UID: \"00e5bb32-954a-444f-b6d5-74e4c519d0c1\") "
Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.769911 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00e5bb32-954a-444f-b6d5-74e4c519d0c1-config-data\") pod \"00e5bb32-954a-444f-b6d5-74e4c519d0c1\" (UID: \"00e5bb32-954a-444f-b6d5-74e4c519d0c1\") "
Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.769968 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ttq2\" (UniqueName: \"kubernetes.io/projected/00e5bb32-954a-444f-b6d5-74e4c519d0c1-kube-api-access-9ttq2\") pod \"00e5bb32-954a-444f-b6d5-74e4c519d0c1\" (UID: \"00e5bb32-954a-444f-b6d5-74e4c519d0c1\") "
Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.769988 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00e5bb32-954a-444f-b6d5-74e4c519d0c1-combined-ca-bundle\") pod \"00e5bb32-954a-444f-b6d5-74e4c519d0c1\" (UID: \"00e5bb32-954a-444f-b6d5-74e4c519d0c1\") "
Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.770406 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4ec72261-e568-4e6d-83e7-aee39c008aab-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.770424 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ec72261-e568-4e6d-83e7-aee39c008aab-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.770434 4672 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ec72261-e568-4e6d-83e7-aee39c008aab-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.770443 4672 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ec72261-e568-4e6d-83e7-aee39c008aab-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.771752 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00e5bb32-954a-444f-b6d5-74e4c519d0c1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "00e5bb32-954a-444f-b6d5-74e4c519d0c1" (UID: "00e5bb32-954a-444f-b6d5-74e4c519d0c1"). InnerVolumeSpecName "etc-machine-id".
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.772733 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00e5bb32-954a-444f-b6d5-74e4c519d0c1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "00e5bb32-954a-444f-b6d5-74e4c519d0c1" (UID: "00e5bb32-954a-444f-b6d5-74e4c519d0c1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.773695 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00e5bb32-954a-444f-b6d5-74e4c519d0c1-scripts" (OuterVolumeSpecName: "scripts") pod "00e5bb32-954a-444f-b6d5-74e4c519d0c1" (UID: "00e5bb32-954a-444f-b6d5-74e4c519d0c1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.787156 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00e5bb32-954a-444f-b6d5-74e4c519d0c1-kube-api-access-9ttq2" (OuterVolumeSpecName: "kube-api-access-9ttq2") pod "00e5bb32-954a-444f-b6d5-74e4c519d0c1" (UID: "00e5bb32-954a-444f-b6d5-74e4c519d0c1"). InnerVolumeSpecName "kube-api-access-9ttq2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.822777 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00e5bb32-954a-444f-b6d5-74e4c519d0c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "00e5bb32-954a-444f-b6d5-74e4c519d0c1" (UID: "00e5bb32-954a-444f-b6d5-74e4c519d0c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.872296 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ttq2\" (UniqueName: \"kubernetes.io/projected/00e5bb32-954a-444f-b6d5-74e4c519d0c1-kube-api-access-9ttq2\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.872353 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00e5bb32-954a-444f-b6d5-74e4c519d0c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.872372 4672 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00e5bb32-954a-444f-b6d5-74e4c519d0c1-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.872391 4672 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/00e5bb32-954a-444f-b6d5-74e4c519d0c1-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.872410 4672 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/00e5bb32-954a-444f-b6d5-74e4c519d0c1-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.926156 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00e5bb32-954a-444f-b6d5-74e4c519d0c1-config-data" (OuterVolumeSpecName: "config-data") pod "00e5bb32-954a-444f-b6d5-74e4c519d0c1" (UID: "00e5bb32-954a-444f-b6d5-74e4c519d0c1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:24:15 crc kubenswrapper[4672]: I0217 16:24:15.974043 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00e5bb32-954a-444f-b6d5-74e4c519d0c1-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:16 crc kubenswrapper[4672]: I0217 16:24:16.472569 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6fdb669fcc-nckw2"
Feb 17 16:24:16 crc kubenswrapper[4672]: I0217 16:24:16.477966 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"00e5bb32-954a-444f-b6d5-74e4c519d0c1","Type":"ContainerDied","Data":"8f8efdf21ba7a73b13577ce02a6e2af615bace8e64d1dde9800ae96c271d995f"}
Feb 17 16:24:16 crc kubenswrapper[4672]: I0217 16:24:16.478042 4672 scope.go:117] "RemoveContainer" containerID="08e071a47eb7d41846924aa64ef729d8aced44bb499e6f9f1f7d5f854fc8530b"
Feb 17 16:24:16 crc kubenswrapper[4672]: I0217 16:24:16.478237 4672 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 17 16:24:16 crc kubenswrapper[4672]: I0217 16:24:16.484196 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"261e82ba-d901-48e9-9890-768595c3e9df","Type":"ContainerStarted","Data":"2ae893b986748b9f358e2e0f5f2eda2e2e881935f9b3538bd9ccd113b9ae6f5d"}
Feb 17 16:24:16 crc kubenswrapper[4672]: I0217 16:24:16.530865 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 17 16:24:16 crc kubenswrapper[4672]: I0217 16:24:16.548432 4672 scope.go:117] "RemoveContainer" containerID="4abc94d493e2c27214b2484b2cb764aa476dddfbb0b6c883f8b6f849067b0cf0"
Feb 17 16:24:16 crc kubenswrapper[4672]: I0217 16:24:16.552580 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 17 16:24:16 crc kubenswrapper[4672]: I0217 16:24:16.591573 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6fdb669fcc-nckw2"]
Feb 17 16:24:16 crc kubenswrapper[4672]: I0217 16:24:16.608578 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 17 16:24:16 crc kubenswrapper[4672]: E0217 16:24:16.609019 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00e5bb32-954a-444f-b6d5-74e4c519d0c1" containerName="cinder-scheduler"
Feb 17 16:24:16 crc kubenswrapper[4672]: I0217 16:24:16.609037 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="00e5bb32-954a-444f-b6d5-74e4c519d0c1" containerName="cinder-scheduler"
Feb 17 16:24:16 crc kubenswrapper[4672]: E0217 16:24:16.609065 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00e5bb32-954a-444f-b6d5-74e4c519d0c1" containerName="probe"
Feb 17 16:24:16 crc kubenswrapper[4672]: I0217 16:24:16.609071 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="00e5bb32-954a-444f-b6d5-74e4c519d0c1" containerName="probe"
Feb 17 16:24:16 crc kubenswrapper[4672]: E0217 16:24:16.609080 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ec72261-e568-4e6d-83e7-aee39c008aab" containerName="neutron-api"
Feb 17 16:24:16 crc kubenswrapper[4672]: I0217 16:24:16.609087 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ec72261-e568-4e6d-83e7-aee39c008aab" containerName="neutron-api"
Feb 17 16:24:16 crc kubenswrapper[4672]: E0217 16:24:16.609099 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ec72261-e568-4e6d-83e7-aee39c008aab" containerName="neutron-httpd"
Feb 17 16:24:16 crc kubenswrapper[4672]: I0217 16:24:16.609107 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ec72261-e568-4e6d-83e7-aee39c008aab" containerName="neutron-httpd"
Feb 17 16:24:16 crc kubenswrapper[4672]: I0217 16:24:16.609297 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ec72261-e568-4e6d-83e7-aee39c008aab" containerName="neutron-httpd"
Feb 17 16:24:16 crc kubenswrapper[4672]: I0217 16:24:16.609317 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="00e5bb32-954a-444f-b6d5-74e4c519d0c1" containerName="probe"
Feb 17 16:24:16 crc kubenswrapper[4672]: I0217 16:24:16.609331 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ec72261-e568-4e6d-83e7-aee39c008aab" containerName="neutron-api"
Feb 17 16:24:16 crc kubenswrapper[4672]: I0217 16:24:16.609348 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="00e5bb32-954a-444f-b6d5-74e4c519d0c1" containerName="cinder-scheduler"
Feb 17 16:24:16 crc kubenswrapper[4672]: I0217 16:24:16.610379 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 17 16:24:16 crc kubenswrapper[4672]: I0217 16:24:16.612099 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Feb 17 16:24:16 crc kubenswrapper[4672]: I0217 16:24:16.617538 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6fdb669fcc-nckw2"]
Feb 17 16:24:16 crc kubenswrapper[4672]: I0217 16:24:16.626954 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 17 16:24:16 crc kubenswrapper[4672]: I0217 16:24:16.686782 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61689860-63f3-424a-92e6-b5f0fd8d17b3-config-data\") pod \"cinder-scheduler-0\" (UID: \"61689860-63f3-424a-92e6-b5f0fd8d17b3\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:24:16 crc kubenswrapper[4672]: I0217 16:24:16.686851 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/61689860-63f3-424a-92e6-b5f0fd8d17b3-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"61689860-63f3-424a-92e6-b5f0fd8d17b3\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:24:16 crc kubenswrapper[4672]: I0217 16:24:16.686892 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89x6k\" (UniqueName: \"kubernetes.io/projected/61689860-63f3-424a-92e6-b5f0fd8d17b3-kube-api-access-89x6k\") pod \"cinder-scheduler-0\" (UID: \"61689860-63f3-424a-92e6-b5f0fd8d17b3\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:24:16 crc kubenswrapper[4672]: I0217 16:24:16.687146 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/61689860-63f3-424a-92e6-b5f0fd8d17b3-config-data-custom\")
pod \"cinder-scheduler-0\" (UID: \"61689860-63f3-424a-92e6-b5f0fd8d17b3\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:24:16 crc kubenswrapper[4672]: I0217 16:24:16.687369 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61689860-63f3-424a-92e6-b5f0fd8d17b3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"61689860-63f3-424a-92e6-b5f0fd8d17b3\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:24:16 crc kubenswrapper[4672]: I0217 16:24:16.687406 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61689860-63f3-424a-92e6-b5f0fd8d17b3-scripts\") pod \"cinder-scheduler-0\" (UID: \"61689860-63f3-424a-92e6-b5f0fd8d17b3\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:24:16 crc kubenswrapper[4672]: I0217 16:24:16.795052 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61689860-63f3-424a-92e6-b5f0fd8d17b3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"61689860-63f3-424a-92e6-b5f0fd8d17b3\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:24:16 crc kubenswrapper[4672]: I0217 16:24:16.795110 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61689860-63f3-424a-92e6-b5f0fd8d17b3-scripts\") pod \"cinder-scheduler-0\" (UID: \"61689860-63f3-424a-92e6-b5f0fd8d17b3\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:24:16 crc kubenswrapper[4672]: I0217 16:24:16.795146 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61689860-63f3-424a-92e6-b5f0fd8d17b3-config-data\") pod \"cinder-scheduler-0\" (UID: \"61689860-63f3-424a-92e6-b5f0fd8d17b3\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:24:16 crc kubenswrapper[4672]: I0217 16:24:16.795195 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/61689860-63f3-424a-92e6-b5f0fd8d17b3-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"61689860-63f3-424a-92e6-b5f0fd8d17b3\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:24:16 crc kubenswrapper[4672]: I0217 16:24:16.795236 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89x6k\" (UniqueName: \"kubernetes.io/projected/61689860-63f3-424a-92e6-b5f0fd8d17b3-kube-api-access-89x6k\") pod \"cinder-scheduler-0\" (UID: \"61689860-63f3-424a-92e6-b5f0fd8d17b3\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:24:16 crc kubenswrapper[4672]: I0217 16:24:16.795287 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/61689860-63f3-424a-92e6-b5f0fd8d17b3-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"61689860-63f3-424a-92e6-b5f0fd8d17b3\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:24:16 crc kubenswrapper[4672]: I0217 16:24:16.796177 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/61689860-63f3-424a-92e6-b5f0fd8d17b3-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"61689860-63f3-424a-92e6-b5f0fd8d17b3\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:24:16 crc kubenswrapper[4672]: I0217 16:24:16.802210 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61689860-63f3-424a-92e6-b5f0fd8d17b3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"61689860-63f3-424a-92e6-b5f0fd8d17b3\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:24:16 crc kubenswrapper[4672]: I0217 16:24:16.803468 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61689860-63f3-424a-92e6-b5f0fd8d17b3-scripts\") pod \"cinder-scheduler-0\" (UID: \"61689860-63f3-424a-92e6-b5f0fd8d17b3\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:24:16 crc kubenswrapper[4672]: I0217 16:24:16.806813 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61689860-63f3-424a-92e6-b5f0fd8d17b3-config-data\") pod \"cinder-scheduler-0\" (UID: \"61689860-63f3-424a-92e6-b5f0fd8d17b3\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:24:16 crc kubenswrapper[4672]: I0217 16:24:16.813976 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/61689860-63f3-424a-92e6-b5f0fd8d17b3-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"61689860-63f3-424a-92e6-b5f0fd8d17b3\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:24:16 crc kubenswrapper[4672]: I0217 16:24:16.828496 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89x6k\" (UniqueName: \"kubernetes.io/projected/61689860-63f3-424a-92e6-b5f0fd8d17b3-kube-api-access-89x6k\") pod \"cinder-scheduler-0\" (UID: \"61689860-63f3-424a-92e6-b5f0fd8d17b3\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:24:16 crc kubenswrapper[4672]: I0217 16:24:16.929305 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.060448 4672 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/cloudkitty-storageinit-45n2r"
Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.098810 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f4819e1-9f5d-4b90-9a97-97c8ac76cc77-combined-ca-bundle\") pod \"9f4819e1-9f5d-4b90-9a97-97c8ac76cc77\" (UID: \"9f4819e1-9f5d-4b90-9a97-97c8ac76cc77\") "
Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.099058 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f4819e1-9f5d-4b90-9a97-97c8ac76cc77-config-data\") pod \"9f4819e1-9f5d-4b90-9a97-97c8ac76cc77\" (UID: \"9f4819e1-9f5d-4b90-9a97-97c8ac76cc77\") "
Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.099119 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/9f4819e1-9f5d-4b90-9a97-97c8ac76cc77-certs\") pod \"9f4819e1-9f5d-4b90-9a97-97c8ac76cc77\" (UID: \"9f4819e1-9f5d-4b90-9a97-97c8ac76cc77\") "
Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.099230 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f4819e1-9f5d-4b90-9a97-97c8ac76cc77-scripts\") pod \"9f4819e1-9f5d-4b90-9a97-97c8ac76cc77\" (UID: \"9f4819e1-9f5d-4b90-9a97-97c8ac76cc77\") "
Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.099341 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqj4z\" (UniqueName: \"kubernetes.io/projected/9f4819e1-9f5d-4b90-9a97-97c8ac76cc77-kube-api-access-lqj4z\") pod \"9f4819e1-9f5d-4b90-9a97-97c8ac76cc77\" (UID: \"9f4819e1-9f5d-4b90-9a97-97c8ac76cc77\") "
Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.104685 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f4819e1-9f5d-4b90-9a97-97c8ac76cc77-kube-api-access-lqj4z" (OuterVolumeSpecName: "kube-api-access-lqj4z") pod "9f4819e1-9f5d-4b90-9a97-97c8ac76cc77" (UID: "9f4819e1-9f5d-4b90-9a97-97c8ac76cc77"). InnerVolumeSpecName "kube-api-access-lqj4z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.107699 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f4819e1-9f5d-4b90-9a97-97c8ac76cc77-scripts" (OuterVolumeSpecName: "scripts") pod "9f4819e1-9f5d-4b90-9a97-97c8ac76cc77" (UID: "9f4819e1-9f5d-4b90-9a97-97c8ac76cc77"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.114008 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f4819e1-9f5d-4b90-9a97-97c8ac76cc77-certs" (OuterVolumeSpecName: "certs") pod "9f4819e1-9f5d-4b90-9a97-97c8ac76cc77" (UID: "9f4819e1-9f5d-4b90-9a97-97c8ac76cc77"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.127544 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f4819e1-9f5d-4b90-9a97-97c8ac76cc77-config-data" (OuterVolumeSpecName: "config-data") pod "9f4819e1-9f5d-4b90-9a97-97c8ac76cc77" (UID: "9f4819e1-9f5d-4b90-9a97-97c8ac76cc77"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.136216 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f4819e1-9f5d-4b90-9a97-97c8ac76cc77-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f4819e1-9f5d-4b90-9a97-97c8ac76cc77" (UID: "9f4819e1-9f5d-4b90-9a97-97c8ac76cc77"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.201401 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f4819e1-9f5d-4b90-9a97-97c8ac76cc77-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.201490 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f4819e1-9f5d-4b90-9a97-97c8ac76cc77-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.201503 4672 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/9f4819e1-9f5d-4b90-9a97-97c8ac76cc77-certs\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.201540 4672 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f4819e1-9f5d-4b90-9a97-97c8ac76cc77-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.201551 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqj4z\" (UniqueName: \"kubernetes.io/projected/9f4819e1-9f5d-4b90-9a97-97c8ac76cc77-kube-api-access-lqj4z\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.393977 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 17 16:24:17 crc kubenswrapper[4672]: W0217 16:24:17.400417 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61689860_63f3_424a_92e6_b5f0fd8d17b3.slice/crio-df892ec85f21360be55edd7ec1a7dff2d9ed070f15beb433645f89fba7f9f5f1 WatchSource:0}: Error finding container df892ec85f21360be55edd7ec1a7dff2d9ed070f15beb433645f89fba7f9f5f1: Status 404 returned error can't find the container with id
df892ec85f21360be55edd7ec1a7dff2d9ed070f15beb433645f89fba7f9f5f1
Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.501435 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"261e82ba-d901-48e9-9890-768595c3e9df","Type":"ContainerStarted","Data":"291bfa87a02917eb4de97c1b46645bab031f200ccf9a9bb7eb9d3ba35f0f5f06"}
Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.502722 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"61689860-63f3-424a-92e6-b5f0fd8d17b3","Type":"ContainerStarted","Data":"df892ec85f21360be55edd7ec1a7dff2d9ed070f15beb433645f89fba7f9f5f1"}
Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.503937 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-45n2r" event={"ID":"9f4819e1-9f5d-4b90-9a97-97c8ac76cc77","Type":"ContainerDied","Data":"05da2ffe62044ee48d5f7c578a3db8a5f0194a8d09c6f3d6bb00962dceebdc3c"}
Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.503964 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05da2ffe62044ee48d5f7c578a3db8a5f0194a8d09c6f3d6bb00962dceebdc3c"
Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.504013 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-45n2r"
Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.655954 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"]
Feb 17 16:24:17 crc kubenswrapper[4672]: E0217 16:24:17.656336 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f4819e1-9f5d-4b90-9a97-97c8ac76cc77" containerName="cloudkitty-storageinit"
Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.656352 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f4819e1-9f5d-4b90-9a97-97c8ac76cc77" containerName="cloudkitty-storageinit"
Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.678328 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f4819e1-9f5d-4b90-9a97-97c8ac76cc77" containerName="cloudkitty-storageinit"
Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.679139 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0"
Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.680561 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"]
Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.683921 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts"
Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.684639 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data"
Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.684764 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data"
Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.684867 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal"
Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.684963 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-qptlj"
Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.714102 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57cae8a9-696d-48a0-9420-2a2d7ed2639f-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"57cae8a9-696d-48a0-9420-2a2d7ed2639f\") " pod="openstack/cloudkitty-proc-0"
Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.714148 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57cae8a9-696d-48a0-9420-2a2d7ed2639f-scripts\") pod \"cloudkitty-proc-0\" (UID: \"57cae8a9-696d-48a0-9420-2a2d7ed2639f\") " pod="openstack/cloudkitty-proc-0"
Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.714171 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/57cae8a9-696d-48a0-9420-2a2d7ed2639f-certs\") pod \"cloudkitty-proc-0\" (UID: \"57cae8a9-696d-48a0-9420-2a2d7ed2639f\") " pod="openstack/cloudkitty-proc-0"
Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.714229 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4htvg\" (UniqueName: \"kubernetes.io/projected/57cae8a9-696d-48a0-9420-2a2d7ed2639f-kube-api-access-4htvg\") pod \"cloudkitty-proc-0\" (UID: \"57cae8a9-696d-48a0-9420-2a2d7ed2639f\") " pod="openstack/cloudkitty-proc-0"
Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.714301 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57cae8a9-696d-48a0-9420-2a2d7ed2639f-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"57cae8a9-696d-48a0-9420-2a2d7ed2639f\") " pod="openstack/cloudkitty-proc-0"
Feb 17 16:24:17 crc
kubenswrapper[4672]: I0217 16:24:17.714317 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57cae8a9-696d-48a0-9420-2a2d7ed2639f-config-data\") pod \"cloudkitty-proc-0\" (UID: \"57cae8a9-696d-48a0-9420-2a2d7ed2639f\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.763547 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76d4d7c9b7-6prnv"] Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.765798 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76d4d7c9b7-6prnv" Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.789345 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76d4d7c9b7-6prnv"] Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.827548 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57cae8a9-696d-48a0-9420-2a2d7ed2639f-config-data\") pod \"cloudkitty-proc-0\" (UID: \"57cae8a9-696d-48a0-9420-2a2d7ed2639f\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.827626 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a2e45a5-f1bd-4f5b-82e6-f98168ece99a-dns-svc\") pod \"dnsmasq-dns-76d4d7c9b7-6prnv\" (UID: \"8a2e45a5-f1bd-4f5b-82e6-f98168ece99a\") " pod="openstack/dnsmasq-dns-76d4d7c9b7-6prnv" Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.827652 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a2e45a5-f1bd-4f5b-82e6-f98168ece99a-config\") pod \"dnsmasq-dns-76d4d7c9b7-6prnv\" (UID: \"8a2e45a5-f1bd-4f5b-82e6-f98168ece99a\") " 
pod="openstack/dnsmasq-dns-76d4d7c9b7-6prnv" Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.827670 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5gv9\" (UniqueName: \"kubernetes.io/projected/8a2e45a5-f1bd-4f5b-82e6-f98168ece99a-kube-api-access-t5gv9\") pod \"dnsmasq-dns-76d4d7c9b7-6prnv\" (UID: \"8a2e45a5-f1bd-4f5b-82e6-f98168ece99a\") " pod="openstack/dnsmasq-dns-76d4d7c9b7-6prnv" Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.827701 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57cae8a9-696d-48a0-9420-2a2d7ed2639f-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"57cae8a9-696d-48a0-9420-2a2d7ed2639f\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.827718 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57cae8a9-696d-48a0-9420-2a2d7ed2639f-scripts\") pod \"cloudkitty-proc-0\" (UID: \"57cae8a9-696d-48a0-9420-2a2d7ed2639f\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.827738 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a2e45a5-f1bd-4f5b-82e6-f98168ece99a-ovsdbserver-sb\") pod \"dnsmasq-dns-76d4d7c9b7-6prnv\" (UID: \"8a2e45a5-f1bd-4f5b-82e6-f98168ece99a\") " pod="openstack/dnsmasq-dns-76d4d7c9b7-6prnv" Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.827757 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a2e45a5-f1bd-4f5b-82e6-f98168ece99a-ovsdbserver-nb\") pod \"dnsmasq-dns-76d4d7c9b7-6prnv\" (UID: \"8a2e45a5-f1bd-4f5b-82e6-f98168ece99a\") " 
pod="openstack/dnsmasq-dns-76d4d7c9b7-6prnv" Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.827778 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/57cae8a9-696d-48a0-9420-2a2d7ed2639f-certs\") pod \"cloudkitty-proc-0\" (UID: \"57cae8a9-696d-48a0-9420-2a2d7ed2639f\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.827802 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8a2e45a5-f1bd-4f5b-82e6-f98168ece99a-dns-swift-storage-0\") pod \"dnsmasq-dns-76d4d7c9b7-6prnv\" (UID: \"8a2e45a5-f1bd-4f5b-82e6-f98168ece99a\") " pod="openstack/dnsmasq-dns-76d4d7c9b7-6prnv" Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.827851 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4htvg\" (UniqueName: \"kubernetes.io/projected/57cae8a9-696d-48a0-9420-2a2d7ed2639f-kube-api-access-4htvg\") pod \"cloudkitty-proc-0\" (UID: \"57cae8a9-696d-48a0-9420-2a2d7ed2639f\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.827923 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57cae8a9-696d-48a0-9420-2a2d7ed2639f-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"57cae8a9-696d-48a0-9420-2a2d7ed2639f\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.838863 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/57cae8a9-696d-48a0-9420-2a2d7ed2639f-certs\") pod \"cloudkitty-proc-0\" (UID: \"57cae8a9-696d-48a0-9420-2a2d7ed2639f\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.839161 4672 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57cae8a9-696d-48a0-9420-2a2d7ed2639f-scripts\") pod \"cloudkitty-proc-0\" (UID: \"57cae8a9-696d-48a0-9420-2a2d7ed2639f\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.839552 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57cae8a9-696d-48a0-9420-2a2d7ed2639f-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"57cae8a9-696d-48a0-9420-2a2d7ed2639f\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.843469 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57cae8a9-696d-48a0-9420-2a2d7ed2639f-config-data\") pod \"cloudkitty-proc-0\" (UID: \"57cae8a9-696d-48a0-9420-2a2d7ed2639f\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.864096 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57cae8a9-696d-48a0-9420-2a2d7ed2639f-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"57cae8a9-696d-48a0-9420-2a2d7ed2639f\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.882054 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4htvg\" (UniqueName: \"kubernetes.io/projected/57cae8a9-696d-48a0-9420-2a2d7ed2639f-kube-api-access-4htvg\") pod \"cloudkitty-proc-0\" (UID: \"57cae8a9-696d-48a0-9420-2a2d7ed2639f\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.930187 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a2e45a5-f1bd-4f5b-82e6-f98168ece99a-config\") pod 
\"dnsmasq-dns-76d4d7c9b7-6prnv\" (UID: \"8a2e45a5-f1bd-4f5b-82e6-f98168ece99a\") " pod="openstack/dnsmasq-dns-76d4d7c9b7-6prnv" Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.930229 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5gv9\" (UniqueName: \"kubernetes.io/projected/8a2e45a5-f1bd-4f5b-82e6-f98168ece99a-kube-api-access-t5gv9\") pod \"dnsmasq-dns-76d4d7c9b7-6prnv\" (UID: \"8a2e45a5-f1bd-4f5b-82e6-f98168ece99a\") " pod="openstack/dnsmasq-dns-76d4d7c9b7-6prnv" Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.930273 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a2e45a5-f1bd-4f5b-82e6-f98168ece99a-ovsdbserver-sb\") pod \"dnsmasq-dns-76d4d7c9b7-6prnv\" (UID: \"8a2e45a5-f1bd-4f5b-82e6-f98168ece99a\") " pod="openstack/dnsmasq-dns-76d4d7c9b7-6prnv" Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.930289 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a2e45a5-f1bd-4f5b-82e6-f98168ece99a-ovsdbserver-nb\") pod \"dnsmasq-dns-76d4d7c9b7-6prnv\" (UID: \"8a2e45a5-f1bd-4f5b-82e6-f98168ece99a\") " pod="openstack/dnsmasq-dns-76d4d7c9b7-6prnv" Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.930315 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8a2e45a5-f1bd-4f5b-82e6-f98168ece99a-dns-swift-storage-0\") pod \"dnsmasq-dns-76d4d7c9b7-6prnv\" (UID: \"8a2e45a5-f1bd-4f5b-82e6-f98168ece99a\") " pod="openstack/dnsmasq-dns-76d4d7c9b7-6prnv" Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.930442 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a2e45a5-f1bd-4f5b-82e6-f98168ece99a-dns-svc\") pod \"dnsmasq-dns-76d4d7c9b7-6prnv\" (UID: 
\"8a2e45a5-f1bd-4f5b-82e6-f98168ece99a\") " pod="openstack/dnsmasq-dns-76d4d7c9b7-6prnv" Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.931320 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a2e45a5-f1bd-4f5b-82e6-f98168ece99a-dns-svc\") pod \"dnsmasq-dns-76d4d7c9b7-6prnv\" (UID: \"8a2e45a5-f1bd-4f5b-82e6-f98168ece99a\") " pod="openstack/dnsmasq-dns-76d4d7c9b7-6prnv" Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.931913 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a2e45a5-f1bd-4f5b-82e6-f98168ece99a-config\") pod \"dnsmasq-dns-76d4d7c9b7-6prnv\" (UID: \"8a2e45a5-f1bd-4f5b-82e6-f98168ece99a\") " pod="openstack/dnsmasq-dns-76d4d7c9b7-6prnv" Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.932688 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a2e45a5-f1bd-4f5b-82e6-f98168ece99a-ovsdbserver-sb\") pod \"dnsmasq-dns-76d4d7c9b7-6prnv\" (UID: \"8a2e45a5-f1bd-4f5b-82e6-f98168ece99a\") " pod="openstack/dnsmasq-dns-76d4d7c9b7-6prnv" Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.933235 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a2e45a5-f1bd-4f5b-82e6-f98168ece99a-ovsdbserver-nb\") pod \"dnsmasq-dns-76d4d7c9b7-6prnv\" (UID: \"8a2e45a5-f1bd-4f5b-82e6-f98168ece99a\") " pod="openstack/dnsmasq-dns-76d4d7c9b7-6prnv" Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.933783 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8a2e45a5-f1bd-4f5b-82e6-f98168ece99a-dns-swift-storage-0\") pod \"dnsmasq-dns-76d4d7c9b7-6prnv\" (UID: \"8a2e45a5-f1bd-4f5b-82e6-f98168ece99a\") " pod="openstack/dnsmasq-dns-76d4d7c9b7-6prnv" Feb 17 16:24:17 crc 
kubenswrapper[4672]: I0217 16:24:17.950599 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5gv9\" (UniqueName: \"kubernetes.io/projected/8a2e45a5-f1bd-4f5b-82e6-f98168ece99a-kube-api-access-t5gv9\") pod \"dnsmasq-dns-76d4d7c9b7-6prnv\" (UID: \"8a2e45a5-f1bd-4f5b-82e6-f98168ece99a\") " pod="openstack/dnsmasq-dns-76d4d7c9b7-6prnv" Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.998691 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00e5bb32-954a-444f-b6d5-74e4c519d0c1" path="/var/lib/kubelet/pods/00e5bb32-954a-444f-b6d5-74e4c519d0c1/volumes" Feb 17 16:24:17 crc kubenswrapper[4672]: I0217 16:24:17.999403 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ec72261-e568-4e6d-83e7-aee39c008aab" path="/var/lib/kubelet/pods/4ec72261-e568-4e6d-83e7-aee39c008aab/volumes" Feb 17 16:24:18 crc kubenswrapper[4672]: I0217 16:24:18.012933 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 17 16:24:18 crc kubenswrapper[4672]: I0217 16:24:18.094087 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76d4d7c9b7-6prnv" Feb 17 16:24:18 crc kubenswrapper[4672]: I0217 16:24:18.098492 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"] Feb 17 16:24:18 crc kubenswrapper[4672]: I0217 16:24:18.118996 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 17 16:24:18 crc kubenswrapper[4672]: I0217 16:24:18.133439 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data" Feb 17 16:24:18 crc kubenswrapper[4672]: I0217 16:24:18.138898 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20df91b1-1934-4461-a13b-c9461a066562-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"20df91b1-1934-4461-a13b-c9461a066562\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:18 crc kubenswrapper[4672]: I0217 16:24:18.139043 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf8z2\" (UniqueName: \"kubernetes.io/projected/20df91b1-1934-4461-a13b-c9461a066562-kube-api-access-vf8z2\") pod \"cloudkitty-api-0\" (UID: \"20df91b1-1934-4461-a13b-c9461a066562\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:18 crc kubenswrapper[4672]: I0217 16:24:18.139148 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20df91b1-1934-4461-a13b-c9461a066562-scripts\") pod \"cloudkitty-api-0\" (UID: \"20df91b1-1934-4461-a13b-c9461a066562\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:18 crc kubenswrapper[4672]: I0217 16:24:18.139229 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20df91b1-1934-4461-a13b-c9461a066562-logs\") pod \"cloudkitty-api-0\" (UID: \"20df91b1-1934-4461-a13b-c9461a066562\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:18 crc kubenswrapper[4672]: I0217 16:24:18.139333 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/20df91b1-1934-4461-a13b-c9461a066562-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"20df91b1-1934-4461-a13b-c9461a066562\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:18 crc kubenswrapper[4672]: I0217 16:24:18.139420 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/20df91b1-1934-4461-a13b-c9461a066562-certs\") pod \"cloudkitty-api-0\" (UID: \"20df91b1-1934-4461-a13b-c9461a066562\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:18 crc kubenswrapper[4672]: I0217 16:24:18.139644 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20df91b1-1934-4461-a13b-c9461a066562-config-data\") pod \"cloudkitty-api-0\" (UID: \"20df91b1-1934-4461-a13b-c9461a066562\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:18 crc kubenswrapper[4672]: I0217 16:24:18.178734 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 17 16:24:18 crc kubenswrapper[4672]: I0217 16:24:18.242225 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20df91b1-1934-4461-a13b-c9461a066562-scripts\") pod \"cloudkitty-api-0\" (UID: \"20df91b1-1934-4461-a13b-c9461a066562\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:18 crc kubenswrapper[4672]: I0217 16:24:18.242273 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20df91b1-1934-4461-a13b-c9461a066562-logs\") pod \"cloudkitty-api-0\" (UID: \"20df91b1-1934-4461-a13b-c9461a066562\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:18 crc kubenswrapper[4672]: I0217 16:24:18.242320 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/20df91b1-1934-4461-a13b-c9461a066562-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"20df91b1-1934-4461-a13b-c9461a066562\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:18 crc kubenswrapper[4672]: I0217 16:24:18.242347 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/20df91b1-1934-4461-a13b-c9461a066562-certs\") pod \"cloudkitty-api-0\" (UID: \"20df91b1-1934-4461-a13b-c9461a066562\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:18 crc kubenswrapper[4672]: I0217 16:24:18.242421 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20df91b1-1934-4461-a13b-c9461a066562-config-data\") pod \"cloudkitty-api-0\" (UID: \"20df91b1-1934-4461-a13b-c9461a066562\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:18 crc kubenswrapper[4672]: I0217 16:24:18.242482 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20df91b1-1934-4461-a13b-c9461a066562-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"20df91b1-1934-4461-a13b-c9461a066562\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:18 crc kubenswrapper[4672]: I0217 16:24:18.242525 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf8z2\" (UniqueName: \"kubernetes.io/projected/20df91b1-1934-4461-a13b-c9461a066562-kube-api-access-vf8z2\") pod \"cloudkitty-api-0\" (UID: \"20df91b1-1934-4461-a13b-c9461a066562\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:18 crc kubenswrapper[4672]: I0217 16:24:18.245464 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20df91b1-1934-4461-a13b-c9461a066562-logs\") pod \"cloudkitty-api-0\" (UID: \"20df91b1-1934-4461-a13b-c9461a066562\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:18 crc 
kubenswrapper[4672]: I0217 16:24:18.250070 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20df91b1-1934-4461-a13b-c9461a066562-scripts\") pod \"cloudkitty-api-0\" (UID: \"20df91b1-1934-4461-a13b-c9461a066562\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:18 crc kubenswrapper[4672]: I0217 16:24:18.256882 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20df91b1-1934-4461-a13b-c9461a066562-config-data\") pod \"cloudkitty-api-0\" (UID: \"20df91b1-1934-4461-a13b-c9461a066562\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:18 crc kubenswrapper[4672]: I0217 16:24:18.257919 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20df91b1-1934-4461-a13b-c9461a066562-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"20df91b1-1934-4461-a13b-c9461a066562\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:18 crc kubenswrapper[4672]: I0217 16:24:18.258986 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20df91b1-1934-4461-a13b-c9461a066562-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"20df91b1-1934-4461-a13b-c9461a066562\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:18 crc kubenswrapper[4672]: I0217 16:24:18.263739 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf8z2\" (UniqueName: \"kubernetes.io/projected/20df91b1-1934-4461-a13b-c9461a066562-kube-api-access-vf8z2\") pod \"cloudkitty-api-0\" (UID: \"20df91b1-1934-4461-a13b-c9461a066562\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:18 crc kubenswrapper[4672]: I0217 16:24:18.266295 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/20df91b1-1934-4461-a13b-c9461a066562-certs\") pod 
\"cloudkitty-api-0\" (UID: \"20df91b1-1934-4461-a13b-c9461a066562\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:18 crc kubenswrapper[4672]: I0217 16:24:18.359616 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 17 16:24:18 crc kubenswrapper[4672]: I0217 16:24:18.529737 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"261e82ba-d901-48e9-9890-768595c3e9df","Type":"ContainerStarted","Data":"76bcbfd3216cfc1f37682ca551e3fa5d5fe00389def7eda63afec5c06fe5de87"} Feb 17 16:24:18 crc kubenswrapper[4672]: I0217 16:24:18.731041 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 17 16:24:18 crc kubenswrapper[4672]: I0217 16:24:18.865025 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76d4d7c9b7-6prnv"] Feb 17 16:24:18 crc kubenswrapper[4672]: W0217 16:24:18.875342 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a2e45a5_f1bd_4f5b_82e6_f98168ece99a.slice/crio-4dbc418dede794706fca8c8ba8177efa88e7a3668bc123ec4547ce757e08ae06 WatchSource:0}: Error finding container 4dbc418dede794706fca8c8ba8177efa88e7a3668bc123ec4547ce757e08ae06: Status 404 returned error can't find the container with id 4dbc418dede794706fca8c8ba8177efa88e7a3668bc123ec4547ce757e08ae06 Feb 17 16:24:18 crc kubenswrapper[4672]: I0217 16:24:18.941435 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 17 16:24:19 crc kubenswrapper[4672]: I0217 16:24:19.542396 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"20df91b1-1934-4461-a13b-c9461a066562","Type":"ContainerStarted","Data":"f55b38cdffba60b5604bdf72379eb78f86549776388c53f61ae8b5bb937d186d"} Feb 17 16:24:19 crc kubenswrapper[4672]: I0217 16:24:19.542796 4672 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/cloudkitty-api-0" event={"ID":"20df91b1-1934-4461-a13b-c9461a066562","Type":"ContainerStarted","Data":"b00933f820701504e599f0992c5e804c1ea70d09a40b45a570fc403b14edaa19"} Feb 17 16:24:19 crc kubenswrapper[4672]: I0217 16:24:19.544153 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"61689860-63f3-424a-92e6-b5f0fd8d17b3","Type":"ContainerStarted","Data":"539184288a1cc9c35281a6ff3bb454120e6cbcf4ec56866c2f3e6ea567dfcb5f"} Feb 17 16:24:19 crc kubenswrapper[4672]: I0217 16:24:19.549441 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"57cae8a9-696d-48a0-9420-2a2d7ed2639f","Type":"ContainerStarted","Data":"6d86a4ac41648b299250f82035b9c5711f19cebe12c89fbe0ce1f1e57a44136c"} Feb 17 16:24:19 crc kubenswrapper[4672]: I0217 16:24:19.555893 4672 generic.go:334] "Generic (PLEG): container finished" podID="8a2e45a5-f1bd-4f5b-82e6-f98168ece99a" containerID="daddb4d78fbf72310ed73e7caa36ff0ac12e74c7ae0cefdcc0f7d05c4cb2f929" exitCode=0 Feb 17 16:24:19 crc kubenswrapper[4672]: I0217 16:24:19.555943 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76d4d7c9b7-6prnv" event={"ID":"8a2e45a5-f1bd-4f5b-82e6-f98168ece99a","Type":"ContainerDied","Data":"daddb4d78fbf72310ed73e7caa36ff0ac12e74c7ae0cefdcc0f7d05c4cb2f929"} Feb 17 16:24:19 crc kubenswrapper[4672]: I0217 16:24:19.555968 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76d4d7c9b7-6prnv" event={"ID":"8a2e45a5-f1bd-4f5b-82e6-f98168ece99a","Type":"ContainerStarted","Data":"4dbc418dede794706fca8c8ba8177efa88e7a3668bc123ec4547ce757e08ae06"} Feb 17 16:24:20 crc kubenswrapper[4672]: I0217 16:24:20.568062 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"20df91b1-1934-4461-a13b-c9461a066562","Type":"ContainerStarted","Data":"88b204dddd25be4a2c974e3fc03a9704927c82549760d896a71d236182ded6eb"} Feb 17 
16:24:20 crc kubenswrapper[4672]: I0217 16:24:20.568576 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0" Feb 17 16:24:20 crc kubenswrapper[4672]: I0217 16:24:20.569871 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"61689860-63f3-424a-92e6-b5f0fd8d17b3","Type":"ContainerStarted","Data":"f8e57ccc8273be702a93c3ae388d0443e8232cc5832135e1425e83dea1e1f226"} Feb 17 16:24:20 crc kubenswrapper[4672]: I0217 16:24:20.572205 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76d4d7c9b7-6prnv" event={"ID":"8a2e45a5-f1bd-4f5b-82e6-f98168ece99a","Type":"ContainerStarted","Data":"80910acdf1a097539503d5da5240eb78e6e4a94151a2cf7fda013b1c43c03528"} Feb 17 16:24:20 crc kubenswrapper[4672]: I0217 16:24:20.572324 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76d4d7c9b7-6prnv" Feb 17 16:24:20 crc kubenswrapper[4672]: I0217 16:24:20.575187 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"261e82ba-d901-48e9-9890-768595c3e9df","Type":"ContainerStarted","Data":"372e811646bfd7b1d3aab1076f1c0032e9fb71de3e71b2cbef15c4059bf12a48"} Feb 17 16:24:20 crc kubenswrapper[4672]: I0217 16:24:20.575352 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 16:24:20 crc kubenswrapper[4672]: I0217 16:24:20.596935 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=2.596918578 podStartE2EDuration="2.596918578s" podCreationTimestamp="2026-02-17 16:24:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:24:20.587809228 +0000 UTC m=+1269.341897960" watchObservedRunningTime="2026-02-17 16:24:20.596918578 +0000 UTC m=+1269.351007310" Feb 17 16:24:20 crc 
kubenswrapper[4672]: I0217 16:24:20.608048 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.4297687359999998 podStartE2EDuration="7.608025321s" podCreationTimestamp="2026-02-17 16:24:13 +0000 UTC" firstStartedPulling="2026-02-17 16:24:14.382762254 +0000 UTC m=+1263.136850996" lastFinishedPulling="2026-02-17 16:24:19.561018849 +0000 UTC m=+1268.315107581" observedRunningTime="2026-02-17 16:24:20.607448696 +0000 UTC m=+1269.361537438" watchObservedRunningTime="2026-02-17 16:24:20.608025321 +0000 UTC m=+1269.362114053" Feb 17 16:24:20 crc kubenswrapper[4672]: I0217 16:24:20.636102 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.636083781 podStartE2EDuration="4.636083781s" podCreationTimestamp="2026-02-17 16:24:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:24:20.625087241 +0000 UTC m=+1269.379175973" watchObservedRunningTime="2026-02-17 16:24:20.636083781 +0000 UTC m=+1269.390172513" Feb 17 16:24:20 crc kubenswrapper[4672]: I0217 16:24:20.652883 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76d4d7c9b7-6prnv" podStartSLOduration=3.652498823 podStartE2EDuration="3.652498823s" podCreationTimestamp="2026-02-17 16:24:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:24:20.646311621 +0000 UTC m=+1269.400400353" watchObservedRunningTime="2026-02-17 16:24:20.652498823 +0000 UTC m=+1269.406587555" Feb 17 16:24:20 crc kubenswrapper[4672]: I0217 16:24:20.964097 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 17 16:24:21 crc kubenswrapper[4672]: I0217 16:24:21.584020 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cloudkitty-proc-0" event={"ID":"57cae8a9-696d-48a0-9420-2a2d7ed2639f","Type":"ContainerStarted","Data":"61b5e2394a9a10a7bce25ecd8c016a4f6f7501899cddba68350096ffb8651bb1"} Feb 17 16:24:21 crc kubenswrapper[4672]: I0217 16:24:21.610189 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=2.946455049 podStartE2EDuration="4.61017321s" podCreationTimestamp="2026-02-17 16:24:17 +0000 UTC" firstStartedPulling="2026-02-17 16:24:18.746650119 +0000 UTC m=+1267.500738851" lastFinishedPulling="2026-02-17 16:24:20.41036828 +0000 UTC m=+1269.164457012" observedRunningTime="2026-02-17 16:24:21.606260767 +0000 UTC m=+1270.360349499" watchObservedRunningTime="2026-02-17 16:24:21.61017321 +0000 UTC m=+1270.364261942" Feb 17 16:24:21 crc kubenswrapper[4672]: I0217 16:24:21.626978 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 17 16:24:21 crc kubenswrapper[4672]: I0217 16:24:21.931071 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 17 16:24:22 crc kubenswrapper[4672]: I0217 16:24:22.591913 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="20df91b1-1934-4461-a13b-c9461a066562" containerName="cloudkitty-api-log" containerID="cri-o://f55b38cdffba60b5604bdf72379eb78f86549776388c53f61ae8b5bb937d186d" gracePeriod=30 Feb 17 16:24:22 crc kubenswrapper[4672]: I0217 16:24:22.592468 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="20df91b1-1934-4461-a13b-c9461a066562" containerName="cloudkitty-api" containerID="cri-o://88b204dddd25be4a2c974e3fc03a9704927c82549760d896a71d236182ded6eb" gracePeriod=30 Feb 17 16:24:23 crc kubenswrapper[4672]: E0217 16:24:23.160667 4672 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial 
failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6ad1f8c_ef18_4bd8_ac43_b8f1151277f6.slice/crio-4fede41a6b3d442704cc0b64a71cfcde9ecee5251694f4b5e0c64343367e5adb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6ad1f8c_ef18_4bd8_ac43_b8f1151277f6.slice\": RecentStats: unable to find data in memory cache]" Feb 17 16:24:23 crc kubenswrapper[4672]: I0217 16:24:23.549532 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 17 16:24:23 crc kubenswrapper[4672]: I0217 16:24:23.612847 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20df91b1-1934-4461-a13b-c9461a066562-logs\") pod \"20df91b1-1934-4461-a13b-c9461a066562\" (UID: \"20df91b1-1934-4461-a13b-c9461a066562\") " Feb 17 16:24:23 crc kubenswrapper[4672]: I0217 16:24:23.612935 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20df91b1-1934-4461-a13b-c9461a066562-scripts\") pod \"20df91b1-1934-4461-a13b-c9461a066562\" (UID: \"20df91b1-1934-4461-a13b-c9461a066562\") " Feb 17 16:24:23 crc kubenswrapper[4672]: I0217 16:24:23.612956 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20df91b1-1934-4461-a13b-c9461a066562-config-data\") pod \"20df91b1-1934-4461-a13b-c9461a066562\" (UID: \"20df91b1-1934-4461-a13b-c9461a066562\") " Feb 17 16:24:23 crc kubenswrapper[4672]: I0217 16:24:23.613051 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20df91b1-1934-4461-a13b-c9461a066562-config-data-custom\") pod \"20df91b1-1934-4461-a13b-c9461a066562\" (UID: \"20df91b1-1934-4461-a13b-c9461a066562\") " Feb 17 16:24:23 crc 
kubenswrapper[4672]: I0217 16:24:23.613116 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vf8z2\" (UniqueName: \"kubernetes.io/projected/20df91b1-1934-4461-a13b-c9461a066562-kube-api-access-vf8z2\") pod \"20df91b1-1934-4461-a13b-c9461a066562\" (UID: \"20df91b1-1934-4461-a13b-c9461a066562\") " Feb 17 16:24:23 crc kubenswrapper[4672]: I0217 16:24:23.613185 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/20df91b1-1934-4461-a13b-c9461a066562-certs\") pod \"20df91b1-1934-4461-a13b-c9461a066562\" (UID: \"20df91b1-1934-4461-a13b-c9461a066562\") " Feb 17 16:24:23 crc kubenswrapper[4672]: I0217 16:24:23.613272 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20df91b1-1934-4461-a13b-c9461a066562-combined-ca-bundle\") pod \"20df91b1-1934-4461-a13b-c9461a066562\" (UID: \"20df91b1-1934-4461-a13b-c9461a066562\") " Feb 17 16:24:23 crc kubenswrapper[4672]: I0217 16:24:23.621223 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20df91b1-1934-4461-a13b-c9461a066562-logs" (OuterVolumeSpecName: "logs") pod "20df91b1-1934-4461-a13b-c9461a066562" (UID: "20df91b1-1934-4461-a13b-c9461a066562"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:24:23 crc kubenswrapper[4672]: I0217 16:24:23.626626 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20df91b1-1934-4461-a13b-c9461a066562-kube-api-access-vf8z2" (OuterVolumeSpecName: "kube-api-access-vf8z2") pod "20df91b1-1934-4461-a13b-c9461a066562" (UID: "20df91b1-1934-4461-a13b-c9461a066562"). InnerVolumeSpecName "kube-api-access-vf8z2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:24:23 crc kubenswrapper[4672]: I0217 16:24:23.646244 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20df91b1-1934-4461-a13b-c9461a066562-certs" (OuterVolumeSpecName: "certs") pod "20df91b1-1934-4461-a13b-c9461a066562" (UID: "20df91b1-1934-4461-a13b-c9461a066562"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:24:23 crc kubenswrapper[4672]: I0217 16:24:23.652706 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20df91b1-1934-4461-a13b-c9461a066562-scripts" (OuterVolumeSpecName: "scripts") pod "20df91b1-1934-4461-a13b-c9461a066562" (UID: "20df91b1-1934-4461-a13b-c9461a066562"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:23 crc kubenswrapper[4672]: I0217 16:24:23.660171 4672 generic.go:334] "Generic (PLEG): container finished" podID="20df91b1-1934-4461-a13b-c9461a066562" containerID="88b204dddd25be4a2c974e3fc03a9704927c82549760d896a71d236182ded6eb" exitCode=0 Feb 17 16:24:23 crc kubenswrapper[4672]: I0217 16:24:23.660205 4672 generic.go:334] "Generic (PLEG): container finished" podID="20df91b1-1934-4461-a13b-c9461a066562" containerID="f55b38cdffba60b5604bdf72379eb78f86549776388c53f61ae8b5bb937d186d" exitCode=143 Feb 17 16:24:23 crc kubenswrapper[4672]: I0217 16:24:23.660377 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-proc-0" podUID="57cae8a9-696d-48a0-9420-2a2d7ed2639f" containerName="cloudkitty-proc" containerID="cri-o://61b5e2394a9a10a7bce25ecd8c016a4f6f7501899cddba68350096ffb8651bb1" gracePeriod=30 Feb 17 16:24:23 crc kubenswrapper[4672]: I0217 16:24:23.660523 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 17 16:24:23 crc kubenswrapper[4672]: I0217 16:24:23.660990 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"20df91b1-1934-4461-a13b-c9461a066562","Type":"ContainerDied","Data":"88b204dddd25be4a2c974e3fc03a9704927c82549760d896a71d236182ded6eb"} Feb 17 16:24:23 crc kubenswrapper[4672]: I0217 16:24:23.661018 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"20df91b1-1934-4461-a13b-c9461a066562","Type":"ContainerDied","Data":"f55b38cdffba60b5604bdf72379eb78f86549776388c53f61ae8b5bb937d186d"} Feb 17 16:24:23 crc kubenswrapper[4672]: I0217 16:24:23.661029 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"20df91b1-1934-4461-a13b-c9461a066562","Type":"ContainerDied","Data":"b00933f820701504e599f0992c5e804c1ea70d09a40b45a570fc403b14edaa19"} Feb 17 16:24:23 crc kubenswrapper[4672]: I0217 16:24:23.661044 4672 scope.go:117] "RemoveContainer" containerID="88b204dddd25be4a2c974e3fc03a9704927c82549760d896a71d236182ded6eb" Feb 17 16:24:23 crc kubenswrapper[4672]: I0217 16:24:23.665292 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20df91b1-1934-4461-a13b-c9461a066562-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "20df91b1-1934-4461-a13b-c9461a066562" (UID: "20df91b1-1934-4461-a13b-c9461a066562"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:23 crc kubenswrapper[4672]: I0217 16:24:23.713625 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20df91b1-1934-4461-a13b-c9461a066562-config-data" (OuterVolumeSpecName: "config-data") pod "20df91b1-1934-4461-a13b-c9461a066562" (UID: "20df91b1-1934-4461-a13b-c9461a066562"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:23 crc kubenswrapper[4672]: I0217 16:24:23.716074 4672 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20df91b1-1934-4461-a13b-c9461a066562-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:23 crc kubenswrapper[4672]: I0217 16:24:23.716133 4672 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20df91b1-1934-4461-a13b-c9461a066562-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:23 crc kubenswrapper[4672]: I0217 16:24:23.716145 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20df91b1-1934-4461-a13b-c9461a066562-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:23 crc kubenswrapper[4672]: I0217 16:24:23.716157 4672 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20df91b1-1934-4461-a13b-c9461a066562-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:23 crc kubenswrapper[4672]: I0217 16:24:23.716169 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vf8z2\" (UniqueName: \"kubernetes.io/projected/20df91b1-1934-4461-a13b-c9461a066562-kube-api-access-vf8z2\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:23 crc kubenswrapper[4672]: I0217 16:24:23.716181 4672 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/20df91b1-1934-4461-a13b-c9461a066562-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:23 crc kubenswrapper[4672]: I0217 16:24:23.732958 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20df91b1-1934-4461-a13b-c9461a066562-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20df91b1-1934-4461-a13b-c9461a066562" (UID: "20df91b1-1934-4461-a13b-c9461a066562"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:23 crc kubenswrapper[4672]: I0217 16:24:23.822251 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20df91b1-1934-4461-a13b-c9461a066562-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:23 crc kubenswrapper[4672]: I0217 16:24:23.824174 4672 scope.go:117] "RemoveContainer" containerID="f55b38cdffba60b5604bdf72379eb78f86549776388c53f61ae8b5bb937d186d" Feb 17 16:24:23 crc kubenswrapper[4672]: I0217 16:24:23.846917 4672 scope.go:117] "RemoveContainer" containerID="88b204dddd25be4a2c974e3fc03a9704927c82549760d896a71d236182ded6eb" Feb 17 16:24:23 crc kubenswrapper[4672]: E0217 16:24:23.847546 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88b204dddd25be4a2c974e3fc03a9704927c82549760d896a71d236182ded6eb\": container with ID starting with 88b204dddd25be4a2c974e3fc03a9704927c82549760d896a71d236182ded6eb not found: ID does not exist" containerID="88b204dddd25be4a2c974e3fc03a9704927c82549760d896a71d236182ded6eb" Feb 17 16:24:23 crc kubenswrapper[4672]: I0217 16:24:23.847586 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88b204dddd25be4a2c974e3fc03a9704927c82549760d896a71d236182ded6eb"} err="failed to get container status \"88b204dddd25be4a2c974e3fc03a9704927c82549760d896a71d236182ded6eb\": rpc error: code = NotFound desc = could not find container \"88b204dddd25be4a2c974e3fc03a9704927c82549760d896a71d236182ded6eb\": container with ID starting with 88b204dddd25be4a2c974e3fc03a9704927c82549760d896a71d236182ded6eb not found: ID does not exist" Feb 17 16:24:23 crc kubenswrapper[4672]: I0217 16:24:23.847614 4672 scope.go:117] "RemoveContainer" containerID="f55b38cdffba60b5604bdf72379eb78f86549776388c53f61ae8b5bb937d186d" Feb 17 16:24:23 crc 
kubenswrapper[4672]: E0217 16:24:23.848039 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f55b38cdffba60b5604bdf72379eb78f86549776388c53f61ae8b5bb937d186d\": container with ID starting with f55b38cdffba60b5604bdf72379eb78f86549776388c53f61ae8b5bb937d186d not found: ID does not exist" containerID="f55b38cdffba60b5604bdf72379eb78f86549776388c53f61ae8b5bb937d186d" Feb 17 16:24:23 crc kubenswrapper[4672]: I0217 16:24:23.848085 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f55b38cdffba60b5604bdf72379eb78f86549776388c53f61ae8b5bb937d186d"} err="failed to get container status \"f55b38cdffba60b5604bdf72379eb78f86549776388c53f61ae8b5bb937d186d\": rpc error: code = NotFound desc = could not find container \"f55b38cdffba60b5604bdf72379eb78f86549776388c53f61ae8b5bb937d186d\": container with ID starting with f55b38cdffba60b5604bdf72379eb78f86549776388c53f61ae8b5bb937d186d not found: ID does not exist" Feb 17 16:24:23 crc kubenswrapper[4672]: I0217 16:24:23.848113 4672 scope.go:117] "RemoveContainer" containerID="88b204dddd25be4a2c974e3fc03a9704927c82549760d896a71d236182ded6eb" Feb 17 16:24:23 crc kubenswrapper[4672]: I0217 16:24:23.848548 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88b204dddd25be4a2c974e3fc03a9704927c82549760d896a71d236182ded6eb"} err="failed to get container status \"88b204dddd25be4a2c974e3fc03a9704927c82549760d896a71d236182ded6eb\": rpc error: code = NotFound desc = could not find container \"88b204dddd25be4a2c974e3fc03a9704927c82549760d896a71d236182ded6eb\": container with ID starting with 88b204dddd25be4a2c974e3fc03a9704927c82549760d896a71d236182ded6eb not found: ID does not exist" Feb 17 16:24:23 crc kubenswrapper[4672]: I0217 16:24:23.848571 4672 scope.go:117] "RemoveContainer" containerID="f55b38cdffba60b5604bdf72379eb78f86549776388c53f61ae8b5bb937d186d" Feb 17 
16:24:23 crc kubenswrapper[4672]: I0217 16:24:23.848895 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f55b38cdffba60b5604bdf72379eb78f86549776388c53f61ae8b5bb937d186d"} err="failed to get container status \"f55b38cdffba60b5604bdf72379eb78f86549776388c53f61ae8b5bb937d186d\": rpc error: code = NotFound desc = could not find container \"f55b38cdffba60b5604bdf72379eb78f86549776388c53f61ae8b5bb937d186d\": container with ID starting with f55b38cdffba60b5604bdf72379eb78f86549776388c53f61ae8b5bb937d186d not found: ID does not exist" Feb 17 16:24:23 crc kubenswrapper[4672]: I0217 16:24:23.989085 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 17 16:24:23 crc kubenswrapper[4672]: I0217 16:24:23.997984 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.006300 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"] Feb 17 16:24:24 crc kubenswrapper[4672]: E0217 16:24:24.006706 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20df91b1-1934-4461-a13b-c9461a066562" containerName="cloudkitty-api-log" Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.006724 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="20df91b1-1934-4461-a13b-c9461a066562" containerName="cloudkitty-api-log" Feb 17 16:24:24 crc kubenswrapper[4672]: E0217 16:24:24.006733 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20df91b1-1934-4461-a13b-c9461a066562" containerName="cloudkitty-api" Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.006740 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="20df91b1-1934-4461-a13b-c9461a066562" containerName="cloudkitty-api" Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.006936 4672 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="20df91b1-1934-4461-a13b-c9461a066562" containerName="cloudkitty-api-log" Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.006968 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="20df91b1-1934-4461-a13b-c9461a066562" containerName="cloudkitty-api" Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.012485 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.014342 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data" Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.015573 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5ff45d6f4b-l6mqf" Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.016867 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-internal-svc" Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.017030 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-public-svc" Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.025349 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fb90291-b26f-465e-9f31-aa9336133b6b-scripts\") pod \"cloudkitty-api-0\" (UID: \"4fb90291-b26f-465e-9f31-aa9336133b6b\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.025457 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/4fb90291-b26f-465e-9f31-aa9336133b6b-certs\") pod \"cloudkitty-api-0\" (UID: \"4fb90291-b26f-465e-9f31-aa9336133b6b\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.025490 4672 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxzl4\" (UniqueName: \"kubernetes.io/projected/4fb90291-b26f-465e-9f31-aa9336133b6b-kube-api-access-jxzl4\") pod \"cloudkitty-api-0\" (UID: \"4fb90291-b26f-465e-9f31-aa9336133b6b\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.025616 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fb90291-b26f-465e-9f31-aa9336133b6b-logs\") pod \"cloudkitty-api-0\" (UID: \"4fb90291-b26f-465e-9f31-aa9336133b6b\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.025643 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fb90291-b26f-465e-9f31-aa9336133b6b-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"4fb90291-b26f-465e-9f31-aa9336133b6b\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.025660 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fb90291-b26f-465e-9f31-aa9336133b6b-config-data\") pod \"cloudkitty-api-0\" (UID: \"4fb90291-b26f-465e-9f31-aa9336133b6b\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.025696 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4fb90291-b26f-465e-9f31-aa9336133b6b-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"4fb90291-b26f-465e-9f31-aa9336133b6b\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.025712 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fb90291-b26f-465e-9f31-aa9336133b6b-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"4fb90291-b26f-465e-9f31-aa9336133b6b\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.025739 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fb90291-b26f-465e-9f31-aa9336133b6b-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"4fb90291-b26f-465e-9f31-aa9336133b6b\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.038564 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.127835 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/4fb90291-b26f-465e-9f31-aa9336133b6b-certs\") pod \"cloudkitty-api-0\" (UID: \"4fb90291-b26f-465e-9f31-aa9336133b6b\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.128178 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxzl4\" (UniqueName: \"kubernetes.io/projected/4fb90291-b26f-465e-9f31-aa9336133b6b-kube-api-access-jxzl4\") pod \"cloudkitty-api-0\" (UID: \"4fb90291-b26f-465e-9f31-aa9336133b6b\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.128229 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fb90291-b26f-465e-9f31-aa9336133b6b-logs\") pod \"cloudkitty-api-0\" (UID: \"4fb90291-b26f-465e-9f31-aa9336133b6b\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.128259 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fb90291-b26f-465e-9f31-aa9336133b6b-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"4fb90291-b26f-465e-9f31-aa9336133b6b\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.128287 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fb90291-b26f-465e-9f31-aa9336133b6b-config-data\") pod \"cloudkitty-api-0\" (UID: \"4fb90291-b26f-465e-9f31-aa9336133b6b\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.128321 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4fb90291-b26f-465e-9f31-aa9336133b6b-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"4fb90291-b26f-465e-9f31-aa9336133b6b\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.128349 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fb90291-b26f-465e-9f31-aa9336133b6b-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"4fb90291-b26f-465e-9f31-aa9336133b6b\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.128413 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fb90291-b26f-465e-9f31-aa9336133b6b-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"4fb90291-b26f-465e-9f31-aa9336133b6b\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.128582 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fb90291-b26f-465e-9f31-aa9336133b6b-scripts\") pod \"cloudkitty-api-0\" (UID: 
\"4fb90291-b26f-465e-9f31-aa9336133b6b\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.129038 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fb90291-b26f-465e-9f31-aa9336133b6b-logs\") pod \"cloudkitty-api-0\" (UID: \"4fb90291-b26f-465e-9f31-aa9336133b6b\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.131393 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4fb90291-b26f-465e-9f31-aa9336133b6b-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"4fb90291-b26f-465e-9f31-aa9336133b6b\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.131935 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/4fb90291-b26f-465e-9f31-aa9336133b6b-certs\") pod \"cloudkitty-api-0\" (UID: \"4fb90291-b26f-465e-9f31-aa9336133b6b\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.132110 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fb90291-b26f-465e-9f31-aa9336133b6b-config-data\") pod \"cloudkitty-api-0\" (UID: \"4fb90291-b26f-465e-9f31-aa9336133b6b\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.133240 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fb90291-b26f-465e-9f31-aa9336133b6b-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"4fb90291-b26f-465e-9f31-aa9336133b6b\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.133936 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4fb90291-b26f-465e-9f31-aa9336133b6b-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"4fb90291-b26f-465e-9f31-aa9336133b6b\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.134260 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fb90291-b26f-465e-9f31-aa9336133b6b-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"4fb90291-b26f-465e-9f31-aa9336133b6b\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.144968 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxzl4\" (UniqueName: \"kubernetes.io/projected/4fb90291-b26f-465e-9f31-aa9336133b6b-kube-api-access-jxzl4\") pod \"cloudkitty-api-0\" (UID: \"4fb90291-b26f-465e-9f31-aa9336133b6b\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.150092 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fb90291-b26f-465e-9f31-aa9336133b6b-scripts\") pod \"cloudkitty-api-0\" (UID: \"4fb90291-b26f-465e-9f31-aa9336133b6b\") " pod="openstack/cloudkitty-api-0" Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.329929 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5ff45d6f4b-l6mqf" Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.344617 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.413109 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6fd59c5bf8-d6vtf"] Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.413346 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6fd59c5bf8-d6vtf" podUID="87bfbbe6-1b72-4f9b-bb5e-ef6560acab76" containerName="barbican-api-log" containerID="cri-o://85f8023b11ad72082696a55d706231bbf1a9c87c41050c966d87a3c4fc183133" gracePeriod=30 Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.413687 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6fd59c5bf8-d6vtf" podUID="87bfbbe6-1b72-4f9b-bb5e-ef6560acab76" containerName="barbican-api" containerID="cri-o://6bc9df9fc3da8cf3b3195fbd8b1f5ea498fd9699337929a0fcc8e0ac1158d24c" gracePeriod=30 Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.673003 4672 generic.go:334] "Generic (PLEG): container finished" podID="87bfbbe6-1b72-4f9b-bb5e-ef6560acab76" containerID="85f8023b11ad72082696a55d706231bbf1a9c87c41050c966d87a3c4fc183133" exitCode=143 Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.673121 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6fd59c5bf8-d6vtf" event={"ID":"87bfbbe6-1b72-4f9b-bb5e-ef6560acab76","Type":"ContainerDied","Data":"85f8023b11ad72082696a55d706231bbf1a9c87c41050c966d87a3c4fc183133"} Feb 17 16:24:24 crc kubenswrapper[4672]: I0217 16:24:24.868503 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 17 16:24:24 crc kubenswrapper[4672]: W0217 16:24:24.871196 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fb90291_b26f_465e_9f31_aa9336133b6b.slice/crio-4e4ec28a701ea2c5da5b087df22142abf47f9f72adaf5dc315117a0f862538fb WatchSource:0}: 
Error finding container 4e4ec28a701ea2c5da5b087df22142abf47f9f72adaf5dc315117a0f862538fb: Status 404 returned error can't find the container with id 4e4ec28a701ea2c5da5b087df22142abf47f9f72adaf5dc315117a0f862538fb Feb 17 16:24:25 crc kubenswrapper[4672]: I0217 16:24:25.685322 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"4fb90291-b26f-465e-9f31-aa9336133b6b","Type":"ContainerStarted","Data":"b877e8b3d89cf892ca7aa871b8526b7c195569c43fc2db7f55606d64eacbbf15"} Feb 17 16:24:25 crc kubenswrapper[4672]: I0217 16:24:25.685837 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0" Feb 17 16:24:25 crc kubenswrapper[4672]: I0217 16:24:25.685851 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"4fb90291-b26f-465e-9f31-aa9336133b6b","Type":"ContainerStarted","Data":"b06e1d1c6f54383551049aef80e1960706c15499527c554b3d29951770cf2929"} Feb 17 16:24:25 crc kubenswrapper[4672]: I0217 16:24:25.685863 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"4fb90291-b26f-465e-9f31-aa9336133b6b","Type":"ContainerStarted","Data":"4e4ec28a701ea2c5da5b087df22142abf47f9f72adaf5dc315117a0f862538fb"} Feb 17 16:24:25 crc kubenswrapper[4672]: I0217 16:24:25.715173 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=2.71514924 podStartE2EDuration="2.71514924s" podCreationTimestamp="2026-02-17 16:24:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:24:25.703099692 +0000 UTC m=+1274.457188424" watchObservedRunningTime="2026-02-17 16:24:25.71514924 +0000 UTC m=+1274.469237992" Feb 17 16:24:25 crc kubenswrapper[4672]: I0217 16:24:25.976907 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="20df91b1-1934-4461-a13b-c9461a066562" path="/var/lib/kubelet/pods/20df91b1-1934-4461-a13b-c9461a066562/volumes" Feb 17 16:24:27 crc kubenswrapper[4672]: I0217 16:24:27.161639 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 17 16:24:27 crc kubenswrapper[4672]: I0217 16:24:27.595725 4672 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6fd59c5bf8-d6vtf" podUID="87bfbbe6-1b72-4f9b-bb5e-ef6560acab76" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.181:9311/healthcheck\": read tcp 10.217.0.2:59090->10.217.0.181:9311: read: connection reset by peer" Feb 17 16:24:27 crc kubenswrapper[4672]: I0217 16:24:27.595796 4672 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6fd59c5bf8-d6vtf" podUID="87bfbbe6-1b72-4f9b-bb5e-ef6560acab76" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.181:9311/healthcheck\": read tcp 10.217.0.2:59100->10.217.0.181:9311: read: connection reset by peer" Feb 17 16:24:27 crc kubenswrapper[4672]: I0217 16:24:27.734082 4672 generic.go:334] "Generic (PLEG): container finished" podID="87bfbbe6-1b72-4f9b-bb5e-ef6560acab76" containerID="6bc9df9fc3da8cf3b3195fbd8b1f5ea498fd9699337929a0fcc8e0ac1158d24c" exitCode=0 Feb 17 16:24:27 crc kubenswrapper[4672]: I0217 16:24:27.734123 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6fd59c5bf8-d6vtf" event={"ID":"87bfbbe6-1b72-4f9b-bb5e-ef6560acab76","Type":"ContainerDied","Data":"6bc9df9fc3da8cf3b3195fbd8b1f5ea498fd9699337929a0fcc8e0ac1158d24c"} Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.095921 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76d4d7c9b7-6prnv" Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.177156 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-5784cf869f-mlrbg"] Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.177446 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5784cf869f-mlrbg" podUID="4141331e-2369-48a5-9f23-15b35887e53b" containerName="dnsmasq-dns" containerID="cri-o://5de0f4fbb7d27105885f6d589fd071b134695174d91ec2848d522fa7b7395b1c" gracePeriod=10 Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.183693 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6fd59c5bf8-d6vtf" Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.238320 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87bfbbe6-1b72-4f9b-bb5e-ef6560acab76-combined-ca-bundle\") pod \"87bfbbe6-1b72-4f9b-bb5e-ef6560acab76\" (UID: \"87bfbbe6-1b72-4f9b-bb5e-ef6560acab76\") " Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.238384 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87bfbbe6-1b72-4f9b-bb5e-ef6560acab76-config-data\") pod \"87bfbbe6-1b72-4f9b-bb5e-ef6560acab76\" (UID: \"87bfbbe6-1b72-4f9b-bb5e-ef6560acab76\") " Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.238518 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87bfbbe6-1b72-4f9b-bb5e-ef6560acab76-config-data-custom\") pod \"87bfbbe6-1b72-4f9b-bb5e-ef6560acab76\" (UID: \"87bfbbe6-1b72-4f9b-bb5e-ef6560acab76\") " Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.238559 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pflrn\" (UniqueName: \"kubernetes.io/projected/87bfbbe6-1b72-4f9b-bb5e-ef6560acab76-kube-api-access-pflrn\") pod \"87bfbbe6-1b72-4f9b-bb5e-ef6560acab76\" (UID: 
\"87bfbbe6-1b72-4f9b-bb5e-ef6560acab76\") " Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.238684 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87bfbbe6-1b72-4f9b-bb5e-ef6560acab76-logs\") pod \"87bfbbe6-1b72-4f9b-bb5e-ef6560acab76\" (UID: \"87bfbbe6-1b72-4f9b-bb5e-ef6560acab76\") " Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.239425 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87bfbbe6-1b72-4f9b-bb5e-ef6560acab76-logs" (OuterVolumeSpecName: "logs") pod "87bfbbe6-1b72-4f9b-bb5e-ef6560acab76" (UID: "87bfbbe6-1b72-4f9b-bb5e-ef6560acab76"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.246978 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87bfbbe6-1b72-4f9b-bb5e-ef6560acab76-kube-api-access-pflrn" (OuterVolumeSpecName: "kube-api-access-pflrn") pod "87bfbbe6-1b72-4f9b-bb5e-ef6560acab76" (UID: "87bfbbe6-1b72-4f9b-bb5e-ef6560acab76"). InnerVolumeSpecName "kube-api-access-pflrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.256642 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87bfbbe6-1b72-4f9b-bb5e-ef6560acab76-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "87bfbbe6-1b72-4f9b-bb5e-ef6560acab76" (UID: "87bfbbe6-1b72-4f9b-bb5e-ef6560acab76"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.283679 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87bfbbe6-1b72-4f9b-bb5e-ef6560acab76-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87bfbbe6-1b72-4f9b-bb5e-ef6560acab76" (UID: "87bfbbe6-1b72-4f9b-bb5e-ef6560acab76"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.311637 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87bfbbe6-1b72-4f9b-bb5e-ef6560acab76-config-data" (OuterVolumeSpecName: "config-data") pod "87bfbbe6-1b72-4f9b-bb5e-ef6560acab76" (UID: "87bfbbe6-1b72-4f9b-bb5e-ef6560acab76"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.341923 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87bfbbe6-1b72-4f9b-bb5e-ef6560acab76-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.341952 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87bfbbe6-1b72-4f9b-bb5e-ef6560acab76-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.341962 4672 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87bfbbe6-1b72-4f9b-bb5e-ef6560acab76-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.341971 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pflrn\" (UniqueName: \"kubernetes.io/projected/87bfbbe6-1b72-4f9b-bb5e-ef6560acab76-kube-api-access-pflrn\") on node \"crc\" 
DevicePath \"\"" Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.341981 4672 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87bfbbe6-1b72-4f9b-bb5e-ef6560acab76-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.447085 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6cc66b5c9b-dpjsg" Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.448127 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6cc66b5c9b-dpjsg" Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.757621 4672 generic.go:334] "Generic (PLEG): container finished" podID="4141331e-2369-48a5-9f23-15b35887e53b" containerID="5de0f4fbb7d27105885f6d589fd071b134695174d91ec2848d522fa7b7395b1c" exitCode=0 Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.757672 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-mlrbg" event={"ID":"4141331e-2369-48a5-9f23-15b35887e53b","Type":"ContainerDied","Data":"5de0f4fbb7d27105885f6d589fd071b134695174d91ec2848d522fa7b7395b1c"} Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.757697 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-mlrbg" event={"ID":"4141331e-2369-48a5-9f23-15b35887e53b","Type":"ContainerDied","Data":"b052e7a7a0321bb968c5b819aa99326f68c1c4421ba3015ee5028d49b5ccae44"} Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.757708 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b052e7a7a0321bb968c5b819aa99326f68c1c4421ba3015ee5028d49b5ccae44" Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.759966 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6fd59c5bf8-d6vtf" 
event={"ID":"87bfbbe6-1b72-4f9b-bb5e-ef6560acab76","Type":"ContainerDied","Data":"c80ab85df46ec477c1793e6b52e4d0e63d7919c1ee14a5fa4cf19a146f84222d"} Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.760028 4672 scope.go:117] "RemoveContainer" containerID="6bc9df9fc3da8cf3b3195fbd8b1f5ea498fd9699337929a0fcc8e0ac1158d24c" Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.760211 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6fd59c5bf8-d6vtf" Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.767342 4672 generic.go:334] "Generic (PLEG): container finished" podID="57cae8a9-696d-48a0-9420-2a2d7ed2639f" containerID="61b5e2394a9a10a7bce25ecd8c016a4f6f7501899cddba68350096ffb8651bb1" exitCode=0 Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.767528 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"57cae8a9-696d-48a0-9420-2a2d7ed2639f","Type":"ContainerDied","Data":"61b5e2394a9a10a7bce25ecd8c016a4f6f7501899cddba68350096ffb8651bb1"} Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.793895 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-mlrbg" Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.801769 4672 scope.go:117] "RemoveContainer" containerID="85f8023b11ad72082696a55d706231bbf1a9c87c41050c966d87a3c4fc183133" Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.812700 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6fd59c5bf8-d6vtf"] Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.820868 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6fd59c5bf8-d6vtf"] Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.853437 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4141331e-2369-48a5-9f23-15b35887e53b-ovsdbserver-sb\") pod \"4141331e-2369-48a5-9f23-15b35887e53b\" (UID: \"4141331e-2369-48a5-9f23-15b35887e53b\") " Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.853874 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4141331e-2369-48a5-9f23-15b35887e53b-ovsdbserver-nb\") pod \"4141331e-2369-48a5-9f23-15b35887e53b\" (UID: \"4141331e-2369-48a5-9f23-15b35887e53b\") " Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.854085 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4141331e-2369-48a5-9f23-15b35887e53b-dns-swift-storage-0\") pod \"4141331e-2369-48a5-9f23-15b35887e53b\" (UID: \"4141331e-2369-48a5-9f23-15b35887e53b\") " Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.854210 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nq6j\" (UniqueName: \"kubernetes.io/projected/4141331e-2369-48a5-9f23-15b35887e53b-kube-api-access-5nq6j\") pod \"4141331e-2369-48a5-9f23-15b35887e53b\" (UID: 
\"4141331e-2369-48a5-9f23-15b35887e53b\") " Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.854314 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4141331e-2369-48a5-9f23-15b35887e53b-dns-svc\") pod \"4141331e-2369-48a5-9f23-15b35887e53b\" (UID: \"4141331e-2369-48a5-9f23-15b35887e53b\") " Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.854490 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4141331e-2369-48a5-9f23-15b35887e53b-config\") pod \"4141331e-2369-48a5-9f23-15b35887e53b\" (UID: \"4141331e-2369-48a5-9f23-15b35887e53b\") " Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.859793 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4141331e-2369-48a5-9f23-15b35887e53b-kube-api-access-5nq6j" (OuterVolumeSpecName: "kube-api-access-5nq6j") pod "4141331e-2369-48a5-9f23-15b35887e53b" (UID: "4141331e-2369-48a5-9f23-15b35887e53b"). InnerVolumeSpecName "kube-api-access-5nq6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.921974 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4141331e-2369-48a5-9f23-15b35887e53b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4141331e-2369-48a5-9f23-15b35887e53b" (UID: "4141331e-2369-48a5-9f23-15b35887e53b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.922859 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4141331e-2369-48a5-9f23-15b35887e53b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4141331e-2369-48a5-9f23-15b35887e53b" (UID: "4141331e-2369-48a5-9f23-15b35887e53b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.942874 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4141331e-2369-48a5-9f23-15b35887e53b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4141331e-2369-48a5-9f23-15b35887e53b" (UID: "4141331e-2369-48a5-9f23-15b35887e53b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.944358 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4141331e-2369-48a5-9f23-15b35887e53b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4141331e-2369-48a5-9f23-15b35887e53b" (UID: "4141331e-2369-48a5-9f23-15b35887e53b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.952765 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4141331e-2369-48a5-9f23-15b35887e53b-config" (OuterVolumeSpecName: "config") pod "4141331e-2369-48a5-9f23-15b35887e53b" (UID: "4141331e-2369-48a5-9f23-15b35887e53b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.957149 4672 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4141331e-2369-48a5-9f23-15b35887e53b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.957181 4672 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4141331e-2369-48a5-9f23-15b35887e53b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.957192 4672 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4141331e-2369-48a5-9f23-15b35887e53b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.957201 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nq6j\" (UniqueName: \"kubernetes.io/projected/4141331e-2369-48a5-9f23-15b35887e53b-kube-api-access-5nq6j\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.957212 4672 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4141331e-2369-48a5-9f23-15b35887e53b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:28 crc kubenswrapper[4672]: I0217 16:24:28.957221 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4141331e-2369-48a5-9f23-15b35887e53b-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.014370 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.034786 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-865bd5d96d-f924s" Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.059485 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57cae8a9-696d-48a0-9420-2a2d7ed2639f-config-data\") pod \"57cae8a9-696d-48a0-9420-2a2d7ed2639f\" (UID: \"57cae8a9-696d-48a0-9420-2a2d7ed2639f\") " Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.059542 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/57cae8a9-696d-48a0-9420-2a2d7ed2639f-certs\") pod \"57cae8a9-696d-48a0-9420-2a2d7ed2639f\" (UID: \"57cae8a9-696d-48a0-9420-2a2d7ed2639f\") " Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.059614 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4htvg\" (UniqueName: \"kubernetes.io/projected/57cae8a9-696d-48a0-9420-2a2d7ed2639f-kube-api-access-4htvg\") pod \"57cae8a9-696d-48a0-9420-2a2d7ed2639f\" (UID: \"57cae8a9-696d-48a0-9420-2a2d7ed2639f\") " Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.059644 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57cae8a9-696d-48a0-9420-2a2d7ed2639f-scripts\") pod \"57cae8a9-696d-48a0-9420-2a2d7ed2639f\" (UID: \"57cae8a9-696d-48a0-9420-2a2d7ed2639f\") " Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.059691 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57cae8a9-696d-48a0-9420-2a2d7ed2639f-config-data-custom\") pod \"57cae8a9-696d-48a0-9420-2a2d7ed2639f\" (UID: \"57cae8a9-696d-48a0-9420-2a2d7ed2639f\") " Feb 17 16:24:29 
crc kubenswrapper[4672]: I0217 16:24:29.059750 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57cae8a9-696d-48a0-9420-2a2d7ed2639f-combined-ca-bundle\") pod \"57cae8a9-696d-48a0-9420-2a2d7ed2639f\" (UID: \"57cae8a9-696d-48a0-9420-2a2d7ed2639f\") " Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.065562 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57cae8a9-696d-48a0-9420-2a2d7ed2639f-certs" (OuterVolumeSpecName: "certs") pod "57cae8a9-696d-48a0-9420-2a2d7ed2639f" (UID: "57cae8a9-696d-48a0-9420-2a2d7ed2639f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.067786 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57cae8a9-696d-48a0-9420-2a2d7ed2639f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "57cae8a9-696d-48a0-9420-2a2d7ed2639f" (UID: "57cae8a9-696d-48a0-9420-2a2d7ed2639f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.068291 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57cae8a9-696d-48a0-9420-2a2d7ed2639f-kube-api-access-4htvg" (OuterVolumeSpecName: "kube-api-access-4htvg") pod "57cae8a9-696d-48a0-9420-2a2d7ed2639f" (UID: "57cae8a9-696d-48a0-9420-2a2d7ed2639f"). InnerVolumeSpecName "kube-api-access-4htvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.071047 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57cae8a9-696d-48a0-9420-2a2d7ed2639f-scripts" (OuterVolumeSpecName: "scripts") pod "57cae8a9-696d-48a0-9420-2a2d7ed2639f" (UID: "57cae8a9-696d-48a0-9420-2a2d7ed2639f"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.102774 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57cae8a9-696d-48a0-9420-2a2d7ed2639f-config-data" (OuterVolumeSpecName: "config-data") pod "57cae8a9-696d-48a0-9420-2a2d7ed2639f" (UID: "57cae8a9-696d-48a0-9420-2a2d7ed2639f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.105585 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57cae8a9-696d-48a0-9420-2a2d7ed2639f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57cae8a9-696d-48a0-9420-2a2d7ed2639f" (UID: "57cae8a9-696d-48a0-9420-2a2d7ed2639f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.162555 4672 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57cae8a9-696d-48a0-9420-2a2d7ed2639f-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.162606 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57cae8a9-696d-48a0-9420-2a2d7ed2639f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.162616 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57cae8a9-696d-48a0-9420-2a2d7ed2639f-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.162624 4672 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/57cae8a9-696d-48a0-9420-2a2d7ed2639f-certs\") on node \"crc\" 
DevicePath \"\"" Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.162637 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4htvg\" (UniqueName: \"kubernetes.io/projected/57cae8a9-696d-48a0-9420-2a2d7ed2639f-kube-api-access-4htvg\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.162644 4672 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57cae8a9-696d-48a0-9420-2a2d7ed2639f-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.776377 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"57cae8a9-696d-48a0-9420-2a2d7ed2639f","Type":"ContainerDied","Data":"6d86a4ac41648b299250f82035b9c5711f19cebe12c89fbe0ce1f1e57a44136c"} Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.776710 4672 scope.go:117] "RemoveContainer" containerID="61b5e2394a9a10a7bce25ecd8c016a4f6f7501899cddba68350096ffb8651bb1" Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.776828 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.778989 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-mlrbg" Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.817206 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-mlrbg"] Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.830567 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-mlrbg"] Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.844984 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.853551 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.866157 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 17 16:24:29 crc kubenswrapper[4672]: E0217 16:24:29.866597 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57cae8a9-696d-48a0-9420-2a2d7ed2639f" containerName="cloudkitty-proc" Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.866613 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="57cae8a9-696d-48a0-9420-2a2d7ed2639f" containerName="cloudkitty-proc" Feb 17 16:24:29 crc kubenswrapper[4672]: E0217 16:24:29.866627 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87bfbbe6-1b72-4f9b-bb5e-ef6560acab76" containerName="barbican-api" Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.866635 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="87bfbbe6-1b72-4f9b-bb5e-ef6560acab76" containerName="barbican-api" Feb 17 16:24:29 crc kubenswrapper[4672]: E0217 16:24:29.866651 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4141331e-2369-48a5-9f23-15b35887e53b" containerName="init" Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.866657 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="4141331e-2369-48a5-9f23-15b35887e53b" 
containerName="init" Feb 17 16:24:29 crc kubenswrapper[4672]: E0217 16:24:29.866680 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4141331e-2369-48a5-9f23-15b35887e53b" containerName="dnsmasq-dns" Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.866686 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="4141331e-2369-48a5-9f23-15b35887e53b" containerName="dnsmasq-dns" Feb 17 16:24:29 crc kubenswrapper[4672]: E0217 16:24:29.866698 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87bfbbe6-1b72-4f9b-bb5e-ef6560acab76" containerName="barbican-api-log" Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.866704 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="87bfbbe6-1b72-4f9b-bb5e-ef6560acab76" containerName="barbican-api-log" Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.866873 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="4141331e-2369-48a5-9f23-15b35887e53b" containerName="dnsmasq-dns" Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.866894 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="57cae8a9-696d-48a0-9420-2a2d7ed2639f" containerName="cloudkitty-proc" Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.866903 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="87bfbbe6-1b72-4f9b-bb5e-ef6560acab76" containerName="barbican-api" Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.866913 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="87bfbbe6-1b72-4f9b-bb5e-ef6560acab76" containerName="barbican-api-log" Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.867658 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.871015 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data" Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.892875 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.959941 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4141331e-2369-48a5-9f23-15b35887e53b" path="/var/lib/kubelet/pods/4141331e-2369-48a5-9f23-15b35887e53b/volumes" Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.960765 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57cae8a9-696d-48a0-9420-2a2d7ed2639f" path="/var/lib/kubelet/pods/57cae8a9-696d-48a0-9420-2a2d7ed2639f/volumes" Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.961397 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87bfbbe6-1b72-4f9b-bb5e-ef6560acab76" path="/var/lib/kubelet/pods/87bfbbe6-1b72-4f9b-bb5e-ef6560acab76/volumes" Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.977750 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hszc6\" (UniqueName: \"kubernetes.io/projected/8660afe8-86d8-4fff-9707-c67a3ad7f842-kube-api-access-hszc6\") pod \"cloudkitty-proc-0\" (UID: \"8660afe8-86d8-4fff-9707-c67a3ad7f842\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.977841 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8660afe8-86d8-4fff-9707-c67a3ad7f842-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"8660afe8-86d8-4fff-9707-c67a3ad7f842\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.977997 4672 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8660afe8-86d8-4fff-9707-c67a3ad7f842-config-data\") pod \"cloudkitty-proc-0\" (UID: \"8660afe8-86d8-4fff-9707-c67a3ad7f842\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.978248 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/8660afe8-86d8-4fff-9707-c67a3ad7f842-certs\") pod \"cloudkitty-proc-0\" (UID: \"8660afe8-86d8-4fff-9707-c67a3ad7f842\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.978362 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8660afe8-86d8-4fff-9707-c67a3ad7f842-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"8660afe8-86d8-4fff-9707-c67a3ad7f842\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:24:29 crc kubenswrapper[4672]: I0217 16:24:29.978450 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8660afe8-86d8-4fff-9707-c67a3ad7f842-scripts\") pod \"cloudkitty-proc-0\" (UID: \"8660afe8-86d8-4fff-9707-c67a3ad7f842\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:24:30 crc kubenswrapper[4672]: I0217 16:24:30.080567 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8660afe8-86d8-4fff-9707-c67a3ad7f842-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"8660afe8-86d8-4fff-9707-c67a3ad7f842\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:24:30 crc kubenswrapper[4672]: I0217 16:24:30.080631 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8660afe8-86d8-4fff-9707-c67a3ad7f842-config-data\") pod \"cloudkitty-proc-0\" (UID: \"8660afe8-86d8-4fff-9707-c67a3ad7f842\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:24:30 crc kubenswrapper[4672]: I0217 16:24:30.080994 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/8660afe8-86d8-4fff-9707-c67a3ad7f842-certs\") pod \"cloudkitty-proc-0\" (UID: \"8660afe8-86d8-4fff-9707-c67a3ad7f842\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:24:30 crc kubenswrapper[4672]: I0217 16:24:30.081100 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8660afe8-86d8-4fff-9707-c67a3ad7f842-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"8660afe8-86d8-4fff-9707-c67a3ad7f842\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:24:30 crc kubenswrapper[4672]: I0217 16:24:30.081201 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8660afe8-86d8-4fff-9707-c67a3ad7f842-scripts\") pod \"cloudkitty-proc-0\" (UID: \"8660afe8-86d8-4fff-9707-c67a3ad7f842\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:24:30 crc kubenswrapper[4672]: I0217 16:24:30.081550 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hszc6\" (UniqueName: \"kubernetes.io/projected/8660afe8-86d8-4fff-9707-c67a3ad7f842-kube-api-access-hszc6\") pod \"cloudkitty-proc-0\" (UID: \"8660afe8-86d8-4fff-9707-c67a3ad7f842\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:24:30 crc kubenswrapper[4672]: I0217 16:24:30.086337 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8660afe8-86d8-4fff-9707-c67a3ad7f842-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"8660afe8-86d8-4fff-9707-c67a3ad7f842\") " 
pod="openstack/cloudkitty-proc-0" Feb 17 16:24:30 crc kubenswrapper[4672]: I0217 16:24:30.086364 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/8660afe8-86d8-4fff-9707-c67a3ad7f842-certs\") pod \"cloudkitty-proc-0\" (UID: \"8660afe8-86d8-4fff-9707-c67a3ad7f842\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:24:30 crc kubenswrapper[4672]: I0217 16:24:30.086859 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8660afe8-86d8-4fff-9707-c67a3ad7f842-scripts\") pod \"cloudkitty-proc-0\" (UID: \"8660afe8-86d8-4fff-9707-c67a3ad7f842\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:24:30 crc kubenswrapper[4672]: I0217 16:24:30.087330 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8660afe8-86d8-4fff-9707-c67a3ad7f842-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"8660afe8-86d8-4fff-9707-c67a3ad7f842\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:24:30 crc kubenswrapper[4672]: I0217 16:24:30.096053 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hszc6\" (UniqueName: \"kubernetes.io/projected/8660afe8-86d8-4fff-9707-c67a3ad7f842-kube-api-access-hszc6\") pod \"cloudkitty-proc-0\" (UID: \"8660afe8-86d8-4fff-9707-c67a3ad7f842\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:24:30 crc kubenswrapper[4672]: I0217 16:24:30.108221 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8660afe8-86d8-4fff-9707-c67a3ad7f842-config-data\") pod \"cloudkitty-proc-0\" (UID: \"8660afe8-86d8-4fff-9707-c67a3ad7f842\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:24:30 crc kubenswrapper[4672]: I0217 16:24:30.184343 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 17 16:24:30 crc kubenswrapper[4672]: I0217 16:24:30.514062 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 17 16:24:30 crc kubenswrapper[4672]: I0217 16:24:30.516571 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 17 16:24:30 crc kubenswrapper[4672]: I0217 16:24:30.519067 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 17 16:24:30 crc kubenswrapper[4672]: I0217 16:24:30.519258 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 17 16:24:30 crc kubenswrapper[4672]: I0217 16:24:30.519669 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-9gwbf" Feb 17 16:24:30 crc kubenswrapper[4672]: I0217 16:24:30.527808 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 17 16:24:30 crc kubenswrapper[4672]: I0217 16:24:30.599847 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/12ce4f59-1f9d-4db9-8448-0fbd0a296559-openstack-config\") pod \"openstackclient\" (UID: \"12ce4f59-1f9d-4db9-8448-0fbd0a296559\") " pod="openstack/openstackclient" Feb 17 16:24:30 crc kubenswrapper[4672]: I0217 16:24:30.599928 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12ce4f59-1f9d-4db9-8448-0fbd0a296559-combined-ca-bundle\") pod \"openstackclient\" (UID: \"12ce4f59-1f9d-4db9-8448-0fbd0a296559\") " pod="openstack/openstackclient" Feb 17 16:24:30 crc kubenswrapper[4672]: I0217 16:24:30.600100 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/12ce4f59-1f9d-4db9-8448-0fbd0a296559-openstack-config-secret\") pod \"openstackclient\" (UID: \"12ce4f59-1f9d-4db9-8448-0fbd0a296559\") " pod="openstack/openstackclient" Feb 17 16:24:30 crc kubenswrapper[4672]: I0217 16:24:30.600152 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr5cn\" (UniqueName: \"kubernetes.io/projected/12ce4f59-1f9d-4db9-8448-0fbd0a296559-kube-api-access-pr5cn\") pod \"openstackclient\" (UID: \"12ce4f59-1f9d-4db9-8448-0fbd0a296559\") " pod="openstack/openstackclient" Feb 17 16:24:30 crc kubenswrapper[4672]: I0217 16:24:30.634348 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 17 16:24:30 crc kubenswrapper[4672]: W0217 16:24:30.637528 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8660afe8_86d8_4fff_9707_c67a3ad7f842.slice/crio-e183f36fb11adf743db26bfbafa83ebf2add5899fe142a938e0834a12baedd0a WatchSource:0}: Error finding container e183f36fb11adf743db26bfbafa83ebf2add5899fe142a938e0834a12baedd0a: Status 404 returned error can't find the container with id e183f36fb11adf743db26bfbafa83ebf2add5899fe142a938e0834a12baedd0a Feb 17 16:24:30 crc kubenswrapper[4672]: I0217 16:24:30.707843 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/12ce4f59-1f9d-4db9-8448-0fbd0a296559-openstack-config-secret\") pod \"openstackclient\" (UID: \"12ce4f59-1f9d-4db9-8448-0fbd0a296559\") " pod="openstack/openstackclient" Feb 17 16:24:30 crc kubenswrapper[4672]: I0217 16:24:30.707925 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr5cn\" (UniqueName: \"kubernetes.io/projected/12ce4f59-1f9d-4db9-8448-0fbd0a296559-kube-api-access-pr5cn\") pod \"openstackclient\" 
(UID: \"12ce4f59-1f9d-4db9-8448-0fbd0a296559\") " pod="openstack/openstackclient" Feb 17 16:24:30 crc kubenswrapper[4672]: I0217 16:24:30.708032 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/12ce4f59-1f9d-4db9-8448-0fbd0a296559-openstack-config\") pod \"openstackclient\" (UID: \"12ce4f59-1f9d-4db9-8448-0fbd0a296559\") " pod="openstack/openstackclient" Feb 17 16:24:30 crc kubenswrapper[4672]: I0217 16:24:30.708203 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12ce4f59-1f9d-4db9-8448-0fbd0a296559-combined-ca-bundle\") pod \"openstackclient\" (UID: \"12ce4f59-1f9d-4db9-8448-0fbd0a296559\") " pod="openstack/openstackclient" Feb 17 16:24:30 crc kubenswrapper[4672]: I0217 16:24:30.709771 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/12ce4f59-1f9d-4db9-8448-0fbd0a296559-openstack-config\") pod \"openstackclient\" (UID: \"12ce4f59-1f9d-4db9-8448-0fbd0a296559\") " pod="openstack/openstackclient" Feb 17 16:24:30 crc kubenswrapper[4672]: I0217 16:24:30.715201 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12ce4f59-1f9d-4db9-8448-0fbd0a296559-combined-ca-bundle\") pod \"openstackclient\" (UID: \"12ce4f59-1f9d-4db9-8448-0fbd0a296559\") " pod="openstack/openstackclient" Feb 17 16:24:30 crc kubenswrapper[4672]: I0217 16:24:30.717044 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/12ce4f59-1f9d-4db9-8448-0fbd0a296559-openstack-config-secret\") pod \"openstackclient\" (UID: \"12ce4f59-1f9d-4db9-8448-0fbd0a296559\") " pod="openstack/openstackclient" Feb 17 16:24:30 crc kubenswrapper[4672]: I0217 16:24:30.733842 4672 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr5cn\" (UniqueName: \"kubernetes.io/projected/12ce4f59-1f9d-4db9-8448-0fbd0a296559-kube-api-access-pr5cn\") pod \"openstackclient\" (UID: \"12ce4f59-1f9d-4db9-8448-0fbd0a296559\") " pod="openstack/openstackclient" Feb 17 16:24:30 crc kubenswrapper[4672]: I0217 16:24:30.800235 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"8660afe8-86d8-4fff-9707-c67a3ad7f842","Type":"ContainerStarted","Data":"e183f36fb11adf743db26bfbafa83ebf2add5899fe142a938e0834a12baedd0a"} Feb 17 16:24:30 crc kubenswrapper[4672]: I0217 16:24:30.842002 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 17 16:24:31 crc kubenswrapper[4672]: I0217 16:24:31.106283 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 17 16:24:31 crc kubenswrapper[4672]: I0217 16:24:31.118142 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 17 16:24:31 crc kubenswrapper[4672]: I0217 16:24:31.126433 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 17 16:24:31 crc kubenswrapper[4672]: I0217 16:24:31.128658 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 17 16:24:31 crc kubenswrapper[4672]: I0217 16:24:31.134038 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 17 16:24:31 crc kubenswrapper[4672]: I0217 16:24:31.216980 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b02e419f-9426-4e56-9b4b-17ec702acb0a-openstack-config\") pod \"openstackclient\" (UID: \"b02e419f-9426-4e56-9b4b-17ec702acb0a\") " pod="openstack/openstackclient" Feb 17 16:24:31 crc kubenswrapper[4672]: I0217 16:24:31.217034 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b02e419f-9426-4e56-9b4b-17ec702acb0a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b02e419f-9426-4e56-9b4b-17ec702acb0a\") " pod="openstack/openstackclient" Feb 17 16:24:31 crc kubenswrapper[4672]: I0217 16:24:31.217074 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg4r5\" (UniqueName: \"kubernetes.io/projected/b02e419f-9426-4e56-9b4b-17ec702acb0a-kube-api-access-hg4r5\") pod \"openstackclient\" (UID: \"b02e419f-9426-4e56-9b4b-17ec702acb0a\") " pod="openstack/openstackclient" Feb 17 16:24:31 crc kubenswrapper[4672]: I0217 16:24:31.217229 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b02e419f-9426-4e56-9b4b-17ec702acb0a-openstack-config-secret\") pod \"openstackclient\" (UID: \"b02e419f-9426-4e56-9b4b-17ec702acb0a\") " pod="openstack/openstackclient" Feb 17 16:24:31 crc kubenswrapper[4672]: I0217 16:24:31.321730 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg4r5\" (UniqueName: 
\"kubernetes.io/projected/b02e419f-9426-4e56-9b4b-17ec702acb0a-kube-api-access-hg4r5\") pod \"openstackclient\" (UID: \"b02e419f-9426-4e56-9b4b-17ec702acb0a\") " pod="openstack/openstackclient" Feb 17 16:24:31 crc kubenswrapper[4672]: I0217 16:24:31.322293 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b02e419f-9426-4e56-9b4b-17ec702acb0a-openstack-config-secret\") pod \"openstackclient\" (UID: \"b02e419f-9426-4e56-9b4b-17ec702acb0a\") " pod="openstack/openstackclient" Feb 17 16:24:31 crc kubenswrapper[4672]: I0217 16:24:31.322370 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b02e419f-9426-4e56-9b4b-17ec702acb0a-openstack-config\") pod \"openstackclient\" (UID: \"b02e419f-9426-4e56-9b4b-17ec702acb0a\") " pod="openstack/openstackclient" Feb 17 16:24:31 crc kubenswrapper[4672]: I0217 16:24:31.322414 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b02e419f-9426-4e56-9b4b-17ec702acb0a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b02e419f-9426-4e56-9b4b-17ec702acb0a\") " pod="openstack/openstackclient" Feb 17 16:24:31 crc kubenswrapper[4672]: I0217 16:24:31.323631 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b02e419f-9426-4e56-9b4b-17ec702acb0a-openstack-config\") pod \"openstackclient\" (UID: \"b02e419f-9426-4e56-9b4b-17ec702acb0a\") " pod="openstack/openstackclient" Feb 17 16:24:31 crc kubenswrapper[4672]: I0217 16:24:31.335115 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b02e419f-9426-4e56-9b4b-17ec702acb0a-openstack-config-secret\") pod \"openstackclient\" (UID: 
\"b02e419f-9426-4e56-9b4b-17ec702acb0a\") " pod="openstack/openstackclient" Feb 17 16:24:31 crc kubenswrapper[4672]: I0217 16:24:31.341206 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b02e419f-9426-4e56-9b4b-17ec702acb0a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b02e419f-9426-4e56-9b4b-17ec702acb0a\") " pod="openstack/openstackclient" Feb 17 16:24:31 crc kubenswrapper[4672]: I0217 16:24:31.344195 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg4r5\" (UniqueName: \"kubernetes.io/projected/b02e419f-9426-4e56-9b4b-17ec702acb0a-kube-api-access-hg4r5\") pod \"openstackclient\" (UID: \"b02e419f-9426-4e56-9b4b-17ec702acb0a\") " pod="openstack/openstackclient" Feb 17 16:24:31 crc kubenswrapper[4672]: I0217 16:24:31.452413 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 17 16:24:31 crc kubenswrapper[4672]: E0217 16:24:31.514944 4672 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 17 16:24:31 crc kubenswrapper[4672]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_12ce4f59-1f9d-4db9-8448-0fbd0a296559_0(d6da581e18c6588b2757b71681cbe2cf80a0816b384ca993a9d7e50743aaaa8a): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d6da581e18c6588b2757b71681cbe2cf80a0816b384ca993a9d7e50743aaaa8a" Netns:"/var/run/netns/81433a64-9510-42df-a0d7-6844e87a3427" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=d6da581e18c6588b2757b71681cbe2cf80a0816b384ca993a9d7e50743aaaa8a;K8S_POD_UID=12ce4f59-1f9d-4db9-8448-0fbd0a296559" Path:"" ERRORED: error configuring pod [openstack/openstackclient] 
networking: [openstack/openstackclient/12ce4f59-1f9d-4db9-8448-0fbd0a296559:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[openstack/openstackclient d6da581e18c6588b2757b71681cbe2cf80a0816b384ca993a9d7e50743aaaa8a network default NAD default] [openstack/openstackclient d6da581e18c6588b2757b71681cbe2cf80a0816b384ca993a9d7e50743aaaa8a network default NAD default] failed to configure pod interface: canceled old pod sandbox waiting for OVS port binding for 0a:58:0a:d9:00:c4 [10.217.0.196/23] Feb 17 16:24:31 crc kubenswrapper[4672]: ' Feb 17 16:24:31 crc kubenswrapper[4672]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 17 16:24:31 crc kubenswrapper[4672]: > Feb 17 16:24:31 crc kubenswrapper[4672]: E0217 16:24:31.515009 4672 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 17 16:24:31 crc kubenswrapper[4672]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_12ce4f59-1f9d-4db9-8448-0fbd0a296559_0(d6da581e18c6588b2757b71681cbe2cf80a0816b384ca993a9d7e50743aaaa8a): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d6da581e18c6588b2757b71681cbe2cf80a0816b384ca993a9d7e50743aaaa8a" Netns:"/var/run/netns/81433a64-9510-42df-a0d7-6844e87a3427" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=d6da581e18c6588b2757b71681cbe2cf80a0816b384ca993a9d7e50743aaaa8a;K8S_POD_UID=12ce4f59-1f9d-4db9-8448-0fbd0a296559" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: [openstack/openstackclient/12ce4f59-1f9d-4db9-8448-0fbd0a296559:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[openstack/openstackclient d6da581e18c6588b2757b71681cbe2cf80a0816b384ca993a9d7e50743aaaa8a network default NAD default] [openstack/openstackclient d6da581e18c6588b2757b71681cbe2cf80a0816b384ca993a9d7e50743aaaa8a network default NAD default] failed to configure pod interface: canceled old pod sandbox waiting for OVS port binding for 0a:58:0a:d9:00:c4 [10.217.0.196/23] Feb 17 16:24:31 crc kubenswrapper[4672]: ' Feb 17 16:24:31 crc kubenswrapper[4672]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 17 16:24:31 crc kubenswrapper[4672]: > pod="openstack/openstackclient" Feb 17 16:24:31 crc kubenswrapper[4672]: I0217 16:24:31.822252 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 17 16:24:31 crc kubenswrapper[4672]: I0217 16:24:31.823706 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"8660afe8-86d8-4fff-9707-c67a3ad7f842","Type":"ContainerStarted","Data":"05fb584f676120ed2f119f7dd7c20eea7668da68508d408547a09d17ca3e9957"} Feb 17 16:24:31 crc kubenswrapper[4672]: I0217 16:24:31.840626 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 17 16:24:31 crc kubenswrapper[4672]: I0217 16:24:31.862750 4672 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="12ce4f59-1f9d-4db9-8448-0fbd0a296559" podUID="b02e419f-9426-4e56-9b4b-17ec702acb0a" Feb 17 16:24:31 crc kubenswrapper[4672]: I0217 16:24:31.866131 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=2.866113657 podStartE2EDuration="2.866113657s" podCreationTimestamp="2026-02-17 16:24:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:24:31.856833503 +0000 UTC m=+1280.610922235" watchObservedRunningTime="2026-02-17 16:24:31.866113657 +0000 UTC m=+1280.620202389" Feb 17 16:24:31 crc kubenswrapper[4672]: I0217 16:24:31.940666 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 17 16:24:31 crc kubenswrapper[4672]: I0217 16:24:31.941328 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/12ce4f59-1f9d-4db9-8448-0fbd0a296559-openstack-config\") pod \"12ce4f59-1f9d-4db9-8448-0fbd0a296559\" (UID: \"12ce4f59-1f9d-4db9-8448-0fbd0a296559\") " Feb 17 16:24:31 crc kubenswrapper[4672]: I0217 16:24:31.941374 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/12ce4f59-1f9d-4db9-8448-0fbd0a296559-openstack-config-secret\") pod \"12ce4f59-1f9d-4db9-8448-0fbd0a296559\" (UID: \"12ce4f59-1f9d-4db9-8448-0fbd0a296559\") " Feb 17 16:24:31 crc kubenswrapper[4672]: I0217 16:24:31.941462 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/12ce4f59-1f9d-4db9-8448-0fbd0a296559-combined-ca-bundle\") pod \"12ce4f59-1f9d-4db9-8448-0fbd0a296559\" (UID: \"12ce4f59-1f9d-4db9-8448-0fbd0a296559\") " Feb 17 16:24:31 crc kubenswrapper[4672]: I0217 16:24:31.941502 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pr5cn\" (UniqueName: \"kubernetes.io/projected/12ce4f59-1f9d-4db9-8448-0fbd0a296559-kube-api-access-pr5cn\") pod \"12ce4f59-1f9d-4db9-8448-0fbd0a296559\" (UID: \"12ce4f59-1f9d-4db9-8448-0fbd0a296559\") " Feb 17 16:24:31 crc kubenswrapper[4672]: I0217 16:24:31.943381 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12ce4f59-1f9d-4db9-8448-0fbd0a296559-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "12ce4f59-1f9d-4db9-8448-0fbd0a296559" (UID: "12ce4f59-1f9d-4db9-8448-0fbd0a296559"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:24:31 crc kubenswrapper[4672]: I0217 16:24:31.946929 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12ce4f59-1f9d-4db9-8448-0fbd0a296559-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "12ce4f59-1f9d-4db9-8448-0fbd0a296559" (UID: "12ce4f59-1f9d-4db9-8448-0fbd0a296559"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:31 crc kubenswrapper[4672]: I0217 16:24:31.952012 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12ce4f59-1f9d-4db9-8448-0fbd0a296559-kube-api-access-pr5cn" (OuterVolumeSpecName: "kube-api-access-pr5cn") pod "12ce4f59-1f9d-4db9-8448-0fbd0a296559" (UID: "12ce4f59-1f9d-4db9-8448-0fbd0a296559"). InnerVolumeSpecName "kube-api-access-pr5cn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:24:31 crc kubenswrapper[4672]: I0217 16:24:31.952461 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12ce4f59-1f9d-4db9-8448-0fbd0a296559-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "12ce4f59-1f9d-4db9-8448-0fbd0a296559" (UID: "12ce4f59-1f9d-4db9-8448-0fbd0a296559"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:31 crc kubenswrapper[4672]: I0217 16:24:31.970733 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12ce4f59-1f9d-4db9-8448-0fbd0a296559" path="/var/lib/kubelet/pods/12ce4f59-1f9d-4db9-8448-0fbd0a296559/volumes" Feb 17 16:24:32 crc kubenswrapper[4672]: I0217 16:24:32.044266 4672 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/12ce4f59-1f9d-4db9-8448-0fbd0a296559-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:32 crc kubenswrapper[4672]: I0217 16:24:32.044452 4672 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/12ce4f59-1f9d-4db9-8448-0fbd0a296559-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:32 crc kubenswrapper[4672]: I0217 16:24:32.044528 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12ce4f59-1f9d-4db9-8448-0fbd0a296559-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:32 crc kubenswrapper[4672]: I0217 16:24:32.044614 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pr5cn\" (UniqueName: \"kubernetes.io/projected/12ce4f59-1f9d-4db9-8448-0fbd0a296559-kube-api-access-pr5cn\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:32 crc kubenswrapper[4672]: I0217 16:24:32.398971 4672 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/placement-6fd764cdf6-q8qss" Feb 17 16:24:32 crc kubenswrapper[4672]: I0217 16:24:32.399258 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6fd764cdf6-q8qss" Feb 17 16:24:32 crc kubenswrapper[4672]: I0217 16:24:32.543115 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6cc66b5c9b-dpjsg"] Feb 17 16:24:32 crc kubenswrapper[4672]: I0217 16:24:32.543354 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6cc66b5c9b-dpjsg" podUID="12d8802b-c666-49da-ac6f-cd885f46f9f0" containerName="placement-log" containerID="cri-o://12b2de443d40f1eb4dd2fd2516396777577d97a923427a4beab259f2e173d6bf" gracePeriod=30 Feb 17 16:24:32 crc kubenswrapper[4672]: I0217 16:24:32.543649 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6cc66b5c9b-dpjsg" podUID="12d8802b-c666-49da-ac6f-cd885f46f9f0" containerName="placement-api" containerID="cri-o://5f2b62ca220bf20e868e16ffc57ed2e7cb6589839f91d9a7b7d458cb26f03373" gracePeriod=30 Feb 17 16:24:32 crc kubenswrapper[4672]: I0217 16:24:32.849710 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"b02e419f-9426-4e56-9b4b-17ec702acb0a","Type":"ContainerStarted","Data":"cad2bfe6580b8c2f9350be379318fb766a5423434fb4cc57d52e8da8f5f730f8"} Feb 17 16:24:32 crc kubenswrapper[4672]: I0217 16:24:32.853128 4672 generic.go:334] "Generic (PLEG): container finished" podID="12d8802b-c666-49da-ac6f-cd885f46f9f0" containerID="12b2de443d40f1eb4dd2fd2516396777577d97a923427a4beab259f2e173d6bf" exitCode=143 Feb 17 16:24:32 crc kubenswrapper[4672]: I0217 16:24:32.853155 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6cc66b5c9b-dpjsg" 
event={"ID":"12d8802b-c666-49da-ac6f-cd885f46f9f0","Type":"ContainerDied","Data":"12b2de443d40f1eb4dd2fd2516396777577d97a923427a4beab259f2e173d6bf"} Feb 17 16:24:32 crc kubenswrapper[4672]: I0217 16:24:32.853187 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 17 16:24:32 crc kubenswrapper[4672]: I0217 16:24:32.862621 4672 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="12ce4f59-1f9d-4db9-8448-0fbd0a296559" podUID="b02e419f-9426-4e56-9b4b-17ec702acb0a" Feb 17 16:24:32 crc kubenswrapper[4672]: I0217 16:24:32.879895 4672 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="12ce4f59-1f9d-4db9-8448-0fbd0a296559" podUID="b02e419f-9426-4e56-9b4b-17ec702acb0a" Feb 17 16:24:33 crc kubenswrapper[4672]: E0217 16:24:33.471082 4672 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6ad1f8c_ef18_4bd8_ac43_b8f1151277f6.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6ad1f8c_ef18_4bd8_ac43_b8f1151277f6.slice/crio-4fede41a6b3d442704cc0b64a71cfcde9ecee5251694f4b5e0c64343367e5adb\": RecentStats: unable to find data in memory cache]" Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.452225 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.608659 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54fe368a-64c0-447e-969b-06cc444a1bd8-scripts\") pod \"54fe368a-64c0-447e-969b-06cc444a1bd8\" (UID: \"54fe368a-64c0-447e-969b-06cc444a1bd8\") " Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.608730 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54fe368a-64c0-447e-969b-06cc444a1bd8-combined-ca-bundle\") pod \"54fe368a-64c0-447e-969b-06cc444a1bd8\" (UID: \"54fe368a-64c0-447e-969b-06cc444a1bd8\") " Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.608780 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54fe368a-64c0-447e-969b-06cc444a1bd8-logs\") pod \"54fe368a-64c0-447e-969b-06cc444a1bd8\" (UID: \"54fe368a-64c0-447e-969b-06cc444a1bd8\") " Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.608831 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtrf8\" (UniqueName: \"kubernetes.io/projected/54fe368a-64c0-447e-969b-06cc444a1bd8-kube-api-access-gtrf8\") pod \"54fe368a-64c0-447e-969b-06cc444a1bd8\" (UID: \"54fe368a-64c0-447e-969b-06cc444a1bd8\") " Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.608906 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54fe368a-64c0-447e-969b-06cc444a1bd8-config-data\") pod \"54fe368a-64c0-447e-969b-06cc444a1bd8\" (UID: \"54fe368a-64c0-447e-969b-06cc444a1bd8\") " Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.608950 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/54fe368a-64c0-447e-969b-06cc444a1bd8-config-data-custom\") pod \"54fe368a-64c0-447e-969b-06cc444a1bd8\" (UID: \"54fe368a-64c0-447e-969b-06cc444a1bd8\") " Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.608970 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/54fe368a-64c0-447e-969b-06cc444a1bd8-etc-machine-id\") pod \"54fe368a-64c0-447e-969b-06cc444a1bd8\" (UID: \"54fe368a-64c0-447e-969b-06cc444a1bd8\") " Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.609429 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54fe368a-64c0-447e-969b-06cc444a1bd8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "54fe368a-64c0-447e-969b-06cc444a1bd8" (UID: "54fe368a-64c0-447e-969b-06cc444a1bd8"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.614679 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54fe368a-64c0-447e-969b-06cc444a1bd8-scripts" (OuterVolumeSpecName: "scripts") pod "54fe368a-64c0-447e-969b-06cc444a1bd8" (UID: "54fe368a-64c0-447e-969b-06cc444a1bd8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.619949 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54fe368a-64c0-447e-969b-06cc444a1bd8-logs" (OuterVolumeSpecName: "logs") pod "54fe368a-64c0-447e-969b-06cc444a1bd8" (UID: "54fe368a-64c0-447e-969b-06cc444a1bd8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.621807 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54fe368a-64c0-447e-969b-06cc444a1bd8-kube-api-access-gtrf8" (OuterVolumeSpecName: "kube-api-access-gtrf8") pod "54fe368a-64c0-447e-969b-06cc444a1bd8" (UID: "54fe368a-64c0-447e-969b-06cc444a1bd8"). InnerVolumeSpecName "kube-api-access-gtrf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.634929 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54fe368a-64c0-447e-969b-06cc444a1bd8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "54fe368a-64c0-447e-969b-06cc444a1bd8" (UID: "54fe368a-64c0-447e-969b-06cc444a1bd8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.651146 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54fe368a-64c0-447e-969b-06cc444a1bd8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54fe368a-64c0-447e-969b-06cc444a1bd8" (UID: "54fe368a-64c0-447e-969b-06cc444a1bd8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.690013 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54fe368a-64c0-447e-969b-06cc444a1bd8-config-data" (OuterVolumeSpecName: "config-data") pod "54fe368a-64c0-447e-969b-06cc444a1bd8" (UID: "54fe368a-64c0-447e-969b-06cc444a1bd8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.711268 4672 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54fe368a-64c0-447e-969b-06cc444a1bd8-logs\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.711300 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtrf8\" (UniqueName: \"kubernetes.io/projected/54fe368a-64c0-447e-969b-06cc444a1bd8-kube-api-access-gtrf8\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.711312 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54fe368a-64c0-447e-969b-06cc444a1bd8-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.711321 4672 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54fe368a-64c0-447e-969b-06cc444a1bd8-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.711330 4672 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/54fe368a-64c0-447e-969b-06cc444a1bd8-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.711338 4672 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54fe368a-64c0-447e-969b-06cc444a1bd8-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.711347 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54fe368a-64c0-447e-969b-06cc444a1bd8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.876303 4672 generic.go:334] "Generic (PLEG): container finished" podID="54fe368a-64c0-447e-969b-06cc444a1bd8" containerID="b4602447459674aa4f8ee42f5265444575c54ef38969fde41a64ebae307376a5" exitCode=137
Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.876344 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"54fe368a-64c0-447e-969b-06cc444a1bd8","Type":"ContainerDied","Data":"b4602447459674aa4f8ee42f5265444575c54ef38969fde41a64ebae307376a5"}
Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.876372 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"54fe368a-64c0-447e-969b-06cc444a1bd8","Type":"ContainerDied","Data":"a0a2bcab6637a0686543ca1aad0384f7079e8a3f1e1edd4b776609093c98b26d"}
Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.876389 4672 scope.go:117] "RemoveContainer" containerID="b4602447459674aa4f8ee42f5265444575c54ef38969fde41a64ebae307376a5"
Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.876410 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.925470 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.934756 4672 scope.go:117] "RemoveContainer" containerID="b8e8eb412fa2c12dcdafdf2863a3b3efcae5847a6c678cf55b8b039bae580a01"
Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.945831 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.955026 4672 scope.go:117] "RemoveContainer" containerID="b4602447459674aa4f8ee42f5265444575c54ef38969fde41a64ebae307376a5"
Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.955128 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Feb 17 16:24:34 crc kubenswrapper[4672]: E0217 16:24:34.955458 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54fe368a-64c0-447e-969b-06cc444a1bd8" containerName="cinder-api-log"
Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.955474 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="54fe368a-64c0-447e-969b-06cc444a1bd8" containerName="cinder-api-log"
Feb 17 16:24:34 crc kubenswrapper[4672]: E0217 16:24:34.955490 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54fe368a-64c0-447e-969b-06cc444a1bd8" containerName="cinder-api"
Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.955496 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="54fe368a-64c0-447e-969b-06cc444a1bd8" containerName="cinder-api"
Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.956318 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="54fe368a-64c0-447e-969b-06cc444a1bd8" containerName="cinder-api"
Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.956352 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="54fe368a-64c0-447e-969b-06cc444a1bd8" containerName="cinder-api-log"
Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.957461 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 17 16:24:34 crc kubenswrapper[4672]: E0217 16:24:34.958593 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4602447459674aa4f8ee42f5265444575c54ef38969fde41a64ebae307376a5\": container with ID starting with b4602447459674aa4f8ee42f5265444575c54ef38969fde41a64ebae307376a5 not found: ID does not exist" containerID="b4602447459674aa4f8ee42f5265444575c54ef38969fde41a64ebae307376a5"
Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.958623 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4602447459674aa4f8ee42f5265444575c54ef38969fde41a64ebae307376a5"} err="failed to get container status \"b4602447459674aa4f8ee42f5265444575c54ef38969fde41a64ebae307376a5\": rpc error: code = NotFound desc = could not find container \"b4602447459674aa4f8ee42f5265444575c54ef38969fde41a64ebae307376a5\": container with ID starting with b4602447459674aa4f8ee42f5265444575c54ef38969fde41a64ebae307376a5 not found: ID does not exist"
Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.958643 4672 scope.go:117] "RemoveContainer" containerID="b8e8eb412fa2c12dcdafdf2863a3b3efcae5847a6c678cf55b8b039bae580a01"
Feb 17 16:24:34 crc kubenswrapper[4672]: E0217 16:24:34.960420 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8e8eb412fa2c12dcdafdf2863a3b3efcae5847a6c678cf55b8b039bae580a01\": container with ID starting with b8e8eb412fa2c12dcdafdf2863a3b3efcae5847a6c678cf55b8b039bae580a01 not found: ID does not exist" containerID="b8e8eb412fa2c12dcdafdf2863a3b3efcae5847a6c678cf55b8b039bae580a01"
Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.960462 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8e8eb412fa2c12dcdafdf2863a3b3efcae5847a6c678cf55b8b039bae580a01"} err="failed to get container status \"b8e8eb412fa2c12dcdafdf2863a3b3efcae5847a6c678cf55b8b039bae580a01\": rpc error: code = NotFound desc = could not find container \"b8e8eb412fa2c12dcdafdf2863a3b3efcae5847a6c678cf55b8b039bae580a01\": container with ID starting with b8e8eb412fa2c12dcdafdf2863a3b3efcae5847a6c678cf55b8b039bae580a01 not found: ID does not exist"
Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.961195 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.961237 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.961202 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Feb 17 16:24:34 crc kubenswrapper[4672]: I0217 16:24:34.966070 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.016576 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d024e6ad-924f-42a9-94e3-21cf7d00f62f-logs\") pod \"cinder-api-0\" (UID: \"d024e6ad-924f-42a9-94e3-21cf7d00f62f\") " pod="openstack/cinder-api-0"
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.016632 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l4jp\" (UniqueName: \"kubernetes.io/projected/d024e6ad-924f-42a9-94e3-21cf7d00f62f-kube-api-access-5l4jp\") pod \"cinder-api-0\" (UID: \"d024e6ad-924f-42a9-94e3-21cf7d00f62f\") " pod="openstack/cinder-api-0"
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.016709 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d024e6ad-924f-42a9-94e3-21cf7d00f62f-config-data-custom\") pod \"cinder-api-0\" (UID: \"d024e6ad-924f-42a9-94e3-21cf7d00f62f\") " pod="openstack/cinder-api-0"
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.016775 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d024e6ad-924f-42a9-94e3-21cf7d00f62f-config-data\") pod \"cinder-api-0\" (UID: \"d024e6ad-924f-42a9-94e3-21cf7d00f62f\") " pod="openstack/cinder-api-0"
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.016800 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d024e6ad-924f-42a9-94e3-21cf7d00f62f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d024e6ad-924f-42a9-94e3-21cf7d00f62f\") " pod="openstack/cinder-api-0"
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.016833 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d024e6ad-924f-42a9-94e3-21cf7d00f62f-scripts\") pod \"cinder-api-0\" (UID: \"d024e6ad-924f-42a9-94e3-21cf7d00f62f\") " pod="openstack/cinder-api-0"
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.016849 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d024e6ad-924f-42a9-94e3-21cf7d00f62f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d024e6ad-924f-42a9-94e3-21cf7d00f62f\") " pod="openstack/cinder-api-0"
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.016917 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d024e6ad-924f-42a9-94e3-21cf7d00f62f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d024e6ad-924f-42a9-94e3-21cf7d00f62f\") " pod="openstack/cinder-api-0"
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.016938 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d024e6ad-924f-42a9-94e3-21cf7d00f62f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d024e6ad-924f-42a9-94e3-21cf7d00f62f\") " pod="openstack/cinder-api-0"
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.118709 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d024e6ad-924f-42a9-94e3-21cf7d00f62f-logs\") pod \"cinder-api-0\" (UID: \"d024e6ad-924f-42a9-94e3-21cf7d00f62f\") " pod="openstack/cinder-api-0"
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.118778 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l4jp\" (UniqueName: \"kubernetes.io/projected/d024e6ad-924f-42a9-94e3-21cf7d00f62f-kube-api-access-5l4jp\") pod \"cinder-api-0\" (UID: \"d024e6ad-924f-42a9-94e3-21cf7d00f62f\") " pod="openstack/cinder-api-0"
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.118842 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d024e6ad-924f-42a9-94e3-21cf7d00f62f-config-data-custom\") pod \"cinder-api-0\" (UID: \"d024e6ad-924f-42a9-94e3-21cf7d00f62f\") " pod="openstack/cinder-api-0"
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.118896 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d024e6ad-924f-42a9-94e3-21cf7d00f62f-config-data\") pod \"cinder-api-0\" (UID: \"d024e6ad-924f-42a9-94e3-21cf7d00f62f\") " pod="openstack/cinder-api-0"
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.118925 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d024e6ad-924f-42a9-94e3-21cf7d00f62f-scripts\") pod \"cinder-api-0\" (UID: \"d024e6ad-924f-42a9-94e3-21cf7d00f62f\") " pod="openstack/cinder-api-0"
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.118939 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d024e6ad-924f-42a9-94e3-21cf7d00f62f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d024e6ad-924f-42a9-94e3-21cf7d00f62f\") " pod="openstack/cinder-api-0"
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.118955 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d024e6ad-924f-42a9-94e3-21cf7d00f62f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d024e6ad-924f-42a9-94e3-21cf7d00f62f\") " pod="openstack/cinder-api-0"
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.118994 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d024e6ad-924f-42a9-94e3-21cf7d00f62f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d024e6ad-924f-42a9-94e3-21cf7d00f62f\") " pod="openstack/cinder-api-0"
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.119018 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d024e6ad-924f-42a9-94e3-21cf7d00f62f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d024e6ad-924f-42a9-94e3-21cf7d00f62f\") " pod="openstack/cinder-api-0"
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.119214 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d024e6ad-924f-42a9-94e3-21cf7d00f62f-logs\") pod \"cinder-api-0\" (UID: \"d024e6ad-924f-42a9-94e3-21cf7d00f62f\") " pod="openstack/cinder-api-0"
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.119282 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d024e6ad-924f-42a9-94e3-21cf7d00f62f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d024e6ad-924f-42a9-94e3-21cf7d00f62f\") " pod="openstack/cinder-api-0"
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.124963 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d024e6ad-924f-42a9-94e3-21cf7d00f62f-scripts\") pod \"cinder-api-0\" (UID: \"d024e6ad-924f-42a9-94e3-21cf7d00f62f\") " pod="openstack/cinder-api-0"
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.125473 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d024e6ad-924f-42a9-94e3-21cf7d00f62f-config-data-custom\") pod \"cinder-api-0\" (UID: \"d024e6ad-924f-42a9-94e3-21cf7d00f62f\") " pod="openstack/cinder-api-0"
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.126001 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d024e6ad-924f-42a9-94e3-21cf7d00f62f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d024e6ad-924f-42a9-94e3-21cf7d00f62f\") " pod="openstack/cinder-api-0"
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.128664 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d024e6ad-924f-42a9-94e3-21cf7d00f62f-config-data\") pod \"cinder-api-0\" (UID: \"d024e6ad-924f-42a9-94e3-21cf7d00f62f\") " pod="openstack/cinder-api-0"
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.130122 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d024e6ad-924f-42a9-94e3-21cf7d00f62f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d024e6ad-924f-42a9-94e3-21cf7d00f62f\") " pod="openstack/cinder-api-0"
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.138868 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l4jp\" (UniqueName: \"kubernetes.io/projected/d024e6ad-924f-42a9-94e3-21cf7d00f62f-kube-api-access-5l4jp\") pod \"cinder-api-0\" (UID: \"d024e6ad-924f-42a9-94e3-21cf7d00f62f\") " pod="openstack/cinder-api-0"
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.145595 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d024e6ad-924f-42a9-94e3-21cf7d00f62f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d024e6ad-924f-42a9-94e3-21cf7d00f62f\") " pod="openstack/cinder-api-0"
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.301275 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.790021 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-844c787c5c-l2cm9"]
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.791906 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-844c787c5c-l2cm9"
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.796365 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.805708 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.805806 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.811834 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-844c787c5c-l2cm9"]
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.850372 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 17 16:24:35 crc kubenswrapper[4672]: W0217 16:24:35.856584 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd024e6ad_924f_42a9_94e3_21cf7d00f62f.slice/crio-f237a567d324ee43f017f737b1de78474854cbad5ed447caa893237f56e19fcb WatchSource:0}: Error finding container f237a567d324ee43f017f737b1de78474854cbad5ed447caa893237f56e19fcb: Status 404 returned error can't find the container with id f237a567d324ee43f017f737b1de78474854cbad5ed447caa893237f56e19fcb
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.906617 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d024e6ad-924f-42a9-94e3-21cf7d00f62f","Type":"ContainerStarted","Data":"f237a567d324ee43f017f737b1de78474854cbad5ed447caa893237f56e19fcb"}
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.918631 4672 generic.go:334] "Generic (PLEG): container finished" podID="12d8802b-c666-49da-ac6f-cd885f46f9f0" containerID="5f2b62ca220bf20e868e16ffc57ed2e7cb6589839f91d9a7b7d458cb26f03373" exitCode=0
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.918701 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6cc66b5c9b-dpjsg" event={"ID":"12d8802b-c666-49da-ac6f-cd885f46f9f0","Type":"ContainerDied","Data":"5f2b62ca220bf20e868e16ffc57ed2e7cb6589839f91d9a7b7d458cb26f03373"}
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.932653 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f2974b4-c465-4d64-b3b3-e79e4d1b74a2-log-httpd\") pod \"swift-proxy-844c787c5c-l2cm9\" (UID: \"9f2974b4-c465-4d64-b3b3-e79e4d1b74a2\") " pod="openstack/swift-proxy-844c787c5c-l2cm9"
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.932725 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wq5c\" (UniqueName: \"kubernetes.io/projected/9f2974b4-c465-4d64-b3b3-e79e4d1b74a2-kube-api-access-5wq5c\") pod \"swift-proxy-844c787c5c-l2cm9\" (UID: \"9f2974b4-c465-4d64-b3b3-e79e4d1b74a2\") " pod="openstack/swift-proxy-844c787c5c-l2cm9"
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.932768 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9f2974b4-c465-4d64-b3b3-e79e4d1b74a2-etc-swift\") pod \"swift-proxy-844c787c5c-l2cm9\" (UID: \"9f2974b4-c465-4d64-b3b3-e79e4d1b74a2\") " pod="openstack/swift-proxy-844c787c5c-l2cm9"
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.932800 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f2974b4-c465-4d64-b3b3-e79e4d1b74a2-internal-tls-certs\") pod \"swift-proxy-844c787c5c-l2cm9\" (UID: \"9f2974b4-c465-4d64-b3b3-e79e4d1b74a2\") " pod="openstack/swift-proxy-844c787c5c-l2cm9"
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.932853 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f2974b4-c465-4d64-b3b3-e79e4d1b74a2-combined-ca-bundle\") pod \"swift-proxy-844c787c5c-l2cm9\" (UID: \"9f2974b4-c465-4d64-b3b3-e79e4d1b74a2\") " pod="openstack/swift-proxy-844c787c5c-l2cm9"
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.932869 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f2974b4-c465-4d64-b3b3-e79e4d1b74a2-config-data\") pod \"swift-proxy-844c787c5c-l2cm9\" (UID: \"9f2974b4-c465-4d64-b3b3-e79e4d1b74a2\") " pod="openstack/swift-proxy-844c787c5c-l2cm9"
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.932898 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f2974b4-c465-4d64-b3b3-e79e4d1b74a2-public-tls-certs\") pod \"swift-proxy-844c787c5c-l2cm9\" (UID: \"9f2974b4-c465-4d64-b3b3-e79e4d1b74a2\") " pod="openstack/swift-proxy-844c787c5c-l2cm9"
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.932919 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f2974b4-c465-4d64-b3b3-e79e4d1b74a2-run-httpd\") pod \"swift-proxy-844c787c5c-l2cm9\" (UID: \"9f2974b4-c465-4d64-b3b3-e79e4d1b74a2\") " pod="openstack/swift-proxy-844c787c5c-l2cm9"
Feb 17 16:24:35 crc kubenswrapper[4672]: I0217 16:24:35.977552 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54fe368a-64c0-447e-969b-06cc444a1bd8" path="/var/lib/kubelet/pods/54fe368a-64c0-447e-969b-06cc444a1bd8/volumes"
Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.034867 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9f2974b4-c465-4d64-b3b3-e79e4d1b74a2-etc-swift\") pod \"swift-proxy-844c787c5c-l2cm9\" (UID: \"9f2974b4-c465-4d64-b3b3-e79e4d1b74a2\") " pod="openstack/swift-proxy-844c787c5c-l2cm9"
Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.034925 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f2974b4-c465-4d64-b3b3-e79e4d1b74a2-internal-tls-certs\") pod \"swift-proxy-844c787c5c-l2cm9\" (UID: \"9f2974b4-c465-4d64-b3b3-e79e4d1b74a2\") " pod="openstack/swift-proxy-844c787c5c-l2cm9"
Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.034982 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f2974b4-c465-4d64-b3b3-e79e4d1b74a2-combined-ca-bundle\") pod \"swift-proxy-844c787c5c-l2cm9\" (UID: \"9f2974b4-c465-4d64-b3b3-e79e4d1b74a2\") " pod="openstack/swift-proxy-844c787c5c-l2cm9"
Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.034998 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f2974b4-c465-4d64-b3b3-e79e4d1b74a2-config-data\") pod \"swift-proxy-844c787c5c-l2cm9\" (UID: \"9f2974b4-c465-4d64-b3b3-e79e4d1b74a2\") " pod="openstack/swift-proxy-844c787c5c-l2cm9"
Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.035046 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f2974b4-c465-4d64-b3b3-e79e4d1b74a2-public-tls-certs\") pod \"swift-proxy-844c787c5c-l2cm9\" (UID: \"9f2974b4-c465-4d64-b3b3-e79e4d1b74a2\") " pod="openstack/swift-proxy-844c787c5c-l2cm9"
Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.035069 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f2974b4-c465-4d64-b3b3-e79e4d1b74a2-run-httpd\") pod \"swift-proxy-844c787c5c-l2cm9\" (UID: \"9f2974b4-c465-4d64-b3b3-e79e4d1b74a2\") " pod="openstack/swift-proxy-844c787c5c-l2cm9"
Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.035106 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f2974b4-c465-4d64-b3b3-e79e4d1b74a2-log-httpd\") pod \"swift-proxy-844c787c5c-l2cm9\" (UID: \"9f2974b4-c465-4d64-b3b3-e79e4d1b74a2\") " pod="openstack/swift-proxy-844c787c5c-l2cm9"
Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.035170 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wq5c\" (UniqueName: \"kubernetes.io/projected/9f2974b4-c465-4d64-b3b3-e79e4d1b74a2-kube-api-access-5wq5c\") pod \"swift-proxy-844c787c5c-l2cm9\" (UID: \"9f2974b4-c465-4d64-b3b3-e79e4d1b74a2\") " pod="openstack/swift-proxy-844c787c5c-l2cm9"
Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.036024 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f2974b4-c465-4d64-b3b3-e79e4d1b74a2-log-httpd\") pod \"swift-proxy-844c787c5c-l2cm9\" (UID: \"9f2974b4-c465-4d64-b3b3-e79e4d1b74a2\") " pod="openstack/swift-proxy-844c787c5c-l2cm9"
Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.036881 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f2974b4-c465-4d64-b3b3-e79e4d1b74a2-run-httpd\") pod \"swift-proxy-844c787c5c-l2cm9\" (UID: \"9f2974b4-c465-4d64-b3b3-e79e4d1b74a2\") " pod="openstack/swift-proxy-844c787c5c-l2cm9"
Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.047385 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f2974b4-c465-4d64-b3b3-e79e4d1b74a2-internal-tls-certs\") pod \"swift-proxy-844c787c5c-l2cm9\" (UID: \"9f2974b4-c465-4d64-b3b3-e79e4d1b74a2\") " pod="openstack/swift-proxy-844c787c5c-l2cm9"
Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.047505 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f2974b4-c465-4d64-b3b3-e79e4d1b74a2-config-data\") pod \"swift-proxy-844c787c5c-l2cm9\" (UID: \"9f2974b4-c465-4d64-b3b3-e79e4d1b74a2\") " pod="openstack/swift-proxy-844c787c5c-l2cm9"
Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.051967 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9f2974b4-c465-4d64-b3b3-e79e4d1b74a2-etc-swift\") pod \"swift-proxy-844c787c5c-l2cm9\" (UID: \"9f2974b4-c465-4d64-b3b3-e79e4d1b74a2\") " pod="openstack/swift-proxy-844c787c5c-l2cm9"
Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.080151 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f2974b4-c465-4d64-b3b3-e79e4d1b74a2-public-tls-certs\") pod \"swift-proxy-844c787c5c-l2cm9\" (UID: \"9f2974b4-c465-4d64-b3b3-e79e4d1b74a2\") " pod="openstack/swift-proxy-844c787c5c-l2cm9"
Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.083348 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wq5c\" (UniqueName: \"kubernetes.io/projected/9f2974b4-c465-4d64-b3b3-e79e4d1b74a2-kube-api-access-5wq5c\") pod \"swift-proxy-844c787c5c-l2cm9\" (UID: \"9f2974b4-c465-4d64-b3b3-e79e4d1b74a2\") " pod="openstack/swift-proxy-844c787c5c-l2cm9"
Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.104673 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f2974b4-c465-4d64-b3b3-e79e4d1b74a2-combined-ca-bundle\") pod \"swift-proxy-844c787c5c-l2cm9\" (UID: \"9f2974b4-c465-4d64-b3b3-e79e4d1b74a2\") " pod="openstack/swift-proxy-844c787c5c-l2cm9"
Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.116125 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-844c787c5c-l2cm9"
Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.293118 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6cc66b5c9b-dpjsg"
Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.445728 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12d8802b-c666-49da-ac6f-cd885f46f9f0-logs\") pod \"12d8802b-c666-49da-ac6f-cd885f46f9f0\" (UID: \"12d8802b-c666-49da-ac6f-cd885f46f9f0\") "
Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.446088 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/12d8802b-c666-49da-ac6f-cd885f46f9f0-public-tls-certs\") pod \"12d8802b-c666-49da-ac6f-cd885f46f9f0\" (UID: \"12d8802b-c666-49da-ac6f-cd885f46f9f0\") "
Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.446154 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/12d8802b-c666-49da-ac6f-cd885f46f9f0-internal-tls-certs\") pod \"12d8802b-c666-49da-ac6f-cd885f46f9f0\" (UID: \"12d8802b-c666-49da-ac6f-cd885f46f9f0\") "
Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.446189 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12d8802b-c666-49da-ac6f-cd885f46f9f0-combined-ca-bundle\") pod \"12d8802b-c666-49da-ac6f-cd885f46f9f0\" (UID: \"12d8802b-c666-49da-ac6f-cd885f46f9f0\") "
Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.446283 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12d8802b-c666-49da-ac6f-cd885f46f9f0-config-data\") pod \"12d8802b-c666-49da-ac6f-cd885f46f9f0\" (UID: \"12d8802b-c666-49da-ac6f-cd885f46f9f0\") "
Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.446280 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12d8802b-c666-49da-ac6f-cd885f46f9f0-logs" (OuterVolumeSpecName: "logs") pod "12d8802b-c666-49da-ac6f-cd885f46f9f0" (UID: "12d8802b-c666-49da-ac6f-cd885f46f9f0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.446312 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12d8802b-c666-49da-ac6f-cd885f46f9f0-scripts\") pod \"12d8802b-c666-49da-ac6f-cd885f46f9f0\" (UID: \"12d8802b-c666-49da-ac6f-cd885f46f9f0\") "
Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.446375 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfkfn\" (UniqueName: \"kubernetes.io/projected/12d8802b-c666-49da-ac6f-cd885f46f9f0-kube-api-access-vfkfn\") pod \"12d8802b-c666-49da-ac6f-cd885f46f9f0\" (UID: \"12d8802b-c666-49da-ac6f-cd885f46f9f0\") "
Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.446777 4672 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12d8802b-c666-49da-ac6f-cd885f46f9f0-logs\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.453087 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12d8802b-c666-49da-ac6f-cd885f46f9f0-scripts" (OuterVolumeSpecName: "scripts") pod "12d8802b-c666-49da-ac6f-cd885f46f9f0" (UID: "12d8802b-c666-49da-ac6f-cd885f46f9f0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.455048 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12d8802b-c666-49da-ac6f-cd885f46f9f0-kube-api-access-vfkfn" (OuterVolumeSpecName: "kube-api-access-vfkfn") pod "12d8802b-c666-49da-ac6f-cd885f46f9f0" (UID: "12d8802b-c666-49da-ac6f-cd885f46f9f0"). InnerVolumeSpecName "kube-api-access-vfkfn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.471465 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.474037 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="261e82ba-d901-48e9-9890-768595c3e9df" containerName="ceilometer-central-agent" containerID="cri-o://2ae893b986748b9f358e2e0f5f2eda2e2e881935f9b3538bd9ccd113b9ae6f5d" gracePeriod=30
Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.474153 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="261e82ba-d901-48e9-9890-768595c3e9df" containerName="ceilometer-notification-agent" containerID="cri-o://291bfa87a02917eb4de97c1b46645bab031f200ccf9a9bb7eb9d3ba35f0f5f06" gracePeriod=30
Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.474143 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="261e82ba-d901-48e9-9890-768595c3e9df" containerName="sg-core" containerID="cri-o://76bcbfd3216cfc1f37682ca551e3fa5d5fe00389def7eda63afec5c06fe5de87" gracePeriod=30
Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.474283 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="261e82ba-d901-48e9-9890-768595c3e9df" containerName="proxy-httpd" containerID="cri-o://372e811646bfd7b1d3aab1076f1c0032e9fb71de3e71b2cbef15c4059bf12a48" gracePeriod=30
Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.539139 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12d8802b-c666-49da-ac6f-cd885f46f9f0-config-data" (OuterVolumeSpecName: "config-data") pod "12d8802b-c666-49da-ac6f-cd885f46f9f0" (UID: "12d8802b-c666-49da-ac6f-cd885f46f9f0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.548472 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12d8802b-c666-49da-ac6f-cd885f46f9f0-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.548499 4672 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12d8802b-c666-49da-ac6f-cd885f46f9f0-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.548519 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfkfn\" (UniqueName: \"kubernetes.io/projected/12d8802b-c666-49da-ac6f-cd885f46f9f0-kube-api-access-vfkfn\") on node \"crc\" DevicePath \"\""
Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.591820 4672 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="261e82ba-d901-48e9-9890-768595c3e9df" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.189:3000/\": read tcp 10.217.0.2:44614->10.217.0.189:3000: read: connection reset by peer"
Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.624657 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12d8802b-c666-49da-ac6f-cd885f46f9f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "12d8802b-c666-49da-ac6f-cd885f46f9f0" (UID:
"12d8802b-c666-49da-ac6f-cd885f46f9f0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.652413 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12d8802b-c666-49da-ac6f-cd885f46f9f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.689622 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12d8802b-c666-49da-ac6f-cd885f46f9f0-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "12d8802b-c666-49da-ac6f-cd885f46f9f0" (UID: "12d8802b-c666-49da-ac6f-cd885f46f9f0"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.691671 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12d8802b-c666-49da-ac6f-cd885f46f9f0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "12d8802b-c666-49da-ac6f-cd885f46f9f0" (UID: "12d8802b-c666-49da-ac6f-cd885f46f9f0"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.754794 4672 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/12d8802b-c666-49da-ac6f-cd885f46f9f0-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.754818 4672 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/12d8802b-c666-49da-ac6f-cd885f46f9f0-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.926753 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-844c787c5c-l2cm9"] Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.972615 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d024e6ad-924f-42a9-94e3-21cf7d00f62f","Type":"ContainerStarted","Data":"ed0da40220b25486230e3a4afc069e6eb27431dc94ecd73e657dd1f052a4ae3f"} Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.977389 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6cc66b5c9b-dpjsg" Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.977395 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6cc66b5c9b-dpjsg" event={"ID":"12d8802b-c666-49da-ac6f-cd885f46f9f0","Type":"ContainerDied","Data":"c20f78e17a84156701a25d44ee9b660d3f52157fce1cdd02d91e84d872e46a78"} Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.977459 4672 scope.go:117] "RemoveContainer" containerID="5f2b62ca220bf20e868e16ffc57ed2e7cb6589839f91d9a7b7d458cb26f03373" Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.991544 4672 generic.go:334] "Generic (PLEG): container finished" podID="261e82ba-d901-48e9-9890-768595c3e9df" containerID="372e811646bfd7b1d3aab1076f1c0032e9fb71de3e71b2cbef15c4059bf12a48" exitCode=0 Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.991584 4672 generic.go:334] "Generic (PLEG): container finished" podID="261e82ba-d901-48e9-9890-768595c3e9df" containerID="76bcbfd3216cfc1f37682ca551e3fa5d5fe00389def7eda63afec5c06fe5de87" exitCode=2 Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.991605 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"261e82ba-d901-48e9-9890-768595c3e9df","Type":"ContainerDied","Data":"372e811646bfd7b1d3aab1076f1c0032e9fb71de3e71b2cbef15c4059bf12a48"} Feb 17 16:24:36 crc kubenswrapper[4672]: I0217 16:24:36.991646 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"261e82ba-d901-48e9-9890-768595c3e9df","Type":"ContainerDied","Data":"76bcbfd3216cfc1f37682ca551e3fa5d5fe00389def7eda63afec5c06fe5de87"} Feb 17 16:24:37 crc kubenswrapper[4672]: I0217 16:24:37.012024 4672 scope.go:117] "RemoveContainer" containerID="12b2de443d40f1eb4dd2fd2516396777577d97a923427a4beab259f2e173d6bf" Feb 17 16:24:37 crc kubenswrapper[4672]: I0217 16:24:37.074599 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/placement-6cc66b5c9b-dpjsg"] Feb 17 16:24:37 crc kubenswrapper[4672]: I0217 16:24:37.092189 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6cc66b5c9b-dpjsg"] Feb 17 16:24:37 crc kubenswrapper[4672]: I0217 16:24:37.958628 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12d8802b-c666-49da-ac6f-cd885f46f9f0" path="/var/lib/kubelet/pods/12d8802b-c666-49da-ac6f-cd885f46f9f0/volumes" Feb 17 16:24:38 crc kubenswrapper[4672]: I0217 16:24:38.007180 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-844c787c5c-l2cm9" event={"ID":"9f2974b4-c465-4d64-b3b3-e79e4d1b74a2","Type":"ContainerStarted","Data":"1168e8fce56a25b5cb88b3b05c27743eb4d9baf61f7141e2315c7a033f8241f7"} Feb 17 16:24:38 crc kubenswrapper[4672]: I0217 16:24:38.007234 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-844c787c5c-l2cm9" event={"ID":"9f2974b4-c465-4d64-b3b3-e79e4d1b74a2","Type":"ContainerStarted","Data":"13c3772a350fb492fa1c894949c6ea626acf0506dcaa754d9fa6ea35c879b851"} Feb 17 16:24:38 crc kubenswrapper[4672]: I0217 16:24:38.007248 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-844c787c5c-l2cm9" event={"ID":"9f2974b4-c465-4d64-b3b3-e79e4d1b74a2","Type":"ContainerStarted","Data":"3d1a0668220ab572bfb7315ec58a0c58ade47231d9a7cb97499033d70e5d89d9"} Feb 17 16:24:38 crc kubenswrapper[4672]: I0217 16:24:38.007347 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-844c787c5c-l2cm9" Feb 17 16:24:38 crc kubenswrapper[4672]: I0217 16:24:38.007365 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-844c787c5c-l2cm9" Feb 17 16:24:38 crc kubenswrapper[4672]: I0217 16:24:38.025155 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"d024e6ad-924f-42a9-94e3-21cf7d00f62f","Type":"ContainerStarted","Data":"5e86414accca86198c082ba028fab86f3f4434eeeed4f98a8b3255e7f79d64f9"} Feb 17 16:24:38 crc kubenswrapper[4672]: I0217 16:24:38.026249 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 17 16:24:38 crc kubenswrapper[4672]: I0217 16:24:38.033804 4672 generic.go:334] "Generic (PLEG): container finished" podID="261e82ba-d901-48e9-9890-768595c3e9df" containerID="2ae893b986748b9f358e2e0f5f2eda2e2e881935f9b3538bd9ccd113b9ae6f5d" exitCode=0 Feb 17 16:24:38 crc kubenswrapper[4672]: I0217 16:24:38.033839 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"261e82ba-d901-48e9-9890-768595c3e9df","Type":"ContainerDied","Data":"2ae893b986748b9f358e2e0f5f2eda2e2e881935f9b3538bd9ccd113b9ae6f5d"} Feb 17 16:24:38 crc kubenswrapper[4672]: I0217 16:24:38.049472 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-844c787c5c-l2cm9" podStartSLOduration=3.04944911 podStartE2EDuration="3.04944911s" podCreationTimestamp="2026-02-17 16:24:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:24:38.041726237 +0000 UTC m=+1286.795814989" watchObservedRunningTime="2026-02-17 16:24:38.04944911 +0000 UTC m=+1286.803537842" Feb 17 16:24:38 crc kubenswrapper[4672]: I0217 16:24:38.064941 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.064925828 podStartE2EDuration="4.064925828s" podCreationTimestamp="2026-02-17 16:24:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:24:38.064870797 +0000 UTC m=+1286.818959549" watchObservedRunningTime="2026-02-17 16:24:38.064925828 +0000 UTC m=+1286.819014560" Feb 17 
16:24:38 crc kubenswrapper[4672]: I0217 16:24:38.357694 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:24:38 crc kubenswrapper[4672]: I0217 16:24:38.362975 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b6cb035c-108b-40c2-82fa-bc9db8599b1a" containerName="glance-log" containerID="cri-o://e22babdf03c40fc0728d4b21ad9b7217ccecd7f2b1f505089b250125de3732cc" gracePeriod=30 Feb 17 16:24:38 crc kubenswrapper[4672]: I0217 16:24:38.363027 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b6cb035c-108b-40c2-82fa-bc9db8599b1a" containerName="glance-httpd" containerID="cri-o://6b9470433ecb636a8d0409bc7e66e74931bba2d3c00b15e7ca5d37a5ed3f8849" gracePeriod=30 Feb 17 16:24:39 crc kubenswrapper[4672]: I0217 16:24:39.049650 4672 generic.go:334] "Generic (PLEG): container finished" podID="b6cb035c-108b-40c2-82fa-bc9db8599b1a" containerID="e22babdf03c40fc0728d4b21ad9b7217ccecd7f2b1f505089b250125de3732cc" exitCode=143 Feb 17 16:24:39 crc kubenswrapper[4672]: I0217 16:24:39.050613 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b6cb035c-108b-40c2-82fa-bc9db8599b1a","Type":"ContainerDied","Data":"e22babdf03c40fc0728d4b21ad9b7217ccecd7f2b1f505089b250125de3732cc"} Feb 17 16:24:40 crc kubenswrapper[4672]: I0217 16:24:40.197640 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:24:40 crc kubenswrapper[4672]: I0217 16:24:40.198092 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4640eeb0-bf75-4e1b-a291-964288b3ecb1" containerName="glance-log" containerID="cri-o://5d2d94cfbfd60ba5d23a98ac38e4f21d1c6147c03fd72322065afe51d172d515" gracePeriod=30 Feb 17 16:24:40 crc 
kubenswrapper[4672]: I0217 16:24:40.198188 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4640eeb0-bf75-4e1b-a291-964288b3ecb1" containerName="glance-httpd" containerID="cri-o://dfdd97b715abd1e15945c0e67790e24f6002e4e322d35ac6e781335fb439eb6e" gracePeriod=30 Feb 17 16:24:41 crc kubenswrapper[4672]: I0217 16:24:41.082747 4672 generic.go:334] "Generic (PLEG): container finished" podID="4640eeb0-bf75-4e1b-a291-964288b3ecb1" containerID="5d2d94cfbfd60ba5d23a98ac38e4f21d1c6147c03fd72322065afe51d172d515" exitCode=143 Feb 17 16:24:41 crc kubenswrapper[4672]: I0217 16:24:41.083040 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4640eeb0-bf75-4e1b-a291-964288b3ecb1","Type":"ContainerDied","Data":"5d2d94cfbfd60ba5d23a98ac38e4f21d1c6147c03fd72322065afe51d172d515"} Feb 17 16:24:41 crc kubenswrapper[4672]: I0217 16:24:41.094633 4672 generic.go:334] "Generic (PLEG): container finished" podID="261e82ba-d901-48e9-9890-768595c3e9df" containerID="291bfa87a02917eb4de97c1b46645bab031f200ccf9a9bb7eb9d3ba35f0f5f06" exitCode=0 Feb 17 16:24:41 crc kubenswrapper[4672]: I0217 16:24:41.094666 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"261e82ba-d901-48e9-9890-768595c3e9df","Type":"ContainerDied","Data":"291bfa87a02917eb4de97c1b46645bab031f200ccf9a9bb7eb9d3ba35f0f5f06"} Feb 17 16:24:41 crc kubenswrapper[4672]: I0217 16:24:41.570415 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-78b4dc5857-f54l5" Feb 17 16:24:41 crc kubenswrapper[4672]: I0217 16:24:41.668088 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-b745f78d8-8tmpn"] Feb 17 16:24:41 crc kubenswrapper[4672]: I0217 16:24:41.668320 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-b745f78d8-8tmpn" 
podUID="c7309196-390c-4dc4-b9a0-a88f48e270db" containerName="neutron-api" containerID="cri-o://5b2cbea1afc020385cf8f7fca1f19050ede9a7becdf554b76f685bc785707433" gracePeriod=30 Feb 17 16:24:41 crc kubenswrapper[4672]: I0217 16:24:41.668740 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-b745f78d8-8tmpn" podUID="c7309196-390c-4dc4-b9a0-a88f48e270db" containerName="neutron-httpd" containerID="cri-o://55a2f0e4b37b6e17d04f9873a319170827fcc78003e5c7c76ad8375346ca3b3e" gracePeriod=30 Feb 17 16:24:42 crc kubenswrapper[4672]: I0217 16:24:42.105952 4672 generic.go:334] "Generic (PLEG): container finished" podID="b6cb035c-108b-40c2-82fa-bc9db8599b1a" containerID="6b9470433ecb636a8d0409bc7e66e74931bba2d3c00b15e7ca5d37a5ed3f8849" exitCode=0 Feb 17 16:24:42 crc kubenswrapper[4672]: I0217 16:24:42.106037 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b6cb035c-108b-40c2-82fa-bc9db8599b1a","Type":"ContainerDied","Data":"6b9470433ecb636a8d0409bc7e66e74931bba2d3c00b15e7ca5d37a5ed3f8849"} Feb 17 16:24:43 crc kubenswrapper[4672]: I0217 16:24:43.118219 4672 generic.go:334] "Generic (PLEG): container finished" podID="c7309196-390c-4dc4-b9a0-a88f48e270db" containerID="55a2f0e4b37b6e17d04f9873a319170827fcc78003e5c7c76ad8375346ca3b3e" exitCode=0 Feb 17 16:24:43 crc kubenswrapper[4672]: I0217 16:24:43.118268 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b745f78d8-8tmpn" event={"ID":"c7309196-390c-4dc4-b9a0-a88f48e270db","Type":"ContainerDied","Data":"55a2f0e4b37b6e17d04f9873a319170827fcc78003e5c7c76ad8375346ca3b3e"} Feb 17 16:24:43 crc kubenswrapper[4672]: E0217 16:24:43.695547 4672 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6ad1f8c_ef18_4bd8_ac43_b8f1151277f6.slice\": RecentStats: unable to find data in memory 
cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6ad1f8c_ef18_4bd8_ac43_b8f1151277f6.slice/crio-4fede41a6b3d442704cc0b64a71cfcde9ecee5251694f4b5e0c64343367e5adb\": RecentStats: unable to find data in memory cache]" Feb 17 16:24:43 crc kubenswrapper[4672]: I0217 16:24:43.913319 4672 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="261e82ba-d901-48e9-9890-768595c3e9df" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.189:3000/\": dial tcp 10.217.0.189:3000: connect: connection refused" Feb 17 16:24:44 crc kubenswrapper[4672]: I0217 16:24:44.131060 4672 generic.go:334] "Generic (PLEG): container finished" podID="4640eeb0-bf75-4e1b-a291-964288b3ecb1" containerID="dfdd97b715abd1e15945c0e67790e24f6002e4e322d35ac6e781335fb439eb6e" exitCode=0 Feb 17 16:24:44 crc kubenswrapper[4672]: I0217 16:24:44.131101 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4640eeb0-bf75-4e1b-a291-964288b3ecb1","Type":"ContainerDied","Data":"dfdd97b715abd1e15945c0e67790e24f6002e4e322d35ac6e781335fb439eb6e"} Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.125371 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-844c787c5c-l2cm9" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.126003 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-844c787c5c-l2cm9" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.167815 4672 generic.go:334] "Generic (PLEG): container finished" podID="c7309196-390c-4dc4-b9a0-a88f48e270db" containerID="5b2cbea1afc020385cf8f7fca1f19050ede9a7becdf554b76f685bc785707433" exitCode=0 Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.168430 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b745f78d8-8tmpn" 
event={"ID":"c7309196-390c-4dc4-b9a0-a88f48e270db","Type":"ContainerDied","Data":"5b2cbea1afc020385cf8f7fca1f19050ede9a7becdf554b76f685bc785707433"} Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.549761 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.674420 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b745f78d8-8tmpn" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.720791 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.737080 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6cb035c-108b-40c2-82fa-bc9db8599b1a-combined-ca-bundle\") pod \"b6cb035c-108b-40c2-82fa-bc9db8599b1a\" (UID: \"b6cb035c-108b-40c2-82fa-bc9db8599b1a\") " Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.737156 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6cb035c-108b-40c2-82fa-bc9db8599b1a-logs\") pod \"b6cb035c-108b-40c2-82fa-bc9db8599b1a\" (UID: \"b6cb035c-108b-40c2-82fa-bc9db8599b1a\") " Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.737210 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnzsf\" (UniqueName: \"kubernetes.io/projected/b6cb035c-108b-40c2-82fa-bc9db8599b1a-kube-api-access-rnzsf\") pod \"b6cb035c-108b-40c2-82fa-bc9db8599b1a\" (UID: \"b6cb035c-108b-40c2-82fa-bc9db8599b1a\") " Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.737319 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-303d4d28-face-45fe-b658-7e12a6040182\") pod \"b6cb035c-108b-40c2-82fa-bc9db8599b1a\" (UID: \"b6cb035c-108b-40c2-82fa-bc9db8599b1a\") " Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.737339 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b6cb035c-108b-40c2-82fa-bc9db8599b1a-httpd-run\") pod \"b6cb035c-108b-40c2-82fa-bc9db8599b1a\" (UID: \"b6cb035c-108b-40c2-82fa-bc9db8599b1a\") " Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.737365 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6cb035c-108b-40c2-82fa-bc9db8599b1a-config-data\") pod \"b6cb035c-108b-40c2-82fa-bc9db8599b1a\" (UID: \"b6cb035c-108b-40c2-82fa-bc9db8599b1a\") " Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.737421 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6cb035c-108b-40c2-82fa-bc9db8599b1a-scripts\") pod \"b6cb035c-108b-40c2-82fa-bc9db8599b1a\" (UID: \"b6cb035c-108b-40c2-82fa-bc9db8599b1a\") " Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.737581 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6cb035c-108b-40c2-82fa-bc9db8599b1a-public-tls-certs\") pod \"b6cb035c-108b-40c2-82fa-bc9db8599b1a\" (UID: \"b6cb035c-108b-40c2-82fa-bc9db8599b1a\") " Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.737800 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6cb035c-108b-40c2-82fa-bc9db8599b1a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b6cb035c-108b-40c2-82fa-bc9db8599b1a" (UID: "b6cb035c-108b-40c2-82fa-bc9db8599b1a"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.738837 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6cb035c-108b-40c2-82fa-bc9db8599b1a-logs" (OuterVolumeSpecName: "logs") pod "b6cb035c-108b-40c2-82fa-bc9db8599b1a" (UID: "b6cb035c-108b-40c2-82fa-bc9db8599b1a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.740037 4672 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6cb035c-108b-40c2-82fa-bc9db8599b1a-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.740230 4672 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b6cb035c-108b-40c2-82fa-bc9db8599b1a-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.741409 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cb035c-108b-40c2-82fa-bc9db8599b1a-kube-api-access-rnzsf" (OuterVolumeSpecName: "kube-api-access-rnzsf") pod "b6cb035c-108b-40c2-82fa-bc9db8599b1a" (UID: "b6cb035c-108b-40c2-82fa-bc9db8599b1a"). InnerVolumeSpecName "kube-api-access-rnzsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.742803 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cb035c-108b-40c2-82fa-bc9db8599b1a-scripts" (OuterVolumeSpecName: "scripts") pod "b6cb035c-108b-40c2-82fa-bc9db8599b1a" (UID: "b6cb035c-108b-40c2-82fa-bc9db8599b1a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.761090 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-303d4d28-face-45fe-b658-7e12a6040182" (OuterVolumeSpecName: "glance") pod "b6cb035c-108b-40c2-82fa-bc9db8599b1a" (UID: "b6cb035c-108b-40c2-82fa-bc9db8599b1a"). InnerVolumeSpecName "pvc-303d4d28-face-45fe-b658-7e12a6040182". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.777044 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cb035c-108b-40c2-82fa-bc9db8599b1a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6cb035c-108b-40c2-82fa-bc9db8599b1a" (UID: "b6cb035c-108b-40c2-82fa-bc9db8599b1a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.799067 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cb035c-108b-40c2-82fa-bc9db8599b1a-config-data" (OuterVolumeSpecName: "config-data") pod "b6cb035c-108b-40c2-82fa-bc9db8599b1a" (UID: "b6cb035c-108b-40c2-82fa-bc9db8599b1a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.823675 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cb035c-108b-40c2-82fa-bc9db8599b1a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b6cb035c-108b-40c2-82fa-bc9db8599b1a" (UID: "b6cb035c-108b-40c2-82fa-bc9db8599b1a"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.842130 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c7309196-390c-4dc4-b9a0-a88f48e270db-httpd-config\") pod \"c7309196-390c-4dc4-b9a0-a88f48e270db\" (UID: \"c7309196-390c-4dc4-b9a0-a88f48e270db\") " Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.842194 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/261e82ba-d901-48e9-9890-768595c3e9df-run-httpd\") pod \"261e82ba-d901-48e9-9890-768595c3e9df\" (UID: \"261e82ba-d901-48e9-9890-768595c3e9df\") " Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.842229 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c7309196-390c-4dc4-b9a0-a88f48e270db-config\") pod \"c7309196-390c-4dc4-b9a0-a88f48e270db\" (UID: \"c7309196-390c-4dc4-b9a0-a88f48e270db\") " Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.842284 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/261e82ba-d901-48e9-9890-768595c3e9df-log-httpd\") pod \"261e82ba-d901-48e9-9890-768595c3e9df\" (UID: \"261e82ba-d901-48e9-9890-768595c3e9df\") " Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.842328 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261e82ba-d901-48e9-9890-768595c3e9df-combined-ca-bundle\") pod \"261e82ba-d901-48e9-9890-768595c3e9df\" (UID: \"261e82ba-d901-48e9-9890-768595c3e9df\") " Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.842347 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/261e82ba-d901-48e9-9890-768595c3e9df-sg-core-conf-yaml\") pod \"261e82ba-d901-48e9-9890-768595c3e9df\" (UID: \"261e82ba-d901-48e9-9890-768595c3e9df\") " Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.842397 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/261e82ba-d901-48e9-9890-768595c3e9df-scripts\") pod \"261e82ba-d901-48e9-9890-768595c3e9df\" (UID: \"261e82ba-d901-48e9-9890-768595c3e9df\") " Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.842425 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgdhk\" (UniqueName: \"kubernetes.io/projected/c7309196-390c-4dc4-b9a0-a88f48e270db-kube-api-access-hgdhk\") pod \"c7309196-390c-4dc4-b9a0-a88f48e270db\" (UID: \"c7309196-390c-4dc4-b9a0-a88f48e270db\") " Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.842463 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/261e82ba-d901-48e9-9890-768595c3e9df-config-data\") pod \"261e82ba-d901-48e9-9890-768595c3e9df\" (UID: \"261e82ba-d901-48e9-9890-768595c3e9df\") " Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.842495 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7309196-390c-4dc4-b9a0-a88f48e270db-combined-ca-bundle\") pod \"c7309196-390c-4dc4-b9a0-a88f48e270db\" (UID: \"c7309196-390c-4dc4-b9a0-a88f48e270db\") " Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.842588 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7309196-390c-4dc4-b9a0-a88f48e270db-ovndb-tls-certs\") pod \"c7309196-390c-4dc4-b9a0-a88f48e270db\" (UID: \"c7309196-390c-4dc4-b9a0-a88f48e270db\") " Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 
16:24:46.842608 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/261e82ba-d901-48e9-9890-768595c3e9df-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "261e82ba-d901-48e9-9890-768595c3e9df" (UID: "261e82ba-d901-48e9-9890-768595c3e9df"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.842633 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df6q8\" (UniqueName: \"kubernetes.io/projected/261e82ba-d901-48e9-9890-768595c3e9df-kube-api-access-df6q8\") pod \"261e82ba-d901-48e9-9890-768595c3e9df\" (UID: \"261e82ba-d901-48e9-9890-768595c3e9df\") " Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.842948 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/261e82ba-d901-48e9-9890-768595c3e9df-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "261e82ba-d901-48e9-9890-768595c3e9df" (UID: "261e82ba-d901-48e9-9890-768595c3e9df"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.843792 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6cb035c-108b-40c2-82fa-bc9db8599b1a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.843873 4672 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/261e82ba-d901-48e9-9890-768595c3e9df-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.843928 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnzsf\" (UniqueName: \"kubernetes.io/projected/b6cb035c-108b-40c2-82fa-bc9db8599b1a-kube-api-access-rnzsf\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.844003 4672 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-303d4d28-face-45fe-b658-7e12a6040182\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-303d4d28-face-45fe-b658-7e12a6040182\") on node \"crc\" " Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.844070 4672 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/261e82ba-d901-48e9-9890-768595c3e9df-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.844132 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6cb035c-108b-40c2-82fa-bc9db8599b1a-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.844413 4672 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6cb035c-108b-40c2-82fa-bc9db8599b1a-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:46 crc kubenswrapper[4672]: 
I0217 16:24:46.844468 4672 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6cb035c-108b-40c2-82fa-bc9db8599b1a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.846643 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7309196-390c-4dc4-b9a0-a88f48e270db-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "c7309196-390c-4dc4-b9a0-a88f48e270db" (UID: "c7309196-390c-4dc4-b9a0-a88f48e270db"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.848859 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7309196-390c-4dc4-b9a0-a88f48e270db-kube-api-access-hgdhk" (OuterVolumeSpecName: "kube-api-access-hgdhk") pod "c7309196-390c-4dc4-b9a0-a88f48e270db" (UID: "c7309196-390c-4dc4-b9a0-a88f48e270db"). InnerVolumeSpecName "kube-api-access-hgdhk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.851249 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/261e82ba-d901-48e9-9890-768595c3e9df-scripts" (OuterVolumeSpecName: "scripts") pod "261e82ba-d901-48e9-9890-768595c3e9df" (UID: "261e82ba-d901-48e9-9890-768595c3e9df"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.851831 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/261e82ba-d901-48e9-9890-768595c3e9df-kube-api-access-df6q8" (OuterVolumeSpecName: "kube-api-access-df6q8") pod "261e82ba-d901-48e9-9890-768595c3e9df" (UID: "261e82ba-d901-48e9-9890-768595c3e9df"). InnerVolumeSpecName "kube-api-access-df6q8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.882634 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/261e82ba-d901-48e9-9890-768595c3e9df-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "261e82ba-d901-48e9-9890-768595c3e9df" (UID: "261e82ba-d901-48e9-9890-768595c3e9df"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.887400 4672 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.888375 4672 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-303d4d28-face-45fe-b658-7e12a6040182" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-303d4d28-face-45fe-b658-7e12a6040182") on node "crc" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.924126 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7309196-390c-4dc4-b9a0-a88f48e270db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7309196-390c-4dc4-b9a0-a88f48e270db" (UID: "c7309196-390c-4dc4-b9a0-a88f48e270db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.931763 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7309196-390c-4dc4-b9a0-a88f48e270db-config" (OuterVolumeSpecName: "config") pod "c7309196-390c-4dc4-b9a0-a88f48e270db" (UID: "c7309196-390c-4dc4-b9a0-a88f48e270db"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.942232 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7309196-390c-4dc4-b9a0-a88f48e270db-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "c7309196-390c-4dc4-b9a0-a88f48e270db" (UID: "c7309196-390c-4dc4-b9a0-a88f48e270db"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.944825 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/261e82ba-d901-48e9-9890-768595c3e9df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "261e82ba-d901-48e9-9890-768595c3e9df" (UID: "261e82ba-d901-48e9-9890-768595c3e9df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.946594 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261e82ba-d901-48e9-9890-768595c3e9df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.946618 4672 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/261e82ba-d901-48e9-9890-768595c3e9df-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.946627 4672 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/261e82ba-d901-48e9-9890-768595c3e9df-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.946637 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgdhk\" (UniqueName: \"kubernetes.io/projected/c7309196-390c-4dc4-b9a0-a88f48e270db-kube-api-access-hgdhk\") on node \"crc\" 
DevicePath \"\"" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.946649 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7309196-390c-4dc4-b9a0-a88f48e270db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.946657 4672 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7309196-390c-4dc4-b9a0-a88f48e270db-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.946665 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-df6q8\" (UniqueName: \"kubernetes.io/projected/261e82ba-d901-48e9-9890-768595c3e9df-kube-api-access-df6q8\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.946673 4672 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c7309196-390c-4dc4-b9a0-a88f48e270db-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.946690 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c7309196-390c-4dc4-b9a0-a88f48e270db-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:46 crc kubenswrapper[4672]: I0217 16:24:46.946700 4672 reconciler_common.go:293] "Volume detached for volume \"pvc-303d4d28-face-45fe-b658-7e12a6040182\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-303d4d28-face-45fe-b658-7e12a6040182\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.024200 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/261e82ba-d901-48e9-9890-768595c3e9df-config-data" (OuterVolumeSpecName: "config-data") pod "261e82ba-d901-48e9-9890-768595c3e9df" (UID: "261e82ba-d901-48e9-9890-768595c3e9df"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.047484 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/261e82ba-d901-48e9-9890-768595c3e9df-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.189974 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b6cb035c-108b-40c2-82fa-bc9db8599b1a","Type":"ContainerDied","Data":"bc0dd05b1c9dbb98013f4c0dba7ee9d268926ef1b38ce101cae5c17ac3f09a91"} Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.190030 4672 scope.go:117] "RemoveContainer" containerID="6b9470433ecb636a8d0409bc7e66e74931bba2d3c00b15e7ca5d37a5ed3f8849" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.190150 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.196766 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"b02e419f-9426-4e56-9b4b-17ec702acb0a","Type":"ContainerStarted","Data":"af3c1c0d2b77c4e19c0dd87770d116a71a254faea9b82788817cbad6ff532271"} Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.201379 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"261e82ba-d901-48e9-9890-768595c3e9df","Type":"ContainerDied","Data":"5e405dcc8193a94eb51442f9ee3b1638d6fb4e55cafec7184caf4cc99e7e71ee"} Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.201479 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.205374 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b745f78d8-8tmpn" event={"ID":"c7309196-390c-4dc4-b9a0-a88f48e270db","Type":"ContainerDied","Data":"520785baedee2ef2049423c6c2023ede3985b18a9b0fd31c7881ac220ab8ab65"} Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.205481 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b745f78d8-8tmpn" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.211210 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.001754957 podStartE2EDuration="16.211193501s" podCreationTimestamp="2026-02-17 16:24:31 +0000 UTC" firstStartedPulling="2026-02-17 16:24:31.936527864 +0000 UTC m=+1280.690616596" lastFinishedPulling="2026-02-17 16:24:46.145966418 +0000 UTC m=+1294.900055140" observedRunningTime="2026-02-17 16:24:47.209051884 +0000 UTC m=+1295.963140616" watchObservedRunningTime="2026-02-17 16:24:47.211193501 +0000 UTC m=+1295.965282233" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.274965 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.307160 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.319772 4672 scope.go:117] "RemoveContainer" containerID="e22babdf03c40fc0728d4b21ad9b7217ccecd7f2b1f505089b250125de3732cc" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.353833 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.368573 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.376554 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.387343 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:24:47 crc kubenswrapper[4672]: E0217 16:24:47.387756 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12d8802b-c666-49da-ac6f-cd885f46f9f0" containerName="placement-api" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.387771 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="12d8802b-c666-49da-ac6f-cd885f46f9f0" containerName="placement-api" Feb 17 16:24:47 crc kubenswrapper[4672]: E0217 16:24:47.387783 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12d8802b-c666-49da-ac6f-cd885f46f9f0" containerName="placement-log" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.387788 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="12d8802b-c666-49da-ac6f-cd885f46f9f0" containerName="placement-log" Feb 17 16:24:47 crc kubenswrapper[4672]: E0217 16:24:47.387797 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6cb035c-108b-40c2-82fa-bc9db8599b1a" 
containerName="glance-log" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.387804 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6cb035c-108b-40c2-82fa-bc9db8599b1a" containerName="glance-log" Feb 17 16:24:47 crc kubenswrapper[4672]: E0217 16:24:47.387820 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="261e82ba-d901-48e9-9890-768595c3e9df" containerName="ceilometer-central-agent" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.387827 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="261e82ba-d901-48e9-9890-768595c3e9df" containerName="ceilometer-central-agent" Feb 17 16:24:47 crc kubenswrapper[4672]: E0217 16:24:47.387837 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="261e82ba-d901-48e9-9890-768595c3e9df" containerName="sg-core" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.387842 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="261e82ba-d901-48e9-9890-768595c3e9df" containerName="sg-core" Feb 17 16:24:47 crc kubenswrapper[4672]: E0217 16:24:47.387851 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4640eeb0-bf75-4e1b-a291-964288b3ecb1" containerName="glance-log" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.387856 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="4640eeb0-bf75-4e1b-a291-964288b3ecb1" containerName="glance-log" Feb 17 16:24:47 crc kubenswrapper[4672]: E0217 16:24:47.387868 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6cb035c-108b-40c2-82fa-bc9db8599b1a" containerName="glance-httpd" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.387873 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6cb035c-108b-40c2-82fa-bc9db8599b1a" containerName="glance-httpd" Feb 17 16:24:47 crc kubenswrapper[4672]: E0217 16:24:47.387891 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4640eeb0-bf75-4e1b-a291-964288b3ecb1" containerName="glance-httpd" Feb 17 
16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.387897 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="4640eeb0-bf75-4e1b-a291-964288b3ecb1" containerName="glance-httpd" Feb 17 16:24:47 crc kubenswrapper[4672]: E0217 16:24:47.387907 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="261e82ba-d901-48e9-9890-768595c3e9df" containerName="ceilometer-notification-agent" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.387913 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="261e82ba-d901-48e9-9890-768595c3e9df" containerName="ceilometer-notification-agent" Feb 17 16:24:47 crc kubenswrapper[4672]: E0217 16:24:47.387922 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7309196-390c-4dc4-b9a0-a88f48e270db" containerName="neutron-api" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.387928 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7309196-390c-4dc4-b9a0-a88f48e270db" containerName="neutron-api" Feb 17 16:24:47 crc kubenswrapper[4672]: E0217 16:24:47.387940 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="261e82ba-d901-48e9-9890-768595c3e9df" containerName="proxy-httpd" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.387947 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="261e82ba-d901-48e9-9890-768595c3e9df" containerName="proxy-httpd" Feb 17 16:24:47 crc kubenswrapper[4672]: E0217 16:24:47.387958 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7309196-390c-4dc4-b9a0-a88f48e270db" containerName="neutron-httpd" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.387963 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7309196-390c-4dc4-b9a0-a88f48e270db" containerName="neutron-httpd" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.388125 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="12d8802b-c666-49da-ac6f-cd885f46f9f0" containerName="placement-api" Feb 17 16:24:47 crc 
kubenswrapper[4672]: I0217 16:24:47.388141 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="261e82ba-d901-48e9-9890-768595c3e9df" containerName="sg-core" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.388149 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6cb035c-108b-40c2-82fa-bc9db8599b1a" containerName="glance-log" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.388159 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7309196-390c-4dc4-b9a0-a88f48e270db" containerName="neutron-httpd" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.388170 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="4640eeb0-bf75-4e1b-a291-964288b3ecb1" containerName="glance-log" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.388181 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="261e82ba-d901-48e9-9890-768595c3e9df" containerName="proxy-httpd" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.388189 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="4640eeb0-bf75-4e1b-a291-964288b3ecb1" containerName="glance-httpd" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.388198 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7309196-390c-4dc4-b9a0-a88f48e270db" containerName="neutron-api" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.388209 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="261e82ba-d901-48e9-9890-768595c3e9df" containerName="ceilometer-central-agent" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.388218 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="12d8802b-c666-49da-ac6f-cd885f46f9f0" containerName="placement-log" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.388224 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="261e82ba-d901-48e9-9890-768595c3e9df" containerName="ceilometer-notification-agent" 
Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.388234 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6cb035c-108b-40c2-82fa-bc9db8599b1a" containerName="glance-httpd" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.389892 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.392096 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.397281 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.401445 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.408064 4672 scope.go:117] "RemoveContainer" containerID="372e811646bfd7b1d3aab1076f1c0032e9fb71de3e71b2cbef15c4059bf12a48" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.419387 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.421321 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.440033 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.440247 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.456293 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4640eeb0-bf75-4e1b-a291-964288b3ecb1-combined-ca-bundle\") pod \"4640eeb0-bf75-4e1b-a291-964288b3ecb1\" (UID: \"4640eeb0-bf75-4e1b-a291-964288b3ecb1\") " Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.456346 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4640eeb0-bf75-4e1b-a291-964288b3ecb1-internal-tls-certs\") pod \"4640eeb0-bf75-4e1b-a291-964288b3ecb1\" (UID: \"4640eeb0-bf75-4e1b-a291-964288b3ecb1\") " Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.456556 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4640eeb0-bf75-4e1b-a291-964288b3ecb1-httpd-run\") pod \"4640eeb0-bf75-4e1b-a291-964288b3ecb1\" (UID: \"4640eeb0-bf75-4e1b-a291-964288b3ecb1\") " Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.456597 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4640eeb0-bf75-4e1b-a291-964288b3ecb1-config-data\") pod \"4640eeb0-bf75-4e1b-a291-964288b3ecb1\" (UID: \"4640eeb0-bf75-4e1b-a291-964288b3ecb1\") " Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.456627 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/4640eeb0-bf75-4e1b-a291-964288b3ecb1-scripts\") pod \"4640eeb0-bf75-4e1b-a291-964288b3ecb1\" (UID: \"4640eeb0-bf75-4e1b-a291-964288b3ecb1\") " Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.456663 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf2xh\" (UniqueName: \"kubernetes.io/projected/4640eeb0-bf75-4e1b-a291-964288b3ecb1-kube-api-access-xf2xh\") pod \"4640eeb0-bf75-4e1b-a291-964288b3ecb1\" (UID: \"4640eeb0-bf75-4e1b-a291-964288b3ecb1\") " Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.456683 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4640eeb0-bf75-4e1b-a291-964288b3ecb1-logs\") pod \"4640eeb0-bf75-4e1b-a291-964288b3ecb1\" (UID: \"4640eeb0-bf75-4e1b-a291-964288b3ecb1\") " Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.456786 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7\") pod \"4640eeb0-bf75-4e1b-a291-964288b3ecb1\" (UID: \"4640eeb0-bf75-4e1b-a291-964288b3ecb1\") " Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.461681 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4640eeb0-bf75-4e1b-a291-964288b3ecb1-logs" (OuterVolumeSpecName: "logs") pod "4640eeb0-bf75-4e1b-a291-964288b3ecb1" (UID: "4640eeb0-bf75-4e1b-a291-964288b3ecb1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.466587 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.466936 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4640eeb0-bf75-4e1b-a291-964288b3ecb1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4640eeb0-bf75-4e1b-a291-964288b3ecb1" (UID: "4640eeb0-bf75-4e1b-a291-964288b3ecb1"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.504718 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4640eeb0-bf75-4e1b-a291-964288b3ecb1-kube-api-access-xf2xh" (OuterVolumeSpecName: "kube-api-access-xf2xh") pod "4640eeb0-bf75-4e1b-a291-964288b3ecb1" (UID: "4640eeb0-bf75-4e1b-a291-964288b3ecb1"). InnerVolumeSpecName "kube-api-access-xf2xh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.505366 4672 scope.go:117] "RemoveContainer" containerID="76bcbfd3216cfc1f37682ca551e3fa5d5fe00389def7eda63afec5c06fe5de87" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.509791 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4640eeb0-bf75-4e1b-a291-964288b3ecb1-scripts" (OuterVolumeSpecName: "scripts") pod "4640eeb0-bf75-4e1b-a291-964288b3ecb1" (UID: "4640eeb0-bf75-4e1b-a291-964288b3ecb1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.547739 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4640eeb0-bf75-4e1b-a291-964288b3ecb1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4640eeb0-bf75-4e1b-a291-964288b3ecb1" (UID: "4640eeb0-bf75-4e1b-a291-964288b3ecb1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.558767 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9b45\" (UniqueName: \"kubernetes.io/projected/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-kube-api-access-f9b45\") pod \"ceilometer-0\" (UID: \"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b\") " pod="openstack/ceilometer-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.558821 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d125baba-09b1-4d4e-9d09-d040ee9323d1-scripts\") pod \"glance-default-external-api-0\" (UID: \"d125baba-09b1-4d4e-9d09-d040ee9323d1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.558847 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rk9n\" (UniqueName: \"kubernetes.io/projected/d125baba-09b1-4d4e-9d09-d040ee9323d1-kube-api-access-2rk9n\") pod \"glance-default-external-api-0\" (UID: \"d125baba-09b1-4d4e-9d09-d040ee9323d1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.558875 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d125baba-09b1-4d4e-9d09-d040ee9323d1-config-data\") pod 
\"glance-default-external-api-0\" (UID: \"d125baba-09b1-4d4e-9d09-d040ee9323d1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.558896 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-303d4d28-face-45fe-b658-7e12a6040182\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-303d4d28-face-45fe-b658-7e12a6040182\") pod \"glance-default-external-api-0\" (UID: \"d125baba-09b1-4d4e-9d09-d040ee9323d1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.558920 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-scripts\") pod \"ceilometer-0\" (UID: \"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b\") " pod="openstack/ceilometer-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.558942 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b\") " pod="openstack/ceilometer-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.558957 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d125baba-09b1-4d4e-9d09-d040ee9323d1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d125baba-09b1-4d4e-9d09-d040ee9323d1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.558980 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d125baba-09b1-4d4e-9d09-d040ee9323d1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d125baba-09b1-4d4e-9d09-d040ee9323d1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.558999 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d125baba-09b1-4d4e-9d09-d040ee9323d1-logs\") pod \"glance-default-external-api-0\" (UID: \"d125baba-09b1-4d4e-9d09-d040ee9323d1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.559050 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b\") " pod="openstack/ceilometer-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.559083 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-config-data\") pod \"ceilometer-0\" (UID: \"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b\") " pod="openstack/ceilometer-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.559130 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-run-httpd\") pod \"ceilometer-0\" (UID: \"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b\") " pod="openstack/ceilometer-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.559169 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-log-httpd\") pod 
\"ceilometer-0\" (UID: \"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b\") " pod="openstack/ceilometer-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.559188 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d125baba-09b1-4d4e-9d09-d040ee9323d1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d125baba-09b1-4d4e-9d09-d040ee9323d1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.559233 4672 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4640eeb0-bf75-4e1b-a291-964288b3ecb1-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.559243 4672 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4640eeb0-bf75-4e1b-a291-964288b3ecb1-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.559251 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xf2xh\" (UniqueName: \"kubernetes.io/projected/4640eeb0-bf75-4e1b-a291-964288b3ecb1-kube-api-access-xf2xh\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.559262 4672 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4640eeb0-bf75-4e1b-a291-964288b3ecb1-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.559272 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4640eeb0-bf75-4e1b-a291-964288b3ecb1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.559525 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-b745f78d8-8tmpn"] Feb 17 16:24:47 crc 
kubenswrapper[4672]: I0217 16:24:47.564865 4672 scope.go:117] "RemoveContainer" containerID="291bfa87a02917eb4de97c1b46645bab031f200ccf9a9bb7eb9d3ba35f0f5f06" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.609613 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-b745f78d8-8tmpn"] Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.623696 4672 scope.go:117] "RemoveContainer" containerID="2ae893b986748b9f358e2e0f5f2eda2e2e881935f9b3538bd9ccd113b9ae6f5d" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.632651 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4640eeb0-bf75-4e1b-a291-964288b3ecb1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4640eeb0-bf75-4e1b-a291-964288b3ecb1" (UID: "4640eeb0-bf75-4e1b-a291-964288b3ecb1"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.662628 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-run-httpd\") pod \"ceilometer-0\" (UID: \"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b\") " pod="openstack/ceilometer-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.662687 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-log-httpd\") pod \"ceilometer-0\" (UID: \"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b\") " pod="openstack/ceilometer-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.662711 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d125baba-09b1-4d4e-9d09-d040ee9323d1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d125baba-09b1-4d4e-9d09-d040ee9323d1\") " 
pod="openstack/glance-default-external-api-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.662736 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9b45\" (UniqueName: \"kubernetes.io/projected/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-kube-api-access-f9b45\") pod \"ceilometer-0\" (UID: \"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b\") " pod="openstack/ceilometer-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.662763 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d125baba-09b1-4d4e-9d09-d040ee9323d1-scripts\") pod \"glance-default-external-api-0\" (UID: \"d125baba-09b1-4d4e-9d09-d040ee9323d1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.662790 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rk9n\" (UniqueName: \"kubernetes.io/projected/d125baba-09b1-4d4e-9d09-d040ee9323d1-kube-api-access-2rk9n\") pod \"glance-default-external-api-0\" (UID: \"d125baba-09b1-4d4e-9d09-d040ee9323d1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.662824 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d125baba-09b1-4d4e-9d09-d040ee9323d1-config-data\") pod \"glance-default-external-api-0\" (UID: \"d125baba-09b1-4d4e-9d09-d040ee9323d1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.662851 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-303d4d28-face-45fe-b658-7e12a6040182\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-303d4d28-face-45fe-b658-7e12a6040182\") pod \"glance-default-external-api-0\" (UID: \"d125baba-09b1-4d4e-9d09-d040ee9323d1\") " 
pod="openstack/glance-default-external-api-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.662880 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-scripts\") pod \"ceilometer-0\" (UID: \"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b\") " pod="openstack/ceilometer-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.662909 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b\") " pod="openstack/ceilometer-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.662929 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d125baba-09b1-4d4e-9d09-d040ee9323d1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d125baba-09b1-4d4e-9d09-d040ee9323d1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.662955 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d125baba-09b1-4d4e-9d09-d040ee9323d1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d125baba-09b1-4d4e-9d09-d040ee9323d1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.662978 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d125baba-09b1-4d4e-9d09-d040ee9323d1-logs\") pod \"glance-default-external-api-0\" (UID: \"d125baba-09b1-4d4e-9d09-d040ee9323d1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.663033 4672 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b\") " pod="openstack/ceilometer-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.664078 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d125baba-09b1-4d4e-9d09-d040ee9323d1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d125baba-09b1-4d4e-9d09-d040ee9323d1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.664988 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-config-data\") pod \"ceilometer-0\" (UID: \"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b\") " pod="openstack/ceilometer-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.665900 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d125baba-09b1-4d4e-9d09-d040ee9323d1-logs\") pod \"glance-default-external-api-0\" (UID: \"d125baba-09b1-4d4e-9d09-d040ee9323d1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.666138 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-log-httpd\") pod \"ceilometer-0\" (UID: \"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b\") " pod="openstack/ceilometer-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.666202 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-run-httpd\") pod \"ceilometer-0\" (UID: \"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b\") 
" pod="openstack/ceilometer-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.666529 4672 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4640eeb0-bf75-4e1b-a291-964288b3ecb1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.671819 4672 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.671846 4672 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-303d4d28-face-45fe-b658-7e12a6040182\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-303d4d28-face-45fe-b658-7e12a6040182\") pod \"glance-default-external-api-0\" (UID: \"d125baba-09b1-4d4e-9d09-d040ee9323d1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d02cc0527f533ee65b155f740f514c1487916eea5ba6e0a075365c01e7203db4/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.671930 4672 scope.go:117] "RemoveContainer" containerID="55a2f0e4b37b6e17d04f9873a319170827fcc78003e5c7c76ad8375346ca3b3e" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.672263 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7" (OuterVolumeSpecName: "glance") pod "4640eeb0-bf75-4e1b-a291-964288b3ecb1" (UID: "4640eeb0-bf75-4e1b-a291-964288b3ecb1"). InnerVolumeSpecName "pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.672914 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-scripts\") pod \"ceilometer-0\" (UID: \"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b\") " pod="openstack/ceilometer-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.673024 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4640eeb0-bf75-4e1b-a291-964288b3ecb1-config-data" (OuterVolumeSpecName: "config-data") pod "4640eeb0-bf75-4e1b-a291-964288b3ecb1" (UID: "4640eeb0-bf75-4e1b-a291-964288b3ecb1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.675408 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b\") " pod="openstack/ceilometer-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.675448 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d125baba-09b1-4d4e-9d09-d040ee9323d1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d125baba-09b1-4d4e-9d09-d040ee9323d1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.676416 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d125baba-09b1-4d4e-9d09-d040ee9323d1-scripts\") pod \"glance-default-external-api-0\" (UID: \"d125baba-09b1-4d4e-9d09-d040ee9323d1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.676797 4672 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b\") " pod="openstack/ceilometer-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.677092 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d125baba-09b1-4d4e-9d09-d040ee9323d1-config-data\") pod \"glance-default-external-api-0\" (UID: \"d125baba-09b1-4d4e-9d09-d040ee9323d1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.680443 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-config-data\") pod \"ceilometer-0\" (UID: \"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b\") " pod="openstack/ceilometer-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.683425 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d125baba-09b1-4d4e-9d09-d040ee9323d1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d125baba-09b1-4d4e-9d09-d040ee9323d1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.702212 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rk9n\" (UniqueName: \"kubernetes.io/projected/d125baba-09b1-4d4e-9d09-d040ee9323d1-kube-api-access-2rk9n\") pod \"glance-default-external-api-0\" (UID: \"d125baba-09b1-4d4e-9d09-d040ee9323d1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.710185 4672 scope.go:117] "RemoveContainer" containerID="5b2cbea1afc020385cf8f7fca1f19050ede9a7becdf554b76f685bc785707433" Feb 17 16:24:47 crc 
kubenswrapper[4672]: I0217 16:24:47.716227 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9b45\" (UniqueName: \"kubernetes.io/projected/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-kube-api-access-f9b45\") pod \"ceilometer-0\" (UID: \"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b\") " pod="openstack/ceilometer-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.724208 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.742434 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.744188 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-303d4d28-face-45fe-b658-7e12a6040182\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-303d4d28-face-45fe-b658-7e12a6040182\") pod \"glance-default-external-api-0\" (UID: \"d125baba-09b1-4d4e-9d09-d040ee9323d1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.768049 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4640eeb0-bf75-4e1b-a291-964288b3ecb1-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.768092 4672 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7\") on node \"crc\" " Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.805609 4672 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.805774 4672 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7") on node "crc" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.843292 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.872027 4672 reconciler_common.go:293] "Volume detached for volume \"pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.960214 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="261e82ba-d901-48e9-9890-768595c3e9df" path="/var/lib/kubelet/pods/261e82ba-d901-48e9-9890-768595c3e9df/volumes" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.961236 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cb035c-108b-40c2-82fa-bc9db8599b1a" path="/var/lib/kubelet/pods/b6cb035c-108b-40c2-82fa-bc9db8599b1a/volumes" Feb 17 16:24:47 crc kubenswrapper[4672]: I0217 16:24:47.962362 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7309196-390c-4dc4-b9a0-a88f48e270db" path="/var/lib/kubelet/pods/c7309196-390c-4dc4-b9a0-a88f48e270db/volumes" Feb 17 16:24:48 crc kubenswrapper[4672]: I0217 16:24:48.218030 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:24:48 crc kubenswrapper[4672]: I0217 16:24:48.218070 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4640eeb0-bf75-4e1b-a291-964288b3ecb1","Type":"ContainerDied","Data":"3f782825d9bd6901043337bcedaaa165090660c76e21967f3120c4f6ef5ced19"} Feb 17 16:24:48 crc kubenswrapper[4672]: I0217 16:24:48.218447 4672 scope.go:117] "RemoveContainer" containerID="dfdd97b715abd1e15945c0e67790e24f6002e4e322d35ac6e781335fb439eb6e" Feb 17 16:24:48 crc kubenswrapper[4672]: I0217 16:24:48.255612 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:24:48 crc kubenswrapper[4672]: I0217 16:24:48.255802 4672 scope.go:117] "RemoveContainer" containerID="5d2d94cfbfd60ba5d23a98ac38e4f21d1c6147c03fd72322065afe51d172d515" Feb 17 16:24:48 crc kubenswrapper[4672]: I0217 16:24:48.265070 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:24:48 crc kubenswrapper[4672]: I0217 16:24:48.273349 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:24:48 crc kubenswrapper[4672]: I0217 16:24:48.280518 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:24:48 crc kubenswrapper[4672]: I0217 16:24:48.282553 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:24:48 crc kubenswrapper[4672]: I0217 16:24:48.287937 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 17 16:24:48 crc kubenswrapper[4672]: I0217 16:24:48.288076 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 17 16:24:48 crc kubenswrapper[4672]: W0217 16:24:48.297944 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod884cd77d_f3d0_44d2_b4ec_bd53e2e5978b.slice/crio-a2bfed32bc3f07a31c2ac210de005e1d81301e5f6c63e426d60db1d1f84d1c84 WatchSource:0}: Error finding container a2bfed32bc3f07a31c2ac210de005e1d81301e5f6c63e426d60db1d1f84d1c84: Status 404 returned error can't find the container with id a2bfed32bc3f07a31c2ac210de005e1d81301e5f6c63e426d60db1d1f84d1c84 Feb 17 16:24:48 crc kubenswrapper[4672]: I0217 16:24:48.299207 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:24:48 crc kubenswrapper[4672]: I0217 16:24:48.471037 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:24:48 crc kubenswrapper[4672]: I0217 16:24:48.489177 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb8b7b86-c10a-486b-aec2-87475a3af44f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cb8b7b86-c10a-486b-aec2-87475a3af44f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:48 crc kubenswrapper[4672]: I0217 16:24:48.489217 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb8b7b86-c10a-486b-aec2-87475a3af44f-internal-tls-certs\") pod 
\"glance-default-internal-api-0\" (UID: \"cb8b7b86-c10a-486b-aec2-87475a3af44f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:48 crc kubenswrapper[4672]: I0217 16:24:48.489248 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb8b7b86-c10a-486b-aec2-87475a3af44f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cb8b7b86-c10a-486b-aec2-87475a3af44f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:48 crc kubenswrapper[4672]: I0217 16:24:48.489289 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb8b7b86-c10a-486b-aec2-87475a3af44f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cb8b7b86-c10a-486b-aec2-87475a3af44f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:48 crc kubenswrapper[4672]: I0217 16:24:48.489314 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cb8b7b86-c10a-486b-aec2-87475a3af44f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cb8b7b86-c10a-486b-aec2-87475a3af44f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:48 crc kubenswrapper[4672]: I0217 16:24:48.489342 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7\") pod \"glance-default-internal-api-0\" (UID: \"cb8b7b86-c10a-486b-aec2-87475a3af44f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:48 crc kubenswrapper[4672]: I0217 16:24:48.489358 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/cb8b7b86-c10a-486b-aec2-87475a3af44f-logs\") pod \"glance-default-internal-api-0\" (UID: \"cb8b7b86-c10a-486b-aec2-87475a3af44f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:48 crc kubenswrapper[4672]: I0217 16:24:48.489395 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67888\" (UniqueName: \"kubernetes.io/projected/cb8b7b86-c10a-486b-aec2-87475a3af44f-kube-api-access-67888\") pod \"glance-default-internal-api-0\" (UID: \"cb8b7b86-c10a-486b-aec2-87475a3af44f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:48 crc kubenswrapper[4672]: I0217 16:24:48.591256 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7\") pod \"glance-default-internal-api-0\" (UID: \"cb8b7b86-c10a-486b-aec2-87475a3af44f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:48 crc kubenswrapper[4672]: I0217 16:24:48.591296 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb8b7b86-c10a-486b-aec2-87475a3af44f-logs\") pod \"glance-default-internal-api-0\" (UID: \"cb8b7b86-c10a-486b-aec2-87475a3af44f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:48 crc kubenswrapper[4672]: I0217 16:24:48.591348 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67888\" (UniqueName: \"kubernetes.io/projected/cb8b7b86-c10a-486b-aec2-87475a3af44f-kube-api-access-67888\") pod \"glance-default-internal-api-0\" (UID: \"cb8b7b86-c10a-486b-aec2-87475a3af44f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:48 crc kubenswrapper[4672]: I0217 16:24:48.591435 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb8b7b86-c10a-486b-aec2-87475a3af44f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cb8b7b86-c10a-486b-aec2-87475a3af44f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:48 crc kubenswrapper[4672]: I0217 16:24:48.591457 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb8b7b86-c10a-486b-aec2-87475a3af44f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cb8b7b86-c10a-486b-aec2-87475a3af44f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:48 crc kubenswrapper[4672]: I0217 16:24:48.591481 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb8b7b86-c10a-486b-aec2-87475a3af44f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cb8b7b86-c10a-486b-aec2-87475a3af44f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:48 crc kubenswrapper[4672]: I0217 16:24:48.591550 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb8b7b86-c10a-486b-aec2-87475a3af44f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cb8b7b86-c10a-486b-aec2-87475a3af44f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:48 crc kubenswrapper[4672]: I0217 16:24:48.591586 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cb8b7b86-c10a-486b-aec2-87475a3af44f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cb8b7b86-c10a-486b-aec2-87475a3af44f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:48 crc kubenswrapper[4672]: I0217 16:24:48.591950 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/cb8b7b86-c10a-486b-aec2-87475a3af44f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cb8b7b86-c10a-486b-aec2-87475a3af44f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:48 crc kubenswrapper[4672]: I0217 16:24:48.591947 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb8b7b86-c10a-486b-aec2-87475a3af44f-logs\") pod \"glance-default-internal-api-0\" (UID: \"cb8b7b86-c10a-486b-aec2-87475a3af44f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:48 crc kubenswrapper[4672]: I0217 16:24:48.603547 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb8b7b86-c10a-486b-aec2-87475a3af44f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cb8b7b86-c10a-486b-aec2-87475a3af44f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:48 crc kubenswrapper[4672]: I0217 16:24:48.610718 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb8b7b86-c10a-486b-aec2-87475a3af44f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cb8b7b86-c10a-486b-aec2-87475a3af44f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:48 crc kubenswrapper[4672]: I0217 16:24:48.611559 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb8b7b86-c10a-486b-aec2-87475a3af44f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cb8b7b86-c10a-486b-aec2-87475a3af44f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:48 crc kubenswrapper[4672]: I0217 16:24:48.618235 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb8b7b86-c10a-486b-aec2-87475a3af44f-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"cb8b7b86-c10a-486b-aec2-87475a3af44f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:48 crc kubenswrapper[4672]: I0217 16:24:48.619716 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67888\" (UniqueName: \"kubernetes.io/projected/cb8b7b86-c10a-486b-aec2-87475a3af44f-kube-api-access-67888\") pod \"glance-default-internal-api-0\" (UID: \"cb8b7b86-c10a-486b-aec2-87475a3af44f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:48 crc kubenswrapper[4672]: I0217 16:24:48.623430 4672 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:24:48 crc kubenswrapper[4672]: I0217 16:24:48.623472 4672 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7\") pod \"glance-default-internal-api-0\" (UID: \"cb8b7b86-c10a-486b-aec2-87475a3af44f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f554e28ce6891cf21f3390de6086eedf40118aa722324f2faa0d19b98e9f8a02/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 17 16:24:48 crc kubenswrapper[4672]: I0217 16:24:48.693937 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d48c035-9289-4ccc-b714-8e32aee74eb7\") pod \"glance-default-internal-api-0\" (UID: \"cb8b7b86-c10a-486b-aec2-87475a3af44f\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:24:48 crc kubenswrapper[4672]: I0217 16:24:48.925204 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:24:49 crc kubenswrapper[4672]: I0217 16:24:49.242450 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b","Type":"ContainerStarted","Data":"a2bfed32bc3f07a31c2ac210de005e1d81301e5f6c63e426d60db1d1f84d1c84"} Feb 17 16:24:49 crc kubenswrapper[4672]: I0217 16:24:49.243733 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d125baba-09b1-4d4e-9d09-d040ee9323d1","Type":"ContainerStarted","Data":"2d87b81149efffed7d4f38891e9bb670b76fd2b039f879ad77addb291bf2269d"} Feb 17 16:24:49 crc kubenswrapper[4672]: I0217 16:24:49.572886 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:24:49 crc kubenswrapper[4672]: W0217 16:24:49.583148 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb8b7b86_c10a_486b_aec2_87475a3af44f.slice/crio-830078b07abad2f52f5b8ebe43bba091b45cffca60d7f983529c7037f8b8a1ec WatchSource:0}: Error finding container 830078b07abad2f52f5b8ebe43bba091b45cffca60d7f983529c7037f8b8a1ec: Status 404 returned error can't find the container with id 830078b07abad2f52f5b8ebe43bba091b45cffca60d7f983529c7037f8b8a1ec Feb 17 16:24:49 crc kubenswrapper[4672]: I0217 16:24:49.894211 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:24:49 crc kubenswrapper[4672]: I0217 16:24:49.967895 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4640eeb0-bf75-4e1b-a291-964288b3ecb1" path="/var/lib/kubelet/pods/4640eeb0-bf75-4e1b-a291-964288b3ecb1/volumes" Feb 17 16:24:50 crc kubenswrapper[4672]: I0217 16:24:50.264141 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"cb8b7b86-c10a-486b-aec2-87475a3af44f","Type":"ContainerStarted","Data":"830078b07abad2f52f5b8ebe43bba091b45cffca60d7f983529c7037f8b8a1ec"} Feb 17 16:24:50 crc kubenswrapper[4672]: I0217 16:24:50.266567 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b","Type":"ContainerStarted","Data":"ae70710761e13ae8498a8a4071662368349ac2df6155b57c344606bd2cdda2a1"} Feb 17 16:24:50 crc kubenswrapper[4672]: I0217 16:24:50.269209 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d125baba-09b1-4d4e-9d09-d040ee9323d1","Type":"ContainerStarted","Data":"9cf96b10c31fa5b3ff3d777b7285f549c12d9cefbc214a6cb6dc224598010b04"} Feb 17 16:24:51 crc kubenswrapper[4672]: I0217 16:24:51.297134 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b","Type":"ContainerStarted","Data":"f5682e9918435f796b983b0b4adb03ae9af2fdf8a8096f7b8554d861ad6ac9a4"} Feb 17 16:24:51 crc kubenswrapper[4672]: I0217 16:24:51.301190 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d125baba-09b1-4d4e-9d09-d040ee9323d1","Type":"ContainerStarted","Data":"677523cb679e7d5d2cf78f26b0fdfb9dba62aa011af9a5337b6c9c161caa6f5c"} Feb 17 16:24:51 crc kubenswrapper[4672]: I0217 16:24:51.304684 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cb8b7b86-c10a-486b-aec2-87475a3af44f","Type":"ContainerStarted","Data":"f2069bb933c469bef33d230cbb157489c7b31e32d5bfebf4a7ce4e0bdd15cc12"} Feb 17 16:24:51 crc kubenswrapper[4672]: I0217 16:24:51.304769 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"cb8b7b86-c10a-486b-aec2-87475a3af44f","Type":"ContainerStarted","Data":"7bb4776e0bd52fe0738cedeaf8d1aab34b6a4fc259dc7049a700fcc10c50b501"} Feb 17 16:24:51 crc kubenswrapper[4672]: I0217 16:24:51.338220 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.338181741 podStartE2EDuration="4.338181741s" podCreationTimestamp="2026-02-17 16:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:24:51.325562338 +0000 UTC m=+1300.079651100" watchObservedRunningTime="2026-02-17 16:24:51.338181741 +0000 UTC m=+1300.092270473" Feb 17 16:24:51 crc kubenswrapper[4672]: I0217 16:24:51.355490 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.355470636 podStartE2EDuration="3.355470636s" podCreationTimestamp="2026-02-17 16:24:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:24:51.348352299 +0000 UTC m=+1300.102441021" watchObservedRunningTime="2026-02-17 16:24:51.355470636 +0000 UTC m=+1300.109559368" Feb 17 16:24:52 crc kubenswrapper[4672]: I0217 16:24:52.317564 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b","Type":"ContainerStarted","Data":"766f677d17e92ddc68ec3570f49102457fef07fc8c4e987462374b141a0c5040"} Feb 17 16:24:53 crc kubenswrapper[4672]: I0217 16:24:53.327202 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b","Type":"ContainerStarted","Data":"4ab30193e5c0c3caa90691d41851f7a28ad5072965b0d73c18a7a1af2decf462"} Feb 17 16:24:53 crc kubenswrapper[4672]: I0217 16:24:53.327320 4672 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/ceilometer-0" podUID="884cd77d-f3d0-44d2-b4ec-bd53e2e5978b" containerName="ceilometer-central-agent" containerID="cri-o://ae70710761e13ae8498a8a4071662368349ac2df6155b57c344606bd2cdda2a1" gracePeriod=30 Feb 17 16:24:53 crc kubenswrapper[4672]: I0217 16:24:53.327399 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="884cd77d-f3d0-44d2-b4ec-bd53e2e5978b" containerName="sg-core" containerID="cri-o://766f677d17e92ddc68ec3570f49102457fef07fc8c4e987462374b141a0c5040" gracePeriod=30 Feb 17 16:24:53 crc kubenswrapper[4672]: I0217 16:24:53.327445 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="884cd77d-f3d0-44d2-b4ec-bd53e2e5978b" containerName="proxy-httpd" containerID="cri-o://4ab30193e5c0c3caa90691d41851f7a28ad5072965b0d73c18a7a1af2decf462" gracePeriod=30 Feb 17 16:24:53 crc kubenswrapper[4672]: I0217 16:24:53.327803 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 16:24:53 crc kubenswrapper[4672]: I0217 16:24:53.327572 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="884cd77d-f3d0-44d2-b4ec-bd53e2e5978b" containerName="ceilometer-notification-agent" containerID="cri-o://f5682e9918435f796b983b0b4adb03ae9af2fdf8a8096f7b8554d861ad6ac9a4" gracePeriod=30 Feb 17 16:24:53 crc kubenswrapper[4672]: I0217 16:24:53.358314 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.8390173349999999 podStartE2EDuration="6.358293386s" podCreationTimestamp="2026-02-17 16:24:47 +0000 UTC" firstStartedPulling="2026-02-17 16:24:48.333733965 +0000 UTC m=+1297.087822687" lastFinishedPulling="2026-02-17 16:24:52.853009996 +0000 UTC m=+1301.607098738" observedRunningTime="2026-02-17 16:24:53.347092601 +0000 UTC m=+1302.101181343" 
watchObservedRunningTime="2026-02-17 16:24:53.358293386 +0000 UTC m=+1302.112382118" Feb 17 16:24:53 crc kubenswrapper[4672]: I0217 16:24:53.807990 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-vkcbs"] Feb 17 16:24:53 crc kubenswrapper[4672]: I0217 16:24:53.809385 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vkcbs" Feb 17 16:24:53 crc kubenswrapper[4672]: I0217 16:24:53.835132 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-vkcbs"] Feb 17 16:24:53 crc kubenswrapper[4672]: I0217 16:24:53.969123 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9b97\" (UniqueName: \"kubernetes.io/projected/abb9a2a6-5ab3-4c3a-8d4e-523a92a3e6b9-kube-api-access-b9b97\") pod \"nova-api-db-create-vkcbs\" (UID: \"abb9a2a6-5ab3-4c3a-8d4e-523a92a3e6b9\") " pod="openstack/nova-api-db-create-vkcbs" Feb 17 16:24:53 crc kubenswrapper[4672]: I0217 16:24:53.969564 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abb9a2a6-5ab3-4c3a-8d4e-523a92a3e6b9-operator-scripts\") pod \"nova-api-db-create-vkcbs\" (UID: \"abb9a2a6-5ab3-4c3a-8d4e-523a92a3e6b9\") " pod="openstack/nova-api-db-create-vkcbs" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.019889 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-wtfr9"] Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.021827 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-wtfr9" Feb 17 16:24:54 crc kubenswrapper[4672]: E0217 16:24:54.025074 4672 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6ad1f8c_ef18_4bd8_ac43_b8f1151277f6.slice/crio-4fede41a6b3d442704cc0b64a71cfcde9ecee5251694f4b5e0c64343367e5adb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6ad1f8c_ef18_4bd8_ac43_b8f1151277f6.slice\": RecentStats: unable to find data in memory cache]" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.071541 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9b97\" (UniqueName: \"kubernetes.io/projected/abb9a2a6-5ab3-4c3a-8d4e-523a92a3e6b9-kube-api-access-b9b97\") pod \"nova-api-db-create-vkcbs\" (UID: \"abb9a2a6-5ab3-4c3a-8d4e-523a92a3e6b9\") " pod="openstack/nova-api-db-create-vkcbs" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.074245 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abb9a2a6-5ab3-4c3a-8d4e-523a92a3e6b9-operator-scripts\") pod \"nova-api-db-create-vkcbs\" (UID: \"abb9a2a6-5ab3-4c3a-8d4e-523a92a3e6b9\") " pod="openstack/nova-api-db-create-vkcbs" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.075069 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abb9a2a6-5ab3-4c3a-8d4e-523a92a3e6b9-operator-scripts\") pod \"nova-api-db-create-vkcbs\" (UID: \"abb9a2a6-5ab3-4c3a-8d4e-523a92a3e6b9\") " pod="openstack/nova-api-db-create-vkcbs" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.073171 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-wtfr9"] Feb 17 16:24:54 crc 
kubenswrapper[4672]: I0217 16:24:54.108472 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9b97\" (UniqueName: \"kubernetes.io/projected/abb9a2a6-5ab3-4c3a-8d4e-523a92a3e6b9-kube-api-access-b9b97\") pod \"nova-api-db-create-vkcbs\" (UID: \"abb9a2a6-5ab3-4c3a-8d4e-523a92a3e6b9\") " pod="openstack/nova-api-db-create-vkcbs" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.138961 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vkcbs" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.143834 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-84dtj"] Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.145688 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-84dtj" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.171762 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-a99c-account-create-update-f7kt2"] Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.173338 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a99c-account-create-update-f7kt2" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.178045 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.186104 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2f5d855-0026-4e76-969c-87603f5fe608-operator-scripts\") pod \"nova-cell0-db-create-wtfr9\" (UID: \"a2f5d855-0026-4e76-969c-87603f5fe608\") " pod="openstack/nova-cell0-db-create-wtfr9" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.186360 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm7kp\" (UniqueName: \"kubernetes.io/projected/a2f5d855-0026-4e76-969c-87603f5fe608-kube-api-access-fm7kp\") pod \"nova-cell0-db-create-wtfr9\" (UID: \"a2f5d855-0026-4e76-969c-87603f5fe608\") " pod="openstack/nova-cell0-db-create-wtfr9" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.194971 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-84dtj"] Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.210182 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a99c-account-create-update-f7kt2"] Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.289154 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2f5d855-0026-4e76-969c-87603f5fe608-operator-scripts\") pod \"nova-cell0-db-create-wtfr9\" (UID: \"a2f5d855-0026-4e76-969c-87603f5fe608\") " pod="openstack/nova-cell0-db-create-wtfr9" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.289598 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/8a5a3d1d-ff96-486d-bb60-b8c390c738e9-operator-scripts\") pod \"nova-cell1-db-create-84dtj\" (UID: \"8a5a3d1d-ff96-486d-bb60-b8c390c738e9\") " pod="openstack/nova-cell1-db-create-84dtj" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.289665 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6chr2\" (UniqueName: \"kubernetes.io/projected/8a5a3d1d-ff96-486d-bb60-b8c390c738e9-kube-api-access-6chr2\") pod \"nova-cell1-db-create-84dtj\" (UID: \"8a5a3d1d-ff96-486d-bb60-b8c390c738e9\") " pod="openstack/nova-cell1-db-create-84dtj" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.289714 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm7kp\" (UniqueName: \"kubernetes.io/projected/a2f5d855-0026-4e76-969c-87603f5fe608-kube-api-access-fm7kp\") pod \"nova-cell0-db-create-wtfr9\" (UID: \"a2f5d855-0026-4e76-969c-87603f5fe608\") " pod="openstack/nova-cell0-db-create-wtfr9" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.289772 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90804f80-91b4-44bb-b3c1-04c56c687c65-operator-scripts\") pod \"nova-api-a99c-account-create-update-f7kt2\" (UID: \"90804f80-91b4-44bb-b3c1-04c56c687c65\") " pod="openstack/nova-api-a99c-account-create-update-f7kt2" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.289836 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxsl4\" (UniqueName: \"kubernetes.io/projected/90804f80-91b4-44bb-b3c1-04c56c687c65-kube-api-access-mxsl4\") pod \"nova-api-a99c-account-create-update-f7kt2\" (UID: \"90804f80-91b4-44bb-b3c1-04c56c687c65\") " pod="openstack/nova-api-a99c-account-create-update-f7kt2" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.290556 4672 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2f5d855-0026-4e76-969c-87603f5fe608-operator-scripts\") pod \"nova-cell0-db-create-wtfr9\" (UID: \"a2f5d855-0026-4e76-969c-87603f5fe608\") " pod="openstack/nova-cell0-db-create-wtfr9" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.326430 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-e3bf-account-create-update-5gj9s"] Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.327985 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-e3bf-account-create-update-5gj9s" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.329681 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-e3bf-account-create-update-5gj9s"] Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.333785 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.334201 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm7kp\" (UniqueName: \"kubernetes.io/projected/a2f5d855-0026-4e76-969c-87603f5fe608-kube-api-access-fm7kp\") pod \"nova-cell0-db-create-wtfr9\" (UID: \"a2f5d855-0026-4e76-969c-87603f5fe608\") " pod="openstack/nova-cell0-db-create-wtfr9" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.354113 4672 generic.go:334] "Generic (PLEG): container finished" podID="884cd77d-f3d0-44d2-b4ec-bd53e2e5978b" containerID="4ab30193e5c0c3caa90691d41851f7a28ad5072965b0d73c18a7a1af2decf462" exitCode=0 Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.354150 4672 generic.go:334] "Generic (PLEG): container finished" podID="884cd77d-f3d0-44d2-b4ec-bd53e2e5978b" containerID="766f677d17e92ddc68ec3570f49102457fef07fc8c4e987462374b141a0c5040" exitCode=2 Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.354160 
4672 generic.go:334] "Generic (PLEG): container finished" podID="884cd77d-f3d0-44d2-b4ec-bd53e2e5978b" containerID="f5682e9918435f796b983b0b4adb03ae9af2fdf8a8096f7b8554d861ad6ac9a4" exitCode=0 Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.354186 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b","Type":"ContainerDied","Data":"4ab30193e5c0c3caa90691d41851f7a28ad5072965b0d73c18a7a1af2decf462"} Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.354220 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b","Type":"ContainerDied","Data":"766f677d17e92ddc68ec3570f49102457fef07fc8c4e987462374b141a0c5040"} Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.354233 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b","Type":"ContainerDied","Data":"f5682e9918435f796b983b0b4adb03ae9af2fdf8a8096f7b8554d861ad6ac9a4"} Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.359496 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-wtfr9" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.391230 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxsl4\" (UniqueName: \"kubernetes.io/projected/90804f80-91b4-44bb-b3c1-04c56c687c65-kube-api-access-mxsl4\") pod \"nova-api-a99c-account-create-update-f7kt2\" (UID: \"90804f80-91b4-44bb-b3c1-04c56c687c65\") " pod="openstack/nova-api-a99c-account-create-update-f7kt2" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.391344 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a5a3d1d-ff96-486d-bb60-b8c390c738e9-operator-scripts\") pod \"nova-cell1-db-create-84dtj\" (UID: \"8a5a3d1d-ff96-486d-bb60-b8c390c738e9\") " pod="openstack/nova-cell1-db-create-84dtj" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.391382 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6chr2\" (UniqueName: \"kubernetes.io/projected/8a5a3d1d-ff96-486d-bb60-b8c390c738e9-kube-api-access-6chr2\") pod \"nova-cell1-db-create-84dtj\" (UID: \"8a5a3d1d-ff96-486d-bb60-b8c390c738e9\") " pod="openstack/nova-cell1-db-create-84dtj" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.391429 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90804f80-91b4-44bb-b3c1-04c56c687c65-operator-scripts\") pod \"nova-api-a99c-account-create-update-f7kt2\" (UID: \"90804f80-91b4-44bb-b3c1-04c56c687c65\") " pod="openstack/nova-api-a99c-account-create-update-f7kt2" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.392005 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90804f80-91b4-44bb-b3c1-04c56c687c65-operator-scripts\") pod 
\"nova-api-a99c-account-create-update-f7kt2\" (UID: \"90804f80-91b4-44bb-b3c1-04c56c687c65\") " pod="openstack/nova-api-a99c-account-create-update-f7kt2" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.393839 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a5a3d1d-ff96-486d-bb60-b8c390c738e9-operator-scripts\") pod \"nova-cell1-db-create-84dtj\" (UID: \"8a5a3d1d-ff96-486d-bb60-b8c390c738e9\") " pod="openstack/nova-cell1-db-create-84dtj" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.416822 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6chr2\" (UniqueName: \"kubernetes.io/projected/8a5a3d1d-ff96-486d-bb60-b8c390c738e9-kube-api-access-6chr2\") pod \"nova-cell1-db-create-84dtj\" (UID: \"8a5a3d1d-ff96-486d-bb60-b8c390c738e9\") " pod="openstack/nova-cell1-db-create-84dtj" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.416828 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxsl4\" (UniqueName: \"kubernetes.io/projected/90804f80-91b4-44bb-b3c1-04c56c687c65-kube-api-access-mxsl4\") pod \"nova-api-a99c-account-create-update-f7kt2\" (UID: \"90804f80-91b4-44bb-b3c1-04c56c687c65\") " pod="openstack/nova-api-a99c-account-create-update-f7kt2" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.457585 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-6cbf-account-create-update-2hf29"] Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.458962 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-6cbf-account-create-update-2hf29" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.461173 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.487245 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-6cbf-account-create-update-2hf29"] Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.493551 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d00fc6a7-229f-4a5f-af9a-8b39b110b5ad-operator-scripts\") pod \"nova-cell0-e3bf-account-create-update-5gj9s\" (UID: \"d00fc6a7-229f-4a5f-af9a-8b39b110b5ad\") " pod="openstack/nova-cell0-e3bf-account-create-update-5gj9s" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.493617 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6xnn\" (UniqueName: \"kubernetes.io/projected/d00fc6a7-229f-4a5f-af9a-8b39b110b5ad-kube-api-access-j6xnn\") pod \"nova-cell0-e3bf-account-create-update-5gj9s\" (UID: \"d00fc6a7-229f-4a5f-af9a-8b39b110b5ad\") " pod="openstack/nova-cell0-e3bf-account-create-update-5gj9s" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.549588 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-84dtj" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.552932 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a99c-account-create-update-f7kt2" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.595755 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnjbr\" (UniqueName: \"kubernetes.io/projected/b947707f-716d-48ae-9151-a27658bc5a91-kube-api-access-nnjbr\") pod \"nova-cell1-6cbf-account-create-update-2hf29\" (UID: \"b947707f-716d-48ae-9151-a27658bc5a91\") " pod="openstack/nova-cell1-6cbf-account-create-update-2hf29" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.595857 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b947707f-716d-48ae-9151-a27658bc5a91-operator-scripts\") pod \"nova-cell1-6cbf-account-create-update-2hf29\" (UID: \"b947707f-716d-48ae-9151-a27658bc5a91\") " pod="openstack/nova-cell1-6cbf-account-create-update-2hf29" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.595895 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d00fc6a7-229f-4a5f-af9a-8b39b110b5ad-operator-scripts\") pod \"nova-cell0-e3bf-account-create-update-5gj9s\" (UID: \"d00fc6a7-229f-4a5f-af9a-8b39b110b5ad\") " pod="openstack/nova-cell0-e3bf-account-create-update-5gj9s" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.595942 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6xnn\" (UniqueName: \"kubernetes.io/projected/d00fc6a7-229f-4a5f-af9a-8b39b110b5ad-kube-api-access-j6xnn\") pod \"nova-cell0-e3bf-account-create-update-5gj9s\" (UID: \"d00fc6a7-229f-4a5f-af9a-8b39b110b5ad\") " pod="openstack/nova-cell0-e3bf-account-create-update-5gj9s" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.596606 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/d00fc6a7-229f-4a5f-af9a-8b39b110b5ad-operator-scripts\") pod \"nova-cell0-e3bf-account-create-update-5gj9s\" (UID: \"d00fc6a7-229f-4a5f-af9a-8b39b110b5ad\") " pod="openstack/nova-cell0-e3bf-account-create-update-5gj9s" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.615221 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6xnn\" (UniqueName: \"kubernetes.io/projected/d00fc6a7-229f-4a5f-af9a-8b39b110b5ad-kube-api-access-j6xnn\") pod \"nova-cell0-e3bf-account-create-update-5gj9s\" (UID: \"d00fc6a7-229f-4a5f-af9a-8b39b110b5ad\") " pod="openstack/nova-cell0-e3bf-account-create-update-5gj9s" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.690576 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-e3bf-account-create-update-5gj9s" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.707835 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnjbr\" (UniqueName: \"kubernetes.io/projected/b947707f-716d-48ae-9151-a27658bc5a91-kube-api-access-nnjbr\") pod \"nova-cell1-6cbf-account-create-update-2hf29\" (UID: \"b947707f-716d-48ae-9151-a27658bc5a91\") " pod="openstack/nova-cell1-6cbf-account-create-update-2hf29" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.708021 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b947707f-716d-48ae-9151-a27658bc5a91-operator-scripts\") pod \"nova-cell1-6cbf-account-create-update-2hf29\" (UID: \"b947707f-716d-48ae-9151-a27658bc5a91\") " pod="openstack/nova-cell1-6cbf-account-create-update-2hf29" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.710531 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b947707f-716d-48ae-9151-a27658bc5a91-operator-scripts\") pod 
\"nova-cell1-6cbf-account-create-update-2hf29\" (UID: \"b947707f-716d-48ae-9151-a27658bc5a91\") " pod="openstack/nova-cell1-6cbf-account-create-update-2hf29" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.733577 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnjbr\" (UniqueName: \"kubernetes.io/projected/b947707f-716d-48ae-9151-a27658bc5a91-kube-api-access-nnjbr\") pod \"nova-cell1-6cbf-account-create-update-2hf29\" (UID: \"b947707f-716d-48ae-9151-a27658bc5a91\") " pod="openstack/nova-cell1-6cbf-account-create-update-2hf29" Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.814842 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-vkcbs"] Feb 17 16:24:54 crc kubenswrapper[4672]: W0217 16:24:54.828605 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabb9a2a6_5ab3_4c3a_8d4e_523a92a3e6b9.slice/crio-4c29e8c844703547841f3fc44770bdd7b799d9f757800a4cbc74be8cb547e28f WatchSource:0}: Error finding container 4c29e8c844703547841f3fc44770bdd7b799d9f757800a4cbc74be8cb547e28f: Status 404 returned error can't find the container with id 4c29e8c844703547841f3fc44770bdd7b799d9f757800a4cbc74be8cb547e28f Feb 17 16:24:54 crc kubenswrapper[4672]: I0217 16:24:54.841262 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-6cbf-account-create-update-2hf29" Feb 17 16:24:55 crc kubenswrapper[4672]: I0217 16:24:55.028843 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-wtfr9"] Feb 17 16:24:55 crc kubenswrapper[4672]: I0217 16:24:55.269599 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a99c-account-create-update-f7kt2"] Feb 17 16:24:55 crc kubenswrapper[4672]: I0217 16:24:55.310028 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-e3bf-account-create-update-5gj9s"] Feb 17 16:24:55 crc kubenswrapper[4672]: I0217 16:24:55.314554 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-84dtj"] Feb 17 16:24:55 crc kubenswrapper[4672]: I0217 16:24:55.394502 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wtfr9" event={"ID":"a2f5d855-0026-4e76-969c-87603f5fe608","Type":"ContainerStarted","Data":"e15cdb408b2113a0efca62534226d10d0f394eb65bf4d018046b16597a210adb"} Feb 17 16:24:55 crc kubenswrapper[4672]: I0217 16:24:55.394571 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wtfr9" event={"ID":"a2f5d855-0026-4e76-969c-87603f5fe608","Type":"ContainerStarted","Data":"37f090b24a7a0e9c3337c0c905a40a41ac4e11ffe12c408669c2abe5f39d42f8"} Feb 17 16:24:55 crc kubenswrapper[4672]: I0217 16:24:55.397463 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-84dtj" event={"ID":"8a5a3d1d-ff96-486d-bb60-b8c390c738e9","Type":"ContainerStarted","Data":"22eb4e05f1188de1fcae58cb88ba26932ece6884b40421b64c9033a2599076c7"} Feb 17 16:24:55 crc kubenswrapper[4672]: I0217 16:24:55.398688 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e3bf-account-create-update-5gj9s" 
event={"ID":"d00fc6a7-229f-4a5f-af9a-8b39b110b5ad","Type":"ContainerStarted","Data":"dea6f1a71be44555d19820aae2c3cd5d8097f8070c4ef29e52867c4758e9458e"} Feb 17 16:24:55 crc kubenswrapper[4672]: I0217 16:24:55.402129 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a99c-account-create-update-f7kt2" event={"ID":"90804f80-91b4-44bb-b3c1-04c56c687c65","Type":"ContainerStarted","Data":"5b9ef65b0fbaa2a7065640ae097e0abfeb2f19a3d9e3c1a91e3e96fdc40b7356"} Feb 17 16:24:55 crc kubenswrapper[4672]: I0217 16:24:55.403713 4672 generic.go:334] "Generic (PLEG): container finished" podID="abb9a2a6-5ab3-4c3a-8d4e-523a92a3e6b9" containerID="99152be0a5d138bdb9afcbc9ad4a3d6d4231280e5c4e3754de962fdecc48ecdd" exitCode=0 Feb 17 16:24:55 crc kubenswrapper[4672]: I0217 16:24:55.403738 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vkcbs" event={"ID":"abb9a2a6-5ab3-4c3a-8d4e-523a92a3e6b9","Type":"ContainerDied","Data":"99152be0a5d138bdb9afcbc9ad4a3d6d4231280e5c4e3754de962fdecc48ecdd"} Feb 17 16:24:55 crc kubenswrapper[4672]: I0217 16:24:55.403752 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vkcbs" event={"ID":"abb9a2a6-5ab3-4c3a-8d4e-523a92a3e6b9","Type":"ContainerStarted","Data":"4c29e8c844703547841f3fc44770bdd7b799d9f757800a4cbc74be8cb547e28f"} Feb 17 16:24:55 crc kubenswrapper[4672]: I0217 16:24:55.443784 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-wtfr9" podStartSLOduration=2.443765386 podStartE2EDuration="2.443765386s" podCreationTimestamp="2026-02-17 16:24:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:24:55.424789616 +0000 UTC m=+1304.178878348" watchObservedRunningTime="2026-02-17 16:24:55.443765386 +0000 UTC m=+1304.197854118" Feb 17 16:24:55 crc kubenswrapper[4672]: I0217 16:24:55.535220 4672 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-6cbf-account-create-update-2hf29"] Feb 17 16:24:55 crc kubenswrapper[4672]: W0217 16:24:55.617495 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb947707f_716d_48ae_9151_a27658bc5a91.slice/crio-50c6866ab8a17ac4ee11ccd88f4f37c8eb67d80e553fc3c061ffd8ff053beab2 WatchSource:0}: Error finding container 50c6866ab8a17ac4ee11ccd88f4f37c8eb67d80e553fc3c061ffd8ff053beab2: Status 404 returned error can't find the container with id 50c6866ab8a17ac4ee11ccd88f4f37c8eb67d80e553fc3c061ffd8ff053beab2 Feb 17 16:24:56 crc kubenswrapper[4672]: I0217 16:24:56.415244 4672 generic.go:334] "Generic (PLEG): container finished" podID="d00fc6a7-229f-4a5f-af9a-8b39b110b5ad" containerID="bcde18a66e4281c3d0ad1da47a163fd8192986163cb6c145ea94051ce5ce0488" exitCode=0 Feb 17 16:24:56 crc kubenswrapper[4672]: I0217 16:24:56.415290 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e3bf-account-create-update-5gj9s" event={"ID":"d00fc6a7-229f-4a5f-af9a-8b39b110b5ad","Type":"ContainerDied","Data":"bcde18a66e4281c3d0ad1da47a163fd8192986163cb6c145ea94051ce5ce0488"} Feb 17 16:24:56 crc kubenswrapper[4672]: I0217 16:24:56.417465 4672 generic.go:334] "Generic (PLEG): container finished" podID="90804f80-91b4-44bb-b3c1-04c56c687c65" containerID="da2357b4e4ebc0b9c24c13cb1be6aa1344fddc17e78a794a9b103a054861fdc3" exitCode=0 Feb 17 16:24:56 crc kubenswrapper[4672]: I0217 16:24:56.417637 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a99c-account-create-update-f7kt2" event={"ID":"90804f80-91b4-44bb-b3c1-04c56c687c65","Type":"ContainerDied","Data":"da2357b4e4ebc0b9c24c13cb1be6aa1344fddc17e78a794a9b103a054861fdc3"} Feb 17 16:24:56 crc kubenswrapper[4672]: I0217 16:24:56.419195 4672 generic.go:334] "Generic (PLEG): container finished" podID="a2f5d855-0026-4e76-969c-87603f5fe608" 
containerID="e15cdb408b2113a0efca62534226d10d0f394eb65bf4d018046b16597a210adb" exitCode=0 Feb 17 16:24:56 crc kubenswrapper[4672]: I0217 16:24:56.419333 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wtfr9" event={"ID":"a2f5d855-0026-4e76-969c-87603f5fe608","Type":"ContainerDied","Data":"e15cdb408b2113a0efca62534226d10d0f394eb65bf4d018046b16597a210adb"} Feb 17 16:24:56 crc kubenswrapper[4672]: I0217 16:24:56.421440 4672 generic.go:334] "Generic (PLEG): container finished" podID="8a5a3d1d-ff96-486d-bb60-b8c390c738e9" containerID="0ded762023c5a0f104b5b57988b861026e9ead73d89c2f2bb27a9658ed5f7c03" exitCode=0 Feb 17 16:24:56 crc kubenswrapper[4672]: I0217 16:24:56.421475 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-84dtj" event={"ID":"8a5a3d1d-ff96-486d-bb60-b8c390c738e9","Type":"ContainerDied","Data":"0ded762023c5a0f104b5b57988b861026e9ead73d89c2f2bb27a9658ed5f7c03"} Feb 17 16:24:56 crc kubenswrapper[4672]: I0217 16:24:56.423485 4672 generic.go:334] "Generic (PLEG): container finished" podID="b947707f-716d-48ae-9151-a27658bc5a91" containerID="07b0589eea8d6b5abdee4373cb76c4f3f74040bbe13f2f2a89f34fb8e44bef77" exitCode=0 Feb 17 16:24:56 crc kubenswrapper[4672]: I0217 16:24:56.423625 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6cbf-account-create-update-2hf29" event={"ID":"b947707f-716d-48ae-9151-a27658bc5a91","Type":"ContainerDied","Data":"07b0589eea8d6b5abdee4373cb76c4f3f74040bbe13f2f2a89f34fb8e44bef77"} Feb 17 16:24:56 crc kubenswrapper[4672]: I0217 16:24:56.423666 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6cbf-account-create-update-2hf29" event={"ID":"b947707f-716d-48ae-9151-a27658bc5a91","Type":"ContainerStarted","Data":"50c6866ab8a17ac4ee11ccd88f4f37c8eb67d80e553fc3c061ffd8ff053beab2"} Feb 17 16:24:56 crc kubenswrapper[4672]: I0217 16:24:56.932061 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-vkcbs" Feb 17 16:24:57 crc kubenswrapper[4672]: I0217 16:24:57.053490 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9b97\" (UniqueName: \"kubernetes.io/projected/abb9a2a6-5ab3-4c3a-8d4e-523a92a3e6b9-kube-api-access-b9b97\") pod \"abb9a2a6-5ab3-4c3a-8d4e-523a92a3e6b9\" (UID: \"abb9a2a6-5ab3-4c3a-8d4e-523a92a3e6b9\") " Feb 17 16:24:57 crc kubenswrapper[4672]: I0217 16:24:57.053655 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abb9a2a6-5ab3-4c3a-8d4e-523a92a3e6b9-operator-scripts\") pod \"abb9a2a6-5ab3-4c3a-8d4e-523a92a3e6b9\" (UID: \"abb9a2a6-5ab3-4c3a-8d4e-523a92a3e6b9\") " Feb 17 16:24:57 crc kubenswrapper[4672]: I0217 16:24:57.054244 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abb9a2a6-5ab3-4c3a-8d4e-523a92a3e6b9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "abb9a2a6-5ab3-4c3a-8d4e-523a92a3e6b9" (UID: "abb9a2a6-5ab3-4c3a-8d4e-523a92a3e6b9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:24:57 crc kubenswrapper[4672]: I0217 16:24:57.057908 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abb9a2a6-5ab3-4c3a-8d4e-523a92a3e6b9-kube-api-access-b9b97" (OuterVolumeSpecName: "kube-api-access-b9b97") pod "abb9a2a6-5ab3-4c3a-8d4e-523a92a3e6b9" (UID: "abb9a2a6-5ab3-4c3a-8d4e-523a92a3e6b9"). InnerVolumeSpecName "kube-api-access-b9b97". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:24:57 crc kubenswrapper[4672]: I0217 16:24:57.155559 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9b97\" (UniqueName: \"kubernetes.io/projected/abb9a2a6-5ab3-4c3a-8d4e-523a92a3e6b9-kube-api-access-b9b97\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:57 crc kubenswrapper[4672]: I0217 16:24:57.155597 4672 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abb9a2a6-5ab3-4c3a-8d4e-523a92a3e6b9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:57 crc kubenswrapper[4672]: I0217 16:24:57.437467 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vkcbs" event={"ID":"abb9a2a6-5ab3-4c3a-8d4e-523a92a3e6b9","Type":"ContainerDied","Data":"4c29e8c844703547841f3fc44770bdd7b799d9f757800a4cbc74be8cb547e28f"} Feb 17 16:24:57 crc kubenswrapper[4672]: I0217 16:24:57.437532 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c29e8c844703547841f3fc44770bdd7b799d9f757800a4cbc74be8cb547e28f" Feb 17 16:24:57 crc kubenswrapper[4672]: I0217 16:24:57.437548 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-vkcbs" Feb 17 16:24:57 crc kubenswrapper[4672]: I0217 16:24:57.566001 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:24:57 crc kubenswrapper[4672]: I0217 16:24:57.566078 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:24:57 crc kubenswrapper[4672]: I0217 16:24:57.843966 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 17 16:24:57 crc kubenswrapper[4672]: I0217 16:24:57.844008 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 17 16:24:57 crc kubenswrapper[4672]: I0217 16:24:57.894626 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 17 16:24:57 crc kubenswrapper[4672]: I0217 16:24:57.900859 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-6cbf-account-create-update-2hf29" Feb 17 16:24:57 crc kubenswrapper[4672]: I0217 16:24:57.927308 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 17 16:24:57 crc kubenswrapper[4672]: I0217 16:24:57.976825 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnjbr\" (UniqueName: \"kubernetes.io/projected/b947707f-716d-48ae-9151-a27658bc5a91-kube-api-access-nnjbr\") pod \"b947707f-716d-48ae-9151-a27658bc5a91\" (UID: \"b947707f-716d-48ae-9151-a27658bc5a91\") " Feb 17 16:24:57 crc kubenswrapper[4672]: I0217 16:24:57.976894 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b947707f-716d-48ae-9151-a27658bc5a91-operator-scripts\") pod \"b947707f-716d-48ae-9151-a27658bc5a91\" (UID: \"b947707f-716d-48ae-9151-a27658bc5a91\") " Feb 17 16:24:57 crc kubenswrapper[4672]: I0217 16:24:57.982406 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b947707f-716d-48ae-9151-a27658bc5a91-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b947707f-716d-48ae-9151-a27658bc5a91" (UID: "b947707f-716d-48ae-9151-a27658bc5a91"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:24:57 crc kubenswrapper[4672]: I0217 16:24:57.985172 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b947707f-716d-48ae-9151-a27658bc5a91-kube-api-access-nnjbr" (OuterVolumeSpecName: "kube-api-access-nnjbr") pod "b947707f-716d-48ae-9151-a27658bc5a91" (UID: "b947707f-716d-48ae-9151-a27658bc5a91"). InnerVolumeSpecName "kube-api-access-nnjbr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.082013 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnjbr\" (UniqueName: \"kubernetes.io/projected/b947707f-716d-48ae-9151-a27658bc5a91-kube-api-access-nnjbr\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.082046 4672 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b947707f-716d-48ae-9151-a27658bc5a91-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.133842 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a99c-account-create-update-f7kt2" Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.141157 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-84dtj" Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.155773 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-wtfr9" Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.162717 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-e3bf-account-create-update-5gj9s" Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.289376 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90804f80-91b4-44bb-b3c1-04c56c687c65-operator-scripts\") pod \"90804f80-91b4-44bb-b3c1-04c56c687c65\" (UID: \"90804f80-91b4-44bb-b3c1-04c56c687c65\") " Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.290049 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90804f80-91b4-44bb-b3c1-04c56c687c65-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "90804f80-91b4-44bb-b3c1-04c56c687c65" (UID: "90804f80-91b4-44bb-b3c1-04c56c687c65"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.290449 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6chr2\" (UniqueName: \"kubernetes.io/projected/8a5a3d1d-ff96-486d-bb60-b8c390c738e9-kube-api-access-6chr2\") pod \"8a5a3d1d-ff96-486d-bb60-b8c390c738e9\" (UID: \"8a5a3d1d-ff96-486d-bb60-b8c390c738e9\") " Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.290535 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a5a3d1d-ff96-486d-bb60-b8c390c738e9-operator-scripts\") pod \"8a5a3d1d-ff96-486d-bb60-b8c390c738e9\" (UID: \"8a5a3d1d-ff96-486d-bb60-b8c390c738e9\") " Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.290600 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fm7kp\" (UniqueName: \"kubernetes.io/projected/a2f5d855-0026-4e76-969c-87603f5fe608-kube-api-access-fm7kp\") pod \"a2f5d855-0026-4e76-969c-87603f5fe608\" (UID: \"a2f5d855-0026-4e76-969c-87603f5fe608\") " Feb 17 
16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.290666 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6xnn\" (UniqueName: \"kubernetes.io/projected/d00fc6a7-229f-4a5f-af9a-8b39b110b5ad-kube-api-access-j6xnn\") pod \"d00fc6a7-229f-4a5f-af9a-8b39b110b5ad\" (UID: \"d00fc6a7-229f-4a5f-af9a-8b39b110b5ad\") " Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.290733 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2f5d855-0026-4e76-969c-87603f5fe608-operator-scripts\") pod \"a2f5d855-0026-4e76-969c-87603f5fe608\" (UID: \"a2f5d855-0026-4e76-969c-87603f5fe608\") " Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.291359 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxsl4\" (UniqueName: \"kubernetes.io/projected/90804f80-91b4-44bb-b3c1-04c56c687c65-kube-api-access-mxsl4\") pod \"90804f80-91b4-44bb-b3c1-04c56c687c65\" (UID: \"90804f80-91b4-44bb-b3c1-04c56c687c65\") " Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.291186 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a5a3d1d-ff96-486d-bb60-b8c390c738e9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8a5a3d1d-ff96-486d-bb60-b8c390c738e9" (UID: "8a5a3d1d-ff96-486d-bb60-b8c390c738e9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.291215 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2f5d855-0026-4e76-969c-87603f5fe608-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a2f5d855-0026-4e76-969c-87603f5fe608" (UID: "a2f5d855-0026-4e76-969c-87603f5fe608"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.291481 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d00fc6a7-229f-4a5f-af9a-8b39b110b5ad-operator-scripts\") pod \"d00fc6a7-229f-4a5f-af9a-8b39b110b5ad\" (UID: \"d00fc6a7-229f-4a5f-af9a-8b39b110b5ad\") " Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.291993 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d00fc6a7-229f-4a5f-af9a-8b39b110b5ad-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d00fc6a7-229f-4a5f-af9a-8b39b110b5ad" (UID: "d00fc6a7-229f-4a5f-af9a-8b39b110b5ad"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.292070 4672 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2f5d855-0026-4e76-969c-87603f5fe608-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.292086 4672 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90804f80-91b4-44bb-b3c1-04c56c687c65-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.292100 4672 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a5a3d1d-ff96-486d-bb60-b8c390c738e9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.294853 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d00fc6a7-229f-4a5f-af9a-8b39b110b5ad-kube-api-access-j6xnn" (OuterVolumeSpecName: "kube-api-access-j6xnn") pod "d00fc6a7-229f-4a5f-af9a-8b39b110b5ad" (UID: 
"d00fc6a7-229f-4a5f-af9a-8b39b110b5ad"). InnerVolumeSpecName "kube-api-access-j6xnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.295683 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2f5d855-0026-4e76-969c-87603f5fe608-kube-api-access-fm7kp" (OuterVolumeSpecName: "kube-api-access-fm7kp") pod "a2f5d855-0026-4e76-969c-87603f5fe608" (UID: "a2f5d855-0026-4e76-969c-87603f5fe608"). InnerVolumeSpecName "kube-api-access-fm7kp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.295795 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a5a3d1d-ff96-486d-bb60-b8c390c738e9-kube-api-access-6chr2" (OuterVolumeSpecName: "kube-api-access-6chr2") pod "8a5a3d1d-ff96-486d-bb60-b8c390c738e9" (UID: "8a5a3d1d-ff96-486d-bb60-b8c390c738e9"). InnerVolumeSpecName "kube-api-access-6chr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.297842 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90804f80-91b4-44bb-b3c1-04c56c687c65-kube-api-access-mxsl4" (OuterVolumeSpecName: "kube-api-access-mxsl4") pod "90804f80-91b4-44bb-b3c1-04c56c687c65" (UID: "90804f80-91b4-44bb-b3c1-04c56c687c65"). InnerVolumeSpecName "kube-api-access-mxsl4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.394188 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6chr2\" (UniqueName: \"kubernetes.io/projected/8a5a3d1d-ff96-486d-bb60-b8c390c738e9-kube-api-access-6chr2\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.394222 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fm7kp\" (UniqueName: \"kubernetes.io/projected/a2f5d855-0026-4e76-969c-87603f5fe608-kube-api-access-fm7kp\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.394233 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6xnn\" (UniqueName: \"kubernetes.io/projected/d00fc6a7-229f-4a5f-af9a-8b39b110b5ad-kube-api-access-j6xnn\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.394241 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxsl4\" (UniqueName: \"kubernetes.io/projected/90804f80-91b4-44bb-b3c1-04c56c687c65-kube-api-access-mxsl4\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.394250 4672 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d00fc6a7-229f-4a5f-af9a-8b39b110b5ad-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.446057 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6cbf-account-create-update-2hf29" event={"ID":"b947707f-716d-48ae-9151-a27658bc5a91","Type":"ContainerDied","Data":"50c6866ab8a17ac4ee11ccd88f4f37c8eb67d80e553fc3c061ffd8ff053beab2"} Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.446076 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-6cbf-account-create-update-2hf29" Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.446093 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50c6866ab8a17ac4ee11ccd88f4f37c8eb67d80e553fc3c061ffd8ff053beab2" Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.448407 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e3bf-account-create-update-5gj9s" event={"ID":"d00fc6a7-229f-4a5f-af9a-8b39b110b5ad","Type":"ContainerDied","Data":"dea6f1a71be44555d19820aae2c3cd5d8097f8070c4ef29e52867c4758e9458e"} Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.448487 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dea6f1a71be44555d19820aae2c3cd5d8097f8070c4ef29e52867c4758e9458e" Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.448677 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-e3bf-account-create-update-5gj9s" Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.456496 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a99c-account-create-update-f7kt2" Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.456497 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a99c-account-create-update-f7kt2" event={"ID":"90804f80-91b4-44bb-b3c1-04c56c687c65","Type":"ContainerDied","Data":"5b9ef65b0fbaa2a7065640ae097e0abfeb2f19a3d9e3c1a91e3e96fdc40b7356"} Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.456587 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b9ef65b0fbaa2a7065640ae097e0abfeb2f19a3d9e3c1a91e3e96fdc40b7356" Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.473545 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wtfr9" event={"ID":"a2f5d855-0026-4e76-969c-87603f5fe608","Type":"ContainerDied","Data":"37f090b24a7a0e9c3337c0c905a40a41ac4e11ffe12c408669c2abe5f39d42f8"} Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.473606 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37f090b24a7a0e9c3337c0c905a40a41ac4e11ffe12c408669c2abe5f39d42f8" Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.473576 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-wtfr9" Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.475867 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-84dtj" Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.475906 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-84dtj" event={"ID":"8a5a3d1d-ff96-486d-bb60-b8c390c738e9","Type":"ContainerDied","Data":"22eb4e05f1188de1fcae58cb88ba26932ece6884b40421b64c9033a2599076c7"} Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.475943 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22eb4e05f1188de1fcae58cb88ba26932ece6884b40421b64c9033a2599076c7" Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.476303 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.476348 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.925796 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.926054 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.970943 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 17 16:24:58 crc kubenswrapper[4672]: I0217 16:24:58.986661 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 17 16:24:59 crc kubenswrapper[4672]: I0217 16:24:59.486793 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 17 16:24:59 crc kubenswrapper[4672]: I0217 16:24:59.486842 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/glance-default-internal-api-0" Feb 17 16:25:00 crc kubenswrapper[4672]: I0217 16:25:00.489189 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 17 16:25:00 crc kubenswrapper[4672]: I0217 16:25:00.518527 4672 generic.go:334] "Generic (PLEG): container finished" podID="884cd77d-f3d0-44d2-b4ec-bd53e2e5978b" containerID="ae70710761e13ae8498a8a4071662368349ac2df6155b57c344606bd2cdda2a1" exitCode=0 Feb 17 16:25:00 crc kubenswrapper[4672]: I0217 16:25:00.518611 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b","Type":"ContainerDied","Data":"ae70710761e13ae8498a8a4071662368349ac2df6155b57c344606bd2cdda2a1"} Feb 17 16:25:00 crc kubenswrapper[4672]: I0217 16:25:00.518674 4672 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 16:25:00 crc kubenswrapper[4672]: I0217 16:25:00.844127 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:25:00 crc kubenswrapper[4672]: I0217 16:25:00.943603 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-sg-core-conf-yaml\") pod \"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b\" (UID: \"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b\") " Feb 17 16:25:00 crc kubenswrapper[4672]: I0217 16:25:00.943641 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-log-httpd\") pod \"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b\" (UID: \"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b\") " Feb 17 16:25:00 crc kubenswrapper[4672]: I0217 16:25:00.943698 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-combined-ca-bundle\") pod \"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b\" (UID: \"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b\") " Feb 17 16:25:00 crc kubenswrapper[4672]: I0217 16:25:00.943750 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-config-data\") pod \"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b\" (UID: \"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b\") " Feb 17 16:25:00 crc kubenswrapper[4672]: I0217 16:25:00.943772 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-run-httpd\") pod \"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b\" (UID: \"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b\") " Feb 17 16:25:00 crc kubenswrapper[4672]: I0217 16:25:00.943816 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-scripts\") pod \"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b\" (UID: \"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b\") " Feb 17 16:25:00 crc kubenswrapper[4672]: I0217 16:25:00.943879 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9b45\" (UniqueName: \"kubernetes.io/projected/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-kube-api-access-f9b45\") pod \"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b\" (UID: \"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b\") " Feb 17 16:25:00 crc kubenswrapper[4672]: I0217 16:25:00.944255 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "884cd77d-f3d0-44d2-b4ec-bd53e2e5978b" (UID: "884cd77d-f3d0-44d2-b4ec-bd53e2e5978b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:25:00 crc kubenswrapper[4672]: I0217 16:25:00.944284 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "884cd77d-f3d0-44d2-b4ec-bd53e2e5978b" (UID: "884cd77d-f3d0-44d2-b4ec-bd53e2e5978b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:25:00 crc kubenswrapper[4672]: I0217 16:25:00.951615 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-scripts" (OuterVolumeSpecName: "scripts") pod "884cd77d-f3d0-44d2-b4ec-bd53e2e5978b" (UID: "884cd77d-f3d0-44d2-b4ec-bd53e2e5978b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:00 crc kubenswrapper[4672]: I0217 16:25:00.967787 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-kube-api-access-f9b45" (OuterVolumeSpecName: "kube-api-access-f9b45") pod "884cd77d-f3d0-44d2-b4ec-bd53e2e5978b" (UID: "884cd77d-f3d0-44d2-b4ec-bd53e2e5978b"). InnerVolumeSpecName "kube-api-access-f9b45". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:25:00 crc kubenswrapper[4672]: I0217 16:25:00.982675 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "884cd77d-f3d0-44d2-b4ec-bd53e2e5978b" (UID: "884cd77d-f3d0-44d2-b4ec-bd53e2e5978b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.023555 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "884cd77d-f3d0-44d2-b4ec-bd53e2e5978b" (UID: "884cd77d-f3d0-44d2-b4ec-bd53e2e5978b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.046201 4672 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.046230 4672 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.046242 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.046251 4672 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.046259 4672 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.046268 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9b45\" (UniqueName: \"kubernetes.io/projected/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-kube-api-access-f9b45\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.073690 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-config-data" (OuterVolumeSpecName: "config-data") pod "884cd77d-f3d0-44d2-b4ec-bd53e2e5978b" (UID: "884cd77d-f3d0-44d2-b4ec-bd53e2e5978b"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.149670 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.242132 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.531856 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"884cd77d-f3d0-44d2-b4ec-bd53e2e5978b","Type":"ContainerDied","Data":"a2bfed32bc3f07a31c2ac210de005e1d81301e5f6c63e426d60db1d1f84d1c84"} Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.531916 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.531933 4672 scope.go:117] "RemoveContainer" containerID="4ab30193e5c0c3caa90691d41851f7a28ad5072965b0d73c18a7a1af2decf462" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.557674 4672 scope.go:117] "RemoveContainer" containerID="766f677d17e92ddc68ec3570f49102457fef07fc8c4e987462374b141a0c5040" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.585084 4672 scope.go:117] "RemoveContainer" containerID="f5682e9918435f796b983b0b4adb03ae9af2fdf8a8096f7b8554d861ad6ac9a4" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.586067 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.605812 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.626988 4672 scope.go:117] "RemoveContainer" 
containerID="ae70710761e13ae8498a8a4071662368349ac2df6155b57c344606bd2cdda2a1" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.629064 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:25:01 crc kubenswrapper[4672]: E0217 16:25:01.629673 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2f5d855-0026-4e76-969c-87603f5fe608" containerName="mariadb-database-create" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.629687 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2f5d855-0026-4e76-969c-87603f5fe608" containerName="mariadb-database-create" Feb 17 16:25:01 crc kubenswrapper[4672]: E0217 16:25:01.629706 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="884cd77d-f3d0-44d2-b4ec-bd53e2e5978b" containerName="ceilometer-notification-agent" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.629712 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="884cd77d-f3d0-44d2-b4ec-bd53e2e5978b" containerName="ceilometer-notification-agent" Feb 17 16:25:01 crc kubenswrapper[4672]: E0217 16:25:01.629722 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="884cd77d-f3d0-44d2-b4ec-bd53e2e5978b" containerName="sg-core" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.629727 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="884cd77d-f3d0-44d2-b4ec-bd53e2e5978b" containerName="sg-core" Feb 17 16:25:01 crc kubenswrapper[4672]: E0217 16:25:01.629735 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="884cd77d-f3d0-44d2-b4ec-bd53e2e5978b" containerName="proxy-httpd" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.629742 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="884cd77d-f3d0-44d2-b4ec-bd53e2e5978b" containerName="proxy-httpd" Feb 17 16:25:01 crc kubenswrapper[4672]: E0217 16:25:01.629750 4672 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="abb9a2a6-5ab3-4c3a-8d4e-523a92a3e6b9" containerName="mariadb-database-create" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.629756 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="abb9a2a6-5ab3-4c3a-8d4e-523a92a3e6b9" containerName="mariadb-database-create" Feb 17 16:25:01 crc kubenswrapper[4672]: E0217 16:25:01.629770 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b947707f-716d-48ae-9151-a27658bc5a91" containerName="mariadb-account-create-update" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.629775 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="b947707f-716d-48ae-9151-a27658bc5a91" containerName="mariadb-account-create-update" Feb 17 16:25:01 crc kubenswrapper[4672]: E0217 16:25:01.629787 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="884cd77d-f3d0-44d2-b4ec-bd53e2e5978b" containerName="ceilometer-central-agent" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.629793 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="884cd77d-f3d0-44d2-b4ec-bd53e2e5978b" containerName="ceilometer-central-agent" Feb 17 16:25:01 crc kubenswrapper[4672]: E0217 16:25:01.629806 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d00fc6a7-229f-4a5f-af9a-8b39b110b5ad" containerName="mariadb-account-create-update" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.629812 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="d00fc6a7-229f-4a5f-af9a-8b39b110b5ad" containerName="mariadb-account-create-update" Feb 17 16:25:01 crc kubenswrapper[4672]: E0217 16:25:01.629823 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a5a3d1d-ff96-486d-bb60-b8c390c738e9" containerName="mariadb-database-create" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.629829 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a5a3d1d-ff96-486d-bb60-b8c390c738e9" containerName="mariadb-database-create" Feb 17 16:25:01 crc 
kubenswrapper[4672]: E0217 16:25:01.629845 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90804f80-91b4-44bb-b3c1-04c56c687c65" containerName="mariadb-account-create-update" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.629851 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="90804f80-91b4-44bb-b3c1-04c56c687c65" containerName="mariadb-account-create-update" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.630008 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a5a3d1d-ff96-486d-bb60-b8c390c738e9" containerName="mariadb-database-create" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.630018 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="abb9a2a6-5ab3-4c3a-8d4e-523a92a3e6b9" containerName="mariadb-database-create" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.630029 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="884cd77d-f3d0-44d2-b4ec-bd53e2e5978b" containerName="ceilometer-central-agent" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.630046 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="b947707f-716d-48ae-9151-a27658bc5a91" containerName="mariadb-account-create-update" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.630054 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="884cd77d-f3d0-44d2-b4ec-bd53e2e5978b" containerName="proxy-httpd" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.630062 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="884cd77d-f3d0-44d2-b4ec-bd53e2e5978b" containerName="ceilometer-notification-agent" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.630069 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="884cd77d-f3d0-44d2-b4ec-bd53e2e5978b" containerName="sg-core" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.630076 4672 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a2f5d855-0026-4e76-969c-87603f5fe608" containerName="mariadb-database-create" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.630086 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="d00fc6a7-229f-4a5f-af9a-8b39b110b5ad" containerName="mariadb-account-create-update" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.630097 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="90804f80-91b4-44bb-b3c1-04c56c687c65" containerName="mariadb-account-create-update" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.631721 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.634181 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.634640 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.643350 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.760226 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h25dx\" (UniqueName: \"kubernetes.io/projected/a7ac5715-c17b-430a-8420-eb8fa198c515-kube-api-access-h25dx\") pod \"ceilometer-0\" (UID: \"a7ac5715-c17b-430a-8420-eb8fa198c515\") " pod="openstack/ceilometer-0" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.760293 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7ac5715-c17b-430a-8420-eb8fa198c515-run-httpd\") pod \"ceilometer-0\" (UID: \"a7ac5715-c17b-430a-8420-eb8fa198c515\") " pod="openstack/ceilometer-0" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.760317 4672 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7ac5715-c17b-430a-8420-eb8fa198c515-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a7ac5715-c17b-430a-8420-eb8fa198c515\") " pod="openstack/ceilometer-0" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.760364 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7ac5715-c17b-430a-8420-eb8fa198c515-config-data\") pod \"ceilometer-0\" (UID: \"a7ac5715-c17b-430a-8420-eb8fa198c515\") " pod="openstack/ceilometer-0" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.760405 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7ac5715-c17b-430a-8420-eb8fa198c515-scripts\") pod \"ceilometer-0\" (UID: \"a7ac5715-c17b-430a-8420-eb8fa198c515\") " pod="openstack/ceilometer-0" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.760426 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a7ac5715-c17b-430a-8420-eb8fa198c515-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a7ac5715-c17b-430a-8420-eb8fa198c515\") " pod="openstack/ceilometer-0" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.760463 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7ac5715-c17b-430a-8420-eb8fa198c515-log-httpd\") pod \"ceilometer-0\" (UID: \"a7ac5715-c17b-430a-8420-eb8fa198c515\") " pod="openstack/ceilometer-0" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.857826 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-api-0" Feb 17 16:25:01 crc kubenswrapper[4672]: 
I0217 16:25:01.862229 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a7ac5715-c17b-430a-8420-eb8fa198c515-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a7ac5715-c17b-430a-8420-eb8fa198c515\") " pod="openstack/ceilometer-0" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.862324 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7ac5715-c17b-430a-8420-eb8fa198c515-log-httpd\") pod \"ceilometer-0\" (UID: \"a7ac5715-c17b-430a-8420-eb8fa198c515\") " pod="openstack/ceilometer-0" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.862439 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h25dx\" (UniqueName: \"kubernetes.io/projected/a7ac5715-c17b-430a-8420-eb8fa198c515-kube-api-access-h25dx\") pod \"ceilometer-0\" (UID: \"a7ac5715-c17b-430a-8420-eb8fa198c515\") " pod="openstack/ceilometer-0" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.862510 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7ac5715-c17b-430a-8420-eb8fa198c515-run-httpd\") pod \"ceilometer-0\" (UID: \"a7ac5715-c17b-430a-8420-eb8fa198c515\") " pod="openstack/ceilometer-0" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.862625 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7ac5715-c17b-430a-8420-eb8fa198c515-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a7ac5715-c17b-430a-8420-eb8fa198c515\") " pod="openstack/ceilometer-0" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.862692 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7ac5715-c17b-430a-8420-eb8fa198c515-config-data\") pod 
\"ceilometer-0\" (UID: \"a7ac5715-c17b-430a-8420-eb8fa198c515\") " pod="openstack/ceilometer-0" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.862753 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7ac5715-c17b-430a-8420-eb8fa198c515-scripts\") pod \"ceilometer-0\" (UID: \"a7ac5715-c17b-430a-8420-eb8fa198c515\") " pod="openstack/ceilometer-0" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.863149 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7ac5715-c17b-430a-8420-eb8fa198c515-log-httpd\") pod \"ceilometer-0\" (UID: \"a7ac5715-c17b-430a-8420-eb8fa198c515\") " pod="openstack/ceilometer-0" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.863661 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7ac5715-c17b-430a-8420-eb8fa198c515-run-httpd\") pod \"ceilometer-0\" (UID: \"a7ac5715-c17b-430a-8420-eb8fa198c515\") " pod="openstack/ceilometer-0" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.870107 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7ac5715-c17b-430a-8420-eb8fa198c515-config-data\") pod \"ceilometer-0\" (UID: \"a7ac5715-c17b-430a-8420-eb8fa198c515\") " pod="openstack/ceilometer-0" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.872205 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7ac5715-c17b-430a-8420-eb8fa198c515-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a7ac5715-c17b-430a-8420-eb8fa198c515\") " pod="openstack/ceilometer-0" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.874970 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a7ac5715-c17b-430a-8420-eb8fa198c515-scripts\") pod \"ceilometer-0\" (UID: \"a7ac5715-c17b-430a-8420-eb8fa198c515\") " pod="openstack/ceilometer-0" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.877444 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a7ac5715-c17b-430a-8420-eb8fa198c515-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a7ac5715-c17b-430a-8420-eb8fa198c515\") " pod="openstack/ceilometer-0" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.882642 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h25dx\" (UniqueName: \"kubernetes.io/projected/a7ac5715-c17b-430a-8420-eb8fa198c515-kube-api-access-h25dx\") pod \"ceilometer-0\" (UID: \"a7ac5715-c17b-430a-8420-eb8fa198c515\") " pod="openstack/ceilometer-0" Feb 17 16:25:01 crc kubenswrapper[4672]: I0217 16:25:01.987597 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:25:02 crc kubenswrapper[4672]: I0217 16:25:02.011813 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="884cd77d-f3d0-44d2-b4ec-bd53e2e5978b" path="/var/lib/kubelet/pods/884cd77d-f3d0-44d2-b4ec-bd53e2e5978b/volumes" Feb 17 16:25:02 crc kubenswrapper[4672]: I0217 16:25:02.012999 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 17 16:25:02 crc kubenswrapper[4672]: I0217 16:25:02.013310 4672 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 16:25:02 crc kubenswrapper[4672]: I0217 16:25:02.293126 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 17 16:25:02 crc kubenswrapper[4672]: I0217 16:25:02.537418 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:25:03 crc 
kubenswrapper[4672]: I0217 16:25:03.549295 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7ac5715-c17b-430a-8420-eb8fa198c515","Type":"ContainerStarted","Data":"01dc43c9f6156a79c6b30f3a3498a2bc04eebeb983c42bc077de43da8e9788df"} Feb 17 16:25:03 crc kubenswrapper[4672]: I0217 16:25:03.549734 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7ac5715-c17b-430a-8420-eb8fa198c515","Type":"ContainerStarted","Data":"d842afe1f2ecb562947dcd0ed45dfd5677027d10a9e886e2750a091ca606e142"} Feb 17 16:25:04 crc kubenswrapper[4672]: I0217 16:25:04.200642 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-g7n24"] Feb 17 16:25:04 crc kubenswrapper[4672]: I0217 16:25:04.202719 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-g7n24" Feb 17 16:25:04 crc kubenswrapper[4672]: I0217 16:25:04.209647 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-l88jt" Feb 17 16:25:04 crc kubenswrapper[4672]: I0217 16:25:04.209758 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 17 16:25:04 crc kubenswrapper[4672]: I0217 16:25:04.209850 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 17 16:25:04 crc kubenswrapper[4672]: I0217 16:25:04.240659 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-g7n24"] Feb 17 16:25:04 crc kubenswrapper[4672]: I0217 16:25:04.314305 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/348b9f8c-3534-40ae-9a6d-989fd1db076d-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-g7n24\" (UID: \"348b9f8c-3534-40ae-9a6d-989fd1db076d\") " 
pod="openstack/nova-cell0-conductor-db-sync-g7n24" Feb 17 16:25:04 crc kubenswrapper[4672]: I0217 16:25:04.314397 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/348b9f8c-3534-40ae-9a6d-989fd1db076d-scripts\") pod \"nova-cell0-conductor-db-sync-g7n24\" (UID: \"348b9f8c-3534-40ae-9a6d-989fd1db076d\") " pod="openstack/nova-cell0-conductor-db-sync-g7n24" Feb 17 16:25:04 crc kubenswrapper[4672]: I0217 16:25:04.314493 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/348b9f8c-3534-40ae-9a6d-989fd1db076d-config-data\") pod \"nova-cell0-conductor-db-sync-g7n24\" (UID: \"348b9f8c-3534-40ae-9a6d-989fd1db076d\") " pod="openstack/nova-cell0-conductor-db-sync-g7n24" Feb 17 16:25:04 crc kubenswrapper[4672]: I0217 16:25:04.314584 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw9p6\" (UniqueName: \"kubernetes.io/projected/348b9f8c-3534-40ae-9a6d-989fd1db076d-kube-api-access-pw9p6\") pod \"nova-cell0-conductor-db-sync-g7n24\" (UID: \"348b9f8c-3534-40ae-9a6d-989fd1db076d\") " pod="openstack/nova-cell0-conductor-db-sync-g7n24" Feb 17 16:25:04 crc kubenswrapper[4672]: E0217 16:25:04.351896 4672 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6ad1f8c_ef18_4bd8_ac43_b8f1151277f6.slice/crio-4fede41a6b3d442704cc0b64a71cfcde9ecee5251694f4b5e0c64343367e5adb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6ad1f8c_ef18_4bd8_ac43_b8f1151277f6.slice\": RecentStats: unable to find data in memory cache]" Feb 17 16:25:04 crc kubenswrapper[4672]: I0217 16:25:04.415956 4672 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/348b9f8c-3534-40ae-9a6d-989fd1db076d-scripts\") pod \"nova-cell0-conductor-db-sync-g7n24\" (UID: \"348b9f8c-3534-40ae-9a6d-989fd1db076d\") " pod="openstack/nova-cell0-conductor-db-sync-g7n24" Feb 17 16:25:04 crc kubenswrapper[4672]: I0217 16:25:04.416360 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/348b9f8c-3534-40ae-9a6d-989fd1db076d-config-data\") pod \"nova-cell0-conductor-db-sync-g7n24\" (UID: \"348b9f8c-3534-40ae-9a6d-989fd1db076d\") " pod="openstack/nova-cell0-conductor-db-sync-g7n24" Feb 17 16:25:04 crc kubenswrapper[4672]: I0217 16:25:04.416386 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw9p6\" (UniqueName: \"kubernetes.io/projected/348b9f8c-3534-40ae-9a6d-989fd1db076d-kube-api-access-pw9p6\") pod \"nova-cell0-conductor-db-sync-g7n24\" (UID: \"348b9f8c-3534-40ae-9a6d-989fd1db076d\") " pod="openstack/nova-cell0-conductor-db-sync-g7n24" Feb 17 16:25:04 crc kubenswrapper[4672]: I0217 16:25:04.416452 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/348b9f8c-3534-40ae-9a6d-989fd1db076d-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-g7n24\" (UID: \"348b9f8c-3534-40ae-9a6d-989fd1db076d\") " pod="openstack/nova-cell0-conductor-db-sync-g7n24" Feb 17 16:25:04 crc kubenswrapper[4672]: I0217 16:25:04.421148 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/348b9f8c-3534-40ae-9a6d-989fd1db076d-scripts\") pod \"nova-cell0-conductor-db-sync-g7n24\" (UID: \"348b9f8c-3534-40ae-9a6d-989fd1db076d\") " pod="openstack/nova-cell0-conductor-db-sync-g7n24" Feb 17 16:25:04 crc kubenswrapper[4672]: I0217 16:25:04.426095 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/348b9f8c-3534-40ae-9a6d-989fd1db076d-config-data\") pod \"nova-cell0-conductor-db-sync-g7n24\" (UID: \"348b9f8c-3534-40ae-9a6d-989fd1db076d\") " pod="openstack/nova-cell0-conductor-db-sync-g7n24" Feb 17 16:25:04 crc kubenswrapper[4672]: I0217 16:25:04.426141 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/348b9f8c-3534-40ae-9a6d-989fd1db076d-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-g7n24\" (UID: \"348b9f8c-3534-40ae-9a6d-989fd1db076d\") " pod="openstack/nova-cell0-conductor-db-sync-g7n24" Feb 17 16:25:04 crc kubenswrapper[4672]: I0217 16:25:04.437426 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw9p6\" (UniqueName: \"kubernetes.io/projected/348b9f8c-3534-40ae-9a6d-989fd1db076d-kube-api-access-pw9p6\") pod \"nova-cell0-conductor-db-sync-g7n24\" (UID: \"348b9f8c-3534-40ae-9a6d-989fd1db076d\") " pod="openstack/nova-cell0-conductor-db-sync-g7n24" Feb 17 16:25:04 crc kubenswrapper[4672]: I0217 16:25:04.553823 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-g7n24"
Feb 17 16:25:04 crc kubenswrapper[4672]: I0217 16:25:04.614916 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7ac5715-c17b-430a-8420-eb8fa198c515","Type":"ContainerStarted","Data":"4f844107c93816dc198d3f8631f2a21337ca5f9943fcba55a77983974549b4a8"}
Feb 17 16:25:05 crc kubenswrapper[4672]: I0217 16:25:05.205367 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-g7n24"]
Feb 17 16:25:05 crc kubenswrapper[4672]: W0217 16:25:05.211089 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod348b9f8c_3534_40ae_9a6d_989fd1db076d.slice/crio-c4b7124de8e1df51c4618b80c1dd3b8fc67d81ecc4cfea39bd849ae8ff1c4af0 WatchSource:0}: Error finding container c4b7124de8e1df51c4618b80c1dd3b8fc67d81ecc4cfea39bd849ae8ff1c4af0: Status 404 returned error can't find the container with id c4b7124de8e1df51c4618b80c1dd3b8fc67d81ecc4cfea39bd849ae8ff1c4af0
Feb 17 16:25:05 crc kubenswrapper[4672]: I0217 16:25:05.365414 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:25:05 crc kubenswrapper[4672]: I0217 16:25:05.624831 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7ac5715-c17b-430a-8420-eb8fa198c515","Type":"ContainerStarted","Data":"7320fa5549c020a698ad2229be3d35594d24e751bef3108843c216e25126528e"}
Feb 17 16:25:05 crc kubenswrapper[4672]: I0217 16:25:05.626202 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-g7n24" event={"ID":"348b9f8c-3534-40ae-9a6d-989fd1db076d","Type":"ContainerStarted","Data":"c4b7124de8e1df51c4618b80c1dd3b8fc67d81ecc4cfea39bd849ae8ff1c4af0"}
Feb 17 16:25:06 crc kubenswrapper[4672]: I0217 16:25:06.640880 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7ac5715-c17b-430a-8420-eb8fa198c515","Type":"ContainerStarted","Data":"9be8a755dfba9cf4926a2823dcd8dffce0dc205d353a244217eaece084cf2fc3"}
Feb 17 16:25:06 crc kubenswrapper[4672]: I0217 16:25:06.641283 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a7ac5715-c17b-430a-8420-eb8fa198c515" containerName="ceilometer-central-agent" containerID="cri-o://01dc43c9f6156a79c6b30f3a3498a2bc04eebeb983c42bc077de43da8e9788df" gracePeriod=30
Feb 17 16:25:06 crc kubenswrapper[4672]: I0217 16:25:06.641563 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 17 16:25:06 crc kubenswrapper[4672]: I0217 16:25:06.641826 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a7ac5715-c17b-430a-8420-eb8fa198c515" containerName="proxy-httpd" containerID="cri-o://9be8a755dfba9cf4926a2823dcd8dffce0dc205d353a244217eaece084cf2fc3" gracePeriod=30
Feb 17 16:25:06 crc kubenswrapper[4672]: I0217 16:25:06.641869 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a7ac5715-c17b-430a-8420-eb8fa198c515" containerName="sg-core" containerID="cri-o://7320fa5549c020a698ad2229be3d35594d24e751bef3108843c216e25126528e" gracePeriod=30
Feb 17 16:25:06 crc kubenswrapper[4672]: I0217 16:25:06.641897 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a7ac5715-c17b-430a-8420-eb8fa198c515" containerName="ceilometer-notification-agent" containerID="cri-o://4f844107c93816dc198d3f8631f2a21337ca5f9943fcba55a77983974549b4a8" gracePeriod=30
Feb 17 16:25:06 crc kubenswrapper[4672]: I0217 16:25:06.671220 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.094125823 podStartE2EDuration="5.671202765s" podCreationTimestamp="2026-02-17 16:25:01 +0000 UTC" firstStartedPulling="2026-02-17 16:25:02.541520344 +0000 UTC m=+1311.295609076" lastFinishedPulling="2026-02-17 16:25:06.118597276 +0000 UTC m=+1314.872686018" observedRunningTime="2026-02-17 16:25:06.663918103 +0000 UTC m=+1315.418006835" watchObservedRunningTime="2026-02-17 16:25:06.671202765 +0000 UTC m=+1315.425291497"
Feb 17 16:25:07 crc kubenswrapper[4672]: I0217 16:25:07.654787 4672 generic.go:334] "Generic (PLEG): container finished" podID="a7ac5715-c17b-430a-8420-eb8fa198c515" containerID="9be8a755dfba9cf4926a2823dcd8dffce0dc205d353a244217eaece084cf2fc3" exitCode=0
Feb 17 16:25:07 crc kubenswrapper[4672]: I0217 16:25:07.654818 4672 generic.go:334] "Generic (PLEG): container finished" podID="a7ac5715-c17b-430a-8420-eb8fa198c515" containerID="7320fa5549c020a698ad2229be3d35594d24e751bef3108843c216e25126528e" exitCode=2
Feb 17 16:25:07 crc kubenswrapper[4672]: I0217 16:25:07.654825 4672 generic.go:334] "Generic (PLEG): container finished" podID="a7ac5715-c17b-430a-8420-eb8fa198c515" containerID="4f844107c93816dc198d3f8631f2a21337ca5f9943fcba55a77983974549b4a8" exitCode=0
Feb 17 16:25:07 crc kubenswrapper[4672]: I0217 16:25:07.654842 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7ac5715-c17b-430a-8420-eb8fa198c515","Type":"ContainerDied","Data":"9be8a755dfba9cf4926a2823dcd8dffce0dc205d353a244217eaece084cf2fc3"}
Feb 17 16:25:07 crc kubenswrapper[4672]: I0217 16:25:07.654866 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7ac5715-c17b-430a-8420-eb8fa198c515","Type":"ContainerDied","Data":"7320fa5549c020a698ad2229be3d35594d24e751bef3108843c216e25126528e"}
Feb 17 16:25:07 crc kubenswrapper[4672]: I0217 16:25:07.654876 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7ac5715-c17b-430a-8420-eb8fa198c515","Type":"ContainerDied","Data":"4f844107c93816dc198d3f8631f2a21337ca5f9943fcba55a77983974549b4a8"}
Feb 17 16:25:11 crc kubenswrapper[4672]: I0217 16:25:11.747683 4672 generic.go:334] "Generic (PLEG): container finished" podID="a7ac5715-c17b-430a-8420-eb8fa198c515" containerID="01dc43c9f6156a79c6b30f3a3498a2bc04eebeb983c42bc077de43da8e9788df" exitCode=0
Feb 17 16:25:11 crc kubenswrapper[4672]: I0217 16:25:11.747876 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7ac5715-c17b-430a-8420-eb8fa198c515","Type":"ContainerDied","Data":"01dc43c9f6156a79c6b30f3a3498a2bc04eebeb983c42bc077de43da8e9788df"}
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.405274 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.547188 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7ac5715-c17b-430a-8420-eb8fa198c515-config-data\") pod \"a7ac5715-c17b-430a-8420-eb8fa198c515\" (UID: \"a7ac5715-c17b-430a-8420-eb8fa198c515\") "
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.547538 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h25dx\" (UniqueName: \"kubernetes.io/projected/a7ac5715-c17b-430a-8420-eb8fa198c515-kube-api-access-h25dx\") pod \"a7ac5715-c17b-430a-8420-eb8fa198c515\" (UID: \"a7ac5715-c17b-430a-8420-eb8fa198c515\") "
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.547580 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7ac5715-c17b-430a-8420-eb8fa198c515-combined-ca-bundle\") pod \"a7ac5715-c17b-430a-8420-eb8fa198c515\" (UID: \"a7ac5715-c17b-430a-8420-eb8fa198c515\") "
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.547675 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7ac5715-c17b-430a-8420-eb8fa198c515-scripts\") pod \"a7ac5715-c17b-430a-8420-eb8fa198c515\" (UID: \"a7ac5715-c17b-430a-8420-eb8fa198c515\") "
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.547708 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7ac5715-c17b-430a-8420-eb8fa198c515-log-httpd\") pod \"a7ac5715-c17b-430a-8420-eb8fa198c515\" (UID: \"a7ac5715-c17b-430a-8420-eb8fa198c515\") "
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.547811 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7ac5715-c17b-430a-8420-eb8fa198c515-run-httpd\") pod \"a7ac5715-c17b-430a-8420-eb8fa198c515\" (UID: \"a7ac5715-c17b-430a-8420-eb8fa198c515\") "
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.547845 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a7ac5715-c17b-430a-8420-eb8fa198c515-sg-core-conf-yaml\") pod \"a7ac5715-c17b-430a-8420-eb8fa198c515\" (UID: \"a7ac5715-c17b-430a-8420-eb8fa198c515\") "
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.548776 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7ac5715-c17b-430a-8420-eb8fa198c515-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a7ac5715-c17b-430a-8420-eb8fa198c515" (UID: "a7ac5715-c17b-430a-8420-eb8fa198c515"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.552112 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7ac5715-c17b-430a-8420-eb8fa198c515-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a7ac5715-c17b-430a-8420-eb8fa198c515" (UID: "a7ac5715-c17b-430a-8420-eb8fa198c515"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.553709 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7ac5715-c17b-430a-8420-eb8fa198c515-scripts" (OuterVolumeSpecName: "scripts") pod "a7ac5715-c17b-430a-8420-eb8fa198c515" (UID: "a7ac5715-c17b-430a-8420-eb8fa198c515"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.553795 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7ac5715-c17b-430a-8420-eb8fa198c515-kube-api-access-h25dx" (OuterVolumeSpecName: "kube-api-access-h25dx") pod "a7ac5715-c17b-430a-8420-eb8fa198c515" (UID: "a7ac5715-c17b-430a-8420-eb8fa198c515"). InnerVolumeSpecName "kube-api-access-h25dx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.585798 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7ac5715-c17b-430a-8420-eb8fa198c515-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a7ac5715-c17b-430a-8420-eb8fa198c515" (UID: "a7ac5715-c17b-430a-8420-eb8fa198c515"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.648112 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7ac5715-c17b-430a-8420-eb8fa198c515-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7ac5715-c17b-430a-8420-eb8fa198c515" (UID: "a7ac5715-c17b-430a-8420-eb8fa198c515"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.650406 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h25dx\" (UniqueName: \"kubernetes.io/projected/a7ac5715-c17b-430a-8420-eb8fa198c515-kube-api-access-h25dx\") on node \"crc\" DevicePath \"\""
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.650429 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7ac5715-c17b-430a-8420-eb8fa198c515-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.650438 4672 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7ac5715-c17b-430a-8420-eb8fa198c515-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.650448 4672 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7ac5715-c17b-430a-8420-eb8fa198c515-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.650457 4672 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7ac5715-c17b-430a-8420-eb8fa198c515-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.650466 4672 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a7ac5715-c17b-430a-8420-eb8fa198c515-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.668645 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7ac5715-c17b-430a-8420-eb8fa198c515-config-data" (OuterVolumeSpecName: "config-data") pod "a7ac5715-c17b-430a-8420-eb8fa198c515" (UID: "a7ac5715-c17b-430a-8420-eb8fa198c515"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.752807 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7ac5715-c17b-430a-8420-eb8fa198c515-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.778144 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7ac5715-c17b-430a-8420-eb8fa198c515","Type":"ContainerDied","Data":"d842afe1f2ecb562947dcd0ed45dfd5677027d10a9e886e2750a091ca606e142"}
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.778194 4672 scope.go:117] "RemoveContainer" containerID="9be8a755dfba9cf4926a2823dcd8dffce0dc205d353a244217eaece084cf2fc3"
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.778298 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.783862 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-g7n24" event={"ID":"348b9f8c-3534-40ae-9a6d-989fd1db076d","Type":"ContainerStarted","Data":"74545cce8d094ef3f457ff7851b63c1ecf4112b386b93e028a52cbbbe186b9d0"}
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.810613 4672 scope.go:117] "RemoveContainer" containerID="7320fa5549c020a698ad2229be3d35594d24e751bef3108843c216e25126528e"
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.822266 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-g7n24" podStartSLOduration=1.9521107359999998 podStartE2EDuration="10.822239931s" podCreationTimestamp="2026-02-17 16:25:04 +0000 UTC" firstStartedPulling="2026-02-17 16:25:05.218934978 +0000 UTC m=+1313.973023710" lastFinishedPulling="2026-02-17 16:25:14.089064173 +0000 UTC m=+1322.843152905" observedRunningTime="2026-02-17 16:25:14.804864203 +0000 UTC m=+1323.558952945" watchObservedRunningTime="2026-02-17 16:25:14.822239931 +0000 UTC m=+1323.576328663"
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.840925 4672 scope.go:117] "RemoveContainer" containerID="4f844107c93816dc198d3f8631f2a21337ca5f9943fcba55a77983974549b4a8"
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.874230 4672 scope.go:117] "RemoveContainer" containerID="01dc43c9f6156a79c6b30f3a3498a2bc04eebeb983c42bc077de43da8e9788df"
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.874381 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.882260 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.890864 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:25:14 crc kubenswrapper[4672]: E0217 16:25:14.891306 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7ac5715-c17b-430a-8420-eb8fa198c515" containerName="sg-core"
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.891322 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7ac5715-c17b-430a-8420-eb8fa198c515" containerName="sg-core"
Feb 17 16:25:14 crc kubenswrapper[4672]: E0217 16:25:14.891341 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7ac5715-c17b-430a-8420-eb8fa198c515" containerName="ceilometer-notification-agent"
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.891348 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7ac5715-c17b-430a-8420-eb8fa198c515" containerName="ceilometer-notification-agent"
Feb 17 16:25:14 crc kubenswrapper[4672]: E0217 16:25:14.891365 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7ac5715-c17b-430a-8420-eb8fa198c515" containerName="ceilometer-central-agent"
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.891372 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7ac5715-c17b-430a-8420-eb8fa198c515" containerName="ceilometer-central-agent"
Feb 17 16:25:14 crc kubenswrapper[4672]: E0217 16:25:14.891385 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7ac5715-c17b-430a-8420-eb8fa198c515" containerName="proxy-httpd"
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.891391 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7ac5715-c17b-430a-8420-eb8fa198c515" containerName="proxy-httpd"
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.891606 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7ac5715-c17b-430a-8420-eb8fa198c515" containerName="proxy-httpd"
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.891619 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7ac5715-c17b-430a-8420-eb8fa198c515" containerName="ceilometer-central-agent"
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.891636 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7ac5715-c17b-430a-8420-eb8fa198c515" containerName="ceilometer-notification-agent"
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.891649 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7ac5715-c17b-430a-8420-eb8fa198c515" containerName="sg-core"
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.893403 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.895960 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.896243 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.898105 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.962075 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a755047c-581e-405e-8462-7ecf72a91b4f-log-httpd\") pod \"ceilometer-0\" (UID: \"a755047c-581e-405e-8462-7ecf72a91b4f\") " pod="openstack/ceilometer-0"
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.962162 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a755047c-581e-405e-8462-7ecf72a91b4f-run-httpd\") pod \"ceilometer-0\" (UID: \"a755047c-581e-405e-8462-7ecf72a91b4f\") " pod="openstack/ceilometer-0"
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.962192 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79hhv\" (UniqueName: \"kubernetes.io/projected/a755047c-581e-405e-8462-7ecf72a91b4f-kube-api-access-79hhv\") pod \"ceilometer-0\" (UID: \"a755047c-581e-405e-8462-7ecf72a91b4f\") " pod="openstack/ceilometer-0"
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.962276 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a755047c-581e-405e-8462-7ecf72a91b4f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a755047c-581e-405e-8462-7ecf72a91b4f\") " pod="openstack/ceilometer-0"
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.962332 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a755047c-581e-405e-8462-7ecf72a91b4f-config-data\") pod \"ceilometer-0\" (UID: \"a755047c-581e-405e-8462-7ecf72a91b4f\") " pod="openstack/ceilometer-0"
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.962428 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a755047c-581e-405e-8462-7ecf72a91b4f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a755047c-581e-405e-8462-7ecf72a91b4f\") " pod="openstack/ceilometer-0"
Feb 17 16:25:14 crc kubenswrapper[4672]: I0217 16:25:14.962457 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a755047c-581e-405e-8462-7ecf72a91b4f-scripts\") pod \"ceilometer-0\" (UID: \"a755047c-581e-405e-8462-7ecf72a91b4f\") " pod="openstack/ceilometer-0"
Feb 17 16:25:15 crc kubenswrapper[4672]: I0217 16:25:15.063584 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a755047c-581e-405e-8462-7ecf72a91b4f-run-httpd\") pod \"ceilometer-0\" (UID: \"a755047c-581e-405e-8462-7ecf72a91b4f\") " pod="openstack/ceilometer-0"
Feb 17 16:25:15 crc kubenswrapper[4672]: I0217 16:25:15.063835 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79hhv\" (UniqueName: \"kubernetes.io/projected/a755047c-581e-405e-8462-7ecf72a91b4f-kube-api-access-79hhv\") pod \"ceilometer-0\" (UID: \"a755047c-581e-405e-8462-7ecf72a91b4f\") " pod="openstack/ceilometer-0"
Feb 17 16:25:15 crc kubenswrapper[4672]: I0217 16:25:15.063882 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a755047c-581e-405e-8462-7ecf72a91b4f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a755047c-581e-405e-8462-7ecf72a91b4f\") " pod="openstack/ceilometer-0"
Feb 17 16:25:15 crc kubenswrapper[4672]: I0217 16:25:15.063921 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a755047c-581e-405e-8462-7ecf72a91b4f-config-data\") pod \"ceilometer-0\" (UID: \"a755047c-581e-405e-8462-7ecf72a91b4f\") " pod="openstack/ceilometer-0"
Feb 17 16:25:15 crc kubenswrapper[4672]: I0217 16:25:15.063968 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a755047c-581e-405e-8462-7ecf72a91b4f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a755047c-581e-405e-8462-7ecf72a91b4f\") " pod="openstack/ceilometer-0"
Feb 17 16:25:15 crc kubenswrapper[4672]: I0217 16:25:15.063991 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a755047c-581e-405e-8462-7ecf72a91b4f-scripts\") pod \"ceilometer-0\" (UID: \"a755047c-581e-405e-8462-7ecf72a91b4f\") " pod="openstack/ceilometer-0"
Feb 17 16:25:15 crc kubenswrapper[4672]: I0217 16:25:15.064024 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a755047c-581e-405e-8462-7ecf72a91b4f-log-httpd\") pod \"ceilometer-0\" (UID: \"a755047c-581e-405e-8462-7ecf72a91b4f\") " pod="openstack/ceilometer-0"
Feb 17 16:25:15 crc kubenswrapper[4672]: I0217 16:25:15.064085 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a755047c-581e-405e-8462-7ecf72a91b4f-run-httpd\") pod \"ceilometer-0\" (UID: \"a755047c-581e-405e-8462-7ecf72a91b4f\") " pod="openstack/ceilometer-0"
Feb 17 16:25:15 crc kubenswrapper[4672]: I0217 16:25:15.064283 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a755047c-581e-405e-8462-7ecf72a91b4f-log-httpd\") pod \"ceilometer-0\" (UID: \"a755047c-581e-405e-8462-7ecf72a91b4f\") " pod="openstack/ceilometer-0"
Feb 17 16:25:15 crc kubenswrapper[4672]: I0217 16:25:15.067375 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a755047c-581e-405e-8462-7ecf72a91b4f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a755047c-581e-405e-8462-7ecf72a91b4f\") " pod="openstack/ceilometer-0"
Feb 17 16:25:15 crc kubenswrapper[4672]: I0217 16:25:15.067939 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a755047c-581e-405e-8462-7ecf72a91b4f-config-data\") pod \"ceilometer-0\" (UID: \"a755047c-581e-405e-8462-7ecf72a91b4f\") " pod="openstack/ceilometer-0"
Feb 17 16:25:15 crc kubenswrapper[4672]: I0217 16:25:15.068376 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a755047c-581e-405e-8462-7ecf72a91b4f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a755047c-581e-405e-8462-7ecf72a91b4f\") " pod="openstack/ceilometer-0"
Feb 17 16:25:15 crc kubenswrapper[4672]: I0217 16:25:15.072302 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a755047c-581e-405e-8462-7ecf72a91b4f-scripts\") pod \"ceilometer-0\" (UID: \"a755047c-581e-405e-8462-7ecf72a91b4f\") " pod="openstack/ceilometer-0"
Feb 17 16:25:15 crc kubenswrapper[4672]: I0217 16:25:15.093413 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79hhv\" (UniqueName: \"kubernetes.io/projected/a755047c-581e-405e-8462-7ecf72a91b4f-kube-api-access-79hhv\") pod \"ceilometer-0\" (UID: \"a755047c-581e-405e-8462-7ecf72a91b4f\") " pod="openstack/ceilometer-0"
Feb 17 16:25:15 crc kubenswrapper[4672]: I0217 16:25:15.219502 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 16:25:15 crc kubenswrapper[4672]: I0217 16:25:15.661580 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:25:15 crc kubenswrapper[4672]: W0217 16:25:15.662855 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda755047c_581e_405e_8462_7ecf72a91b4f.slice/crio-a93a0610456a26da92c203024b3ed7466deaa0a1a310f945e7fa65b6e135d3bb WatchSource:0}: Error finding container a93a0610456a26da92c203024b3ed7466deaa0a1a310f945e7fa65b6e135d3bb: Status 404 returned error can't find the container with id a93a0610456a26da92c203024b3ed7466deaa0a1a310f945e7fa65b6e135d3bb
Feb 17 16:25:15 crc kubenswrapper[4672]: I0217 16:25:15.798319 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:25:15 crc kubenswrapper[4672]: I0217 16:25:15.799281 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a755047c-581e-405e-8462-7ecf72a91b4f","Type":"ContainerStarted","Data":"a93a0610456a26da92c203024b3ed7466deaa0a1a310f945e7fa65b6e135d3bb"}
Feb 17 16:25:15 crc kubenswrapper[4672]: I0217 16:25:15.961787 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7ac5715-c17b-430a-8420-eb8fa198c515" path="/var/lib/kubelet/pods/a7ac5715-c17b-430a-8420-eb8fa198c515/volumes"
Feb 17 16:25:16 crc kubenswrapper[4672]: I0217 16:25:16.815920 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a755047c-581e-405e-8462-7ecf72a91b4f","Type":"ContainerStarted","Data":"b03168ff9f03453acbd9d374b8fe6ad044dd6cbe88d6fa85bf70a41f8f56dbeb"}
Feb 17 16:25:17 crc kubenswrapper[4672]: I0217 16:25:17.826000 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a755047c-581e-405e-8462-7ecf72a91b4f","Type":"ContainerStarted","Data":"2aba0115e62ef264c72031a89bf72debbddccc82d2c4f3272141b73f59ef4e02"}
Feb 17 16:25:17 crc kubenswrapper[4672]: I0217 16:25:17.826242 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a755047c-581e-405e-8462-7ecf72a91b4f","Type":"ContainerStarted","Data":"d617f79f4270742038e283cc39573504cae0fd5cbec6f2386e05eb65f11d2f11"}
Feb 17 16:25:19 crc kubenswrapper[4672]: I0217 16:25:19.846321 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a755047c-581e-405e-8462-7ecf72a91b4f","Type":"ContainerStarted","Data":"7cb500201551e59be96467c69397dab4e27072e5d775438113e88bb3ff612973"}
Feb 17 16:25:19 crc kubenswrapper[4672]: I0217 16:25:19.846901 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 17 16:25:19 crc kubenswrapper[4672]: I0217 16:25:19.846552 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a755047c-581e-405e-8462-7ecf72a91b4f" containerName="proxy-httpd" containerID="cri-o://7cb500201551e59be96467c69397dab4e27072e5d775438113e88bb3ff612973" gracePeriod=30
Feb 17 16:25:19 crc kubenswrapper[4672]: I0217 16:25:19.846456 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a755047c-581e-405e-8462-7ecf72a91b4f" containerName="ceilometer-central-agent" containerID="cri-o://b03168ff9f03453acbd9d374b8fe6ad044dd6cbe88d6fa85bf70a41f8f56dbeb" gracePeriod=30
Feb 17 16:25:19 crc kubenswrapper[4672]: I0217 16:25:19.846568 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a755047c-581e-405e-8462-7ecf72a91b4f" containerName="ceilometer-notification-agent" containerID="cri-o://d617f79f4270742038e283cc39573504cae0fd5cbec6f2386e05eb65f11d2f11" gracePeriod=30
Feb 17 16:25:19 crc kubenswrapper[4672]: I0217 16:25:19.846578 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a755047c-581e-405e-8462-7ecf72a91b4f" containerName="sg-core" containerID="cri-o://2aba0115e62ef264c72031a89bf72debbddccc82d2c4f3272141b73f59ef4e02" gracePeriod=30
Feb 17 16:25:19 crc kubenswrapper[4672]: I0217 16:25:19.876144 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.67671929 podStartE2EDuration="5.876123466s" podCreationTimestamp="2026-02-17 16:25:14 +0000 UTC" firstStartedPulling="2026-02-17 16:25:15.666689783 +0000 UTC m=+1324.420778535" lastFinishedPulling="2026-02-17 16:25:18.866093969 +0000 UTC m=+1327.620182711" observedRunningTime="2026-02-17 16:25:19.865285991 +0000 UTC m=+1328.619374743" watchObservedRunningTime="2026-02-17 16:25:19.876123466 +0000 UTC m=+1328.630212208"
Feb 17 16:25:20 crc kubenswrapper[4672]: I0217 16:25:20.862075 4672 generic.go:334] "Generic (PLEG): container finished" podID="a755047c-581e-405e-8462-7ecf72a91b4f" containerID="7cb500201551e59be96467c69397dab4e27072e5d775438113e88bb3ff612973" exitCode=0
Feb 17 16:25:20 crc kubenswrapper[4672]: I0217 16:25:20.862531 4672 generic.go:334] "Generic (PLEG): container finished" podID="a755047c-581e-405e-8462-7ecf72a91b4f" containerID="2aba0115e62ef264c72031a89bf72debbddccc82d2c4f3272141b73f59ef4e02" exitCode=2
Feb 17 16:25:20 crc kubenswrapper[4672]: I0217 16:25:20.862546 4672 generic.go:334] "Generic (PLEG): container finished" podID="a755047c-581e-405e-8462-7ecf72a91b4f" containerID="d617f79f4270742038e283cc39573504cae0fd5cbec6f2386e05eb65f11d2f11" exitCode=0
Feb 17 16:25:20 crc kubenswrapper[4672]: I0217 16:25:20.862168 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a755047c-581e-405e-8462-7ecf72a91b4f","Type":"ContainerDied","Data":"7cb500201551e59be96467c69397dab4e27072e5d775438113e88bb3ff612973"}
Feb 17 16:25:20 crc kubenswrapper[4672]: I0217 16:25:20.862594 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a755047c-581e-405e-8462-7ecf72a91b4f","Type":"ContainerDied","Data":"2aba0115e62ef264c72031a89bf72debbddccc82d2c4f3272141b73f59ef4e02"}
Feb 17 16:25:20 crc kubenswrapper[4672]: I0217 16:25:20.862616 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a755047c-581e-405e-8462-7ecf72a91b4f","Type":"ContainerDied","Data":"d617f79f4270742038e283cc39573504cae0fd5cbec6f2386e05eb65f11d2f11"}
Feb 17 16:25:23 crc kubenswrapper[4672]: I0217 16:25:23.916873 4672 generic.go:334] "Generic (PLEG): container finished" podID="a755047c-581e-405e-8462-7ecf72a91b4f" containerID="b03168ff9f03453acbd9d374b8fe6ad044dd6cbe88d6fa85bf70a41f8f56dbeb" exitCode=0
Feb 17 16:25:23 crc kubenswrapper[4672]: I0217 16:25:23.916943 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a755047c-581e-405e-8462-7ecf72a91b4f","Type":"ContainerDied","Data":"b03168ff9f03453acbd9d374b8fe6ad044dd6cbe88d6fa85bf70a41f8f56dbeb"}
Feb 17 16:25:24 crc kubenswrapper[4672]: I0217 16:25:24.613894 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 16:25:24 crc kubenswrapper[4672]: I0217 16:25:24.630984 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a755047c-581e-405e-8462-7ecf72a91b4f-run-httpd\") pod \"a755047c-581e-405e-8462-7ecf72a91b4f\" (UID: \"a755047c-581e-405e-8462-7ecf72a91b4f\") "
Feb 17 16:25:24 crc kubenswrapper[4672]: I0217 16:25:24.631059 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79hhv\" (UniqueName: \"kubernetes.io/projected/a755047c-581e-405e-8462-7ecf72a91b4f-kube-api-access-79hhv\") pod \"a755047c-581e-405e-8462-7ecf72a91b4f\" (UID: \"a755047c-581e-405e-8462-7ecf72a91b4f\") "
Feb 17 16:25:24 crc kubenswrapper[4672]: I0217 16:25:24.631137 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a755047c-581e-405e-8462-7ecf72a91b4f-scripts\") pod \"a755047c-581e-405e-8462-7ecf72a91b4f\" (UID: \"a755047c-581e-405e-8462-7ecf72a91b4f\") "
Feb 17 16:25:24 crc kubenswrapper[4672]: I0217 16:25:24.631210 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a755047c-581e-405e-8462-7ecf72a91b4f-sg-core-conf-yaml\") pod \"a755047c-581e-405e-8462-7ecf72a91b4f\" (UID: \"a755047c-581e-405e-8462-7ecf72a91b4f\") "
Feb 17 16:25:24 crc kubenswrapper[4672]: I0217 16:25:24.632490 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a755047c-581e-405e-8462-7ecf72a91b4f-combined-ca-bundle\") pod \"a755047c-581e-405e-8462-7ecf72a91b4f\" (UID: \"a755047c-581e-405e-8462-7ecf72a91b4f\") "
Feb 17 16:25:24 crc kubenswrapper[4672]: I0217 16:25:24.632538 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a755047c-581e-405e-8462-7ecf72a91b4f-log-httpd\") pod \"a755047c-581e-405e-8462-7ecf72a91b4f\" (UID: \"a755047c-581e-405e-8462-7ecf72a91b4f\") "
Feb 17 16:25:24 crc kubenswrapper[4672]: I0217 16:25:24.632583 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a755047c-581e-405e-8462-7ecf72a91b4f-config-data\") pod \"a755047c-581e-405e-8462-7ecf72a91b4f\" (UID: \"a755047c-581e-405e-8462-7ecf72a91b4f\") "
Feb 17 16:25:24 crc kubenswrapper[4672]: I0217 16:25:24.632833 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a755047c-581e-405e-8462-7ecf72a91b4f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a755047c-581e-405e-8462-7ecf72a91b4f" (UID: "a755047c-581e-405e-8462-7ecf72a91b4f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:25:24 crc kubenswrapper[4672]: I0217 16:25:24.633831 4672 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a755047c-581e-405e-8462-7ecf72a91b4f-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 17 16:25:24 crc kubenswrapper[4672]: I0217 16:25:24.635911 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a755047c-581e-405e-8462-7ecf72a91b4f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a755047c-581e-405e-8462-7ecf72a91b4f" (UID: "a755047c-581e-405e-8462-7ecf72a91b4f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:25:24 crc kubenswrapper[4672]: I0217 16:25:24.657682 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a755047c-581e-405e-8462-7ecf72a91b4f-scripts" (OuterVolumeSpecName: "scripts") pod "a755047c-581e-405e-8462-7ecf72a91b4f" (UID: "a755047c-581e-405e-8462-7ecf72a91b4f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:25:24 crc kubenswrapper[4672]: I0217 16:25:24.691593 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a755047c-581e-405e-8462-7ecf72a91b4f-kube-api-access-79hhv" (OuterVolumeSpecName: "kube-api-access-79hhv") pod "a755047c-581e-405e-8462-7ecf72a91b4f" (UID: "a755047c-581e-405e-8462-7ecf72a91b4f"). InnerVolumeSpecName "kube-api-access-79hhv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:25:24 crc kubenswrapper[4672]: I0217 16:25:24.747777 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a755047c-581e-405e-8462-7ecf72a91b4f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a755047c-581e-405e-8462-7ecf72a91b4f" (UID: "a755047c-581e-405e-8462-7ecf72a91b4f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:25:24 crc kubenswrapper[4672]: I0217 16:25:24.754786 4672 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a755047c-581e-405e-8462-7ecf72a91b4f-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 17 16:25:24 crc kubenswrapper[4672]: I0217 16:25:24.754813 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79hhv\" (UniqueName: \"kubernetes.io/projected/a755047c-581e-405e-8462-7ecf72a91b4f-kube-api-access-79hhv\") on node \"crc\" DevicePath \"\""
Feb 17 16:25:24 crc kubenswrapper[4672]: I0217 16:25:24.754824 4672 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a755047c-581e-405e-8462-7ecf72a91b4f-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:25:24 crc kubenswrapper[4672]: I0217 16:25:24.754832 4672 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a755047c-581e-405e-8462-7ecf72a91b4f-sg-core-conf-yaml\") on node
\"crc\" DevicePath \"\"" Feb 17 16:25:24 crc kubenswrapper[4672]: I0217 16:25:24.765107 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a755047c-581e-405e-8462-7ecf72a91b4f-config-data" (OuterVolumeSpecName: "config-data") pod "a755047c-581e-405e-8462-7ecf72a91b4f" (UID: "a755047c-581e-405e-8462-7ecf72a91b4f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:24 crc kubenswrapper[4672]: I0217 16:25:24.795737 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a755047c-581e-405e-8462-7ecf72a91b4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a755047c-581e-405e-8462-7ecf72a91b4f" (UID: "a755047c-581e-405e-8462-7ecf72a91b4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:24 crc kubenswrapper[4672]: I0217 16:25:24.856626 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a755047c-581e-405e-8462-7ecf72a91b4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:24 crc kubenswrapper[4672]: I0217 16:25:24.856658 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a755047c-581e-405e-8462-7ecf72a91b4f-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:24 crc kubenswrapper[4672]: I0217 16:25:24.931417 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a755047c-581e-405e-8462-7ecf72a91b4f","Type":"ContainerDied","Data":"a93a0610456a26da92c203024b3ed7466deaa0a1a310f945e7fa65b6e135d3bb"} Feb 17 16:25:24 crc kubenswrapper[4672]: I0217 16:25:24.931469 4672 scope.go:117] "RemoveContainer" containerID="7cb500201551e59be96467c69397dab4e27072e5d775438113e88bb3ff612973" Feb 17 16:25:24 crc kubenswrapper[4672]: I0217 16:25:24.931675 4672 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:25:24 crc kubenswrapper[4672]: I0217 16:25:24.952881 4672 scope.go:117] "RemoveContainer" containerID="2aba0115e62ef264c72031a89bf72debbddccc82d2c4f3272141b73f59ef4e02" Feb 17 16:25:24 crc kubenswrapper[4672]: I0217 16:25:24.976630 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:25:24 crc kubenswrapper[4672]: I0217 16:25:24.984301 4672 scope.go:117] "RemoveContainer" containerID="d617f79f4270742038e283cc39573504cae0fd5cbec6f2386e05eb65f11d2f11" Feb 17 16:25:24 crc kubenswrapper[4672]: I0217 16:25:24.994253 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:25:25 crc kubenswrapper[4672]: I0217 16:25:25.008710 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:25:25 crc kubenswrapper[4672]: E0217 16:25:25.010983 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a755047c-581e-405e-8462-7ecf72a91b4f" containerName="sg-core" Feb 17 16:25:25 crc kubenswrapper[4672]: I0217 16:25:25.011006 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="a755047c-581e-405e-8462-7ecf72a91b4f" containerName="sg-core" Feb 17 16:25:25 crc kubenswrapper[4672]: E0217 16:25:25.011017 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a755047c-581e-405e-8462-7ecf72a91b4f" containerName="ceilometer-notification-agent" Feb 17 16:25:25 crc kubenswrapper[4672]: I0217 16:25:25.011023 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="a755047c-581e-405e-8462-7ecf72a91b4f" containerName="ceilometer-notification-agent" Feb 17 16:25:25 crc kubenswrapper[4672]: E0217 16:25:25.011036 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a755047c-581e-405e-8462-7ecf72a91b4f" containerName="ceilometer-central-agent" Feb 17 16:25:25 crc kubenswrapper[4672]: I0217 16:25:25.011043 4672 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="a755047c-581e-405e-8462-7ecf72a91b4f" containerName="ceilometer-central-agent" Feb 17 16:25:25 crc kubenswrapper[4672]: E0217 16:25:25.011124 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a755047c-581e-405e-8462-7ecf72a91b4f" containerName="proxy-httpd" Feb 17 16:25:25 crc kubenswrapper[4672]: I0217 16:25:25.011133 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="a755047c-581e-405e-8462-7ecf72a91b4f" containerName="proxy-httpd" Feb 17 16:25:25 crc kubenswrapper[4672]: I0217 16:25:25.011326 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="a755047c-581e-405e-8462-7ecf72a91b4f" containerName="ceilometer-central-agent" Feb 17 16:25:25 crc kubenswrapper[4672]: I0217 16:25:25.011338 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="a755047c-581e-405e-8462-7ecf72a91b4f" containerName="ceilometer-notification-agent" Feb 17 16:25:25 crc kubenswrapper[4672]: I0217 16:25:25.011351 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="a755047c-581e-405e-8462-7ecf72a91b4f" containerName="proxy-httpd" Feb 17 16:25:25 crc kubenswrapper[4672]: I0217 16:25:25.011376 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="a755047c-581e-405e-8462-7ecf72a91b4f" containerName="sg-core" Feb 17 16:25:25 crc kubenswrapper[4672]: I0217 16:25:25.012054 4672 scope.go:117] "RemoveContainer" containerID="b03168ff9f03453acbd9d374b8fe6ad044dd6cbe88d6fa85bf70a41f8f56dbeb" Feb 17 16:25:25 crc kubenswrapper[4672]: I0217 16:25:25.013844 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:25:25 crc kubenswrapper[4672]: I0217 16:25:25.017990 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:25:25 crc kubenswrapper[4672]: I0217 16:25:25.019537 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:25:25 crc kubenswrapper[4672]: I0217 16:25:25.019600 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:25:25 crc kubenswrapper[4672]: I0217 16:25:25.060658 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87b9cd39-fe22-48c7-af4e-c17be1205506-run-httpd\") pod \"ceilometer-0\" (UID: \"87b9cd39-fe22-48c7-af4e-c17be1205506\") " pod="openstack/ceilometer-0" Feb 17 16:25:25 crc kubenswrapper[4672]: I0217 16:25:25.060737 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b9cd39-fe22-48c7-af4e-c17be1205506-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"87b9cd39-fe22-48c7-af4e-c17be1205506\") " pod="openstack/ceilometer-0" Feb 17 16:25:25 crc kubenswrapper[4672]: I0217 16:25:25.060821 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p5w2\" (UniqueName: \"kubernetes.io/projected/87b9cd39-fe22-48c7-af4e-c17be1205506-kube-api-access-8p5w2\") pod \"ceilometer-0\" (UID: \"87b9cd39-fe22-48c7-af4e-c17be1205506\") " pod="openstack/ceilometer-0" Feb 17 16:25:25 crc kubenswrapper[4672]: I0217 16:25:25.060841 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87b9cd39-fe22-48c7-af4e-c17be1205506-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"87b9cd39-fe22-48c7-af4e-c17be1205506\") " pod="openstack/ceilometer-0" Feb 17 16:25:25 crc kubenswrapper[4672]: I0217 16:25:25.060876 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87b9cd39-fe22-48c7-af4e-c17be1205506-scripts\") pod \"ceilometer-0\" (UID: \"87b9cd39-fe22-48c7-af4e-c17be1205506\") " pod="openstack/ceilometer-0" Feb 17 16:25:25 crc kubenswrapper[4672]: I0217 16:25:25.060923 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87b9cd39-fe22-48c7-af4e-c17be1205506-log-httpd\") pod \"ceilometer-0\" (UID: \"87b9cd39-fe22-48c7-af4e-c17be1205506\") " pod="openstack/ceilometer-0" Feb 17 16:25:25 crc kubenswrapper[4672]: I0217 16:25:25.060943 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87b9cd39-fe22-48c7-af4e-c17be1205506-config-data\") pod \"ceilometer-0\" (UID: \"87b9cd39-fe22-48c7-af4e-c17be1205506\") " pod="openstack/ceilometer-0" Feb 17 16:25:25 crc kubenswrapper[4672]: I0217 16:25:25.162776 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b9cd39-fe22-48c7-af4e-c17be1205506-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"87b9cd39-fe22-48c7-af4e-c17be1205506\") " pod="openstack/ceilometer-0" Feb 17 16:25:25 crc kubenswrapper[4672]: I0217 16:25:25.162879 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p5w2\" (UniqueName: \"kubernetes.io/projected/87b9cd39-fe22-48c7-af4e-c17be1205506-kube-api-access-8p5w2\") pod \"ceilometer-0\" (UID: \"87b9cd39-fe22-48c7-af4e-c17be1205506\") " pod="openstack/ceilometer-0" Feb 17 16:25:25 crc kubenswrapper[4672]: I0217 16:25:25.162903 4672 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87b9cd39-fe22-48c7-af4e-c17be1205506-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"87b9cd39-fe22-48c7-af4e-c17be1205506\") " pod="openstack/ceilometer-0" Feb 17 16:25:25 crc kubenswrapper[4672]: I0217 16:25:25.162936 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87b9cd39-fe22-48c7-af4e-c17be1205506-scripts\") pod \"ceilometer-0\" (UID: \"87b9cd39-fe22-48c7-af4e-c17be1205506\") " pod="openstack/ceilometer-0" Feb 17 16:25:25 crc kubenswrapper[4672]: I0217 16:25:25.163378 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87b9cd39-fe22-48c7-af4e-c17be1205506-log-httpd\") pod \"ceilometer-0\" (UID: \"87b9cd39-fe22-48c7-af4e-c17be1205506\") " pod="openstack/ceilometer-0" Feb 17 16:25:25 crc kubenswrapper[4672]: I0217 16:25:25.163415 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87b9cd39-fe22-48c7-af4e-c17be1205506-config-data\") pod \"ceilometer-0\" (UID: \"87b9cd39-fe22-48c7-af4e-c17be1205506\") " pod="openstack/ceilometer-0" Feb 17 16:25:25 crc kubenswrapper[4672]: I0217 16:25:25.163541 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87b9cd39-fe22-48c7-af4e-c17be1205506-run-httpd\") pod \"ceilometer-0\" (UID: \"87b9cd39-fe22-48c7-af4e-c17be1205506\") " pod="openstack/ceilometer-0" Feb 17 16:25:25 crc kubenswrapper[4672]: I0217 16:25:25.163905 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87b9cd39-fe22-48c7-af4e-c17be1205506-log-httpd\") pod \"ceilometer-0\" (UID: \"87b9cd39-fe22-48c7-af4e-c17be1205506\") " pod="openstack/ceilometer-0" Feb 17 
16:25:25 crc kubenswrapper[4672]: I0217 16:25:25.163917 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87b9cd39-fe22-48c7-af4e-c17be1205506-run-httpd\") pod \"ceilometer-0\" (UID: \"87b9cd39-fe22-48c7-af4e-c17be1205506\") " pod="openstack/ceilometer-0" Feb 17 16:25:25 crc kubenswrapper[4672]: I0217 16:25:25.166744 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87b9cd39-fe22-48c7-af4e-c17be1205506-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"87b9cd39-fe22-48c7-af4e-c17be1205506\") " pod="openstack/ceilometer-0" Feb 17 16:25:25 crc kubenswrapper[4672]: I0217 16:25:25.174712 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b9cd39-fe22-48c7-af4e-c17be1205506-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"87b9cd39-fe22-48c7-af4e-c17be1205506\") " pod="openstack/ceilometer-0" Feb 17 16:25:25 crc kubenswrapper[4672]: I0217 16:25:25.175226 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87b9cd39-fe22-48c7-af4e-c17be1205506-config-data\") pod \"ceilometer-0\" (UID: \"87b9cd39-fe22-48c7-af4e-c17be1205506\") " pod="openstack/ceilometer-0" Feb 17 16:25:25 crc kubenswrapper[4672]: I0217 16:25:25.175387 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87b9cd39-fe22-48c7-af4e-c17be1205506-scripts\") pod \"ceilometer-0\" (UID: \"87b9cd39-fe22-48c7-af4e-c17be1205506\") " pod="openstack/ceilometer-0" Feb 17 16:25:25 crc kubenswrapper[4672]: I0217 16:25:25.178393 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p5w2\" (UniqueName: \"kubernetes.io/projected/87b9cd39-fe22-48c7-af4e-c17be1205506-kube-api-access-8p5w2\") pod \"ceilometer-0\" (UID: 
\"87b9cd39-fe22-48c7-af4e-c17be1205506\") " pod="openstack/ceilometer-0" Feb 17 16:25:25 crc kubenswrapper[4672]: I0217 16:25:25.346400 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:25:25 crc kubenswrapper[4672]: I0217 16:25:25.799380 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:25:25 crc kubenswrapper[4672]: W0217 16:25:25.802676 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87b9cd39_fe22_48c7_af4e_c17be1205506.slice/crio-ea4f71dd8c845cf04f6ef5dc8e69f64887a490e90cca88293396d0fd472a2dbb WatchSource:0}: Error finding container ea4f71dd8c845cf04f6ef5dc8e69f64887a490e90cca88293396d0fd472a2dbb: Status 404 returned error can't find the container with id ea4f71dd8c845cf04f6ef5dc8e69f64887a490e90cca88293396d0fd472a2dbb Feb 17 16:25:25 crc kubenswrapper[4672]: I0217 16:25:25.942253 4672 generic.go:334] "Generic (PLEG): container finished" podID="348b9f8c-3534-40ae-9a6d-989fd1db076d" containerID="74545cce8d094ef3f457ff7851b63c1ecf4112b386b93e028a52cbbbe186b9d0" exitCode=0 Feb 17 16:25:25 crc kubenswrapper[4672]: I0217 16:25:25.942358 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-g7n24" event={"ID":"348b9f8c-3534-40ae-9a6d-989fd1db076d","Type":"ContainerDied","Data":"74545cce8d094ef3f457ff7851b63c1ecf4112b386b93e028a52cbbbe186b9d0"} Feb 17 16:25:25 crc kubenswrapper[4672]: I0217 16:25:25.958630 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a755047c-581e-405e-8462-7ecf72a91b4f" path="/var/lib/kubelet/pods/a755047c-581e-405e-8462-7ecf72a91b4f/volumes" Feb 17 16:25:25 crc kubenswrapper[4672]: I0217 16:25:25.959526 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"87b9cd39-fe22-48c7-af4e-c17be1205506","Type":"ContainerStarted","Data":"ea4f71dd8c845cf04f6ef5dc8e69f64887a490e90cca88293396d0fd472a2dbb"} Feb 17 16:25:26 crc kubenswrapper[4672]: I0217 16:25:26.972373 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87b9cd39-fe22-48c7-af4e-c17be1205506","Type":"ContainerStarted","Data":"e0a86cd3551a3fba4aaa408bf011dd96453eea04c3416f7d0371b5667ad685ae"} Feb 17 16:25:27 crc kubenswrapper[4672]: I0217 16:25:27.483659 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-g7n24" Feb 17 16:25:27 crc kubenswrapper[4672]: I0217 16:25:27.525983 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/348b9f8c-3534-40ae-9a6d-989fd1db076d-config-data\") pod \"348b9f8c-3534-40ae-9a6d-989fd1db076d\" (UID: \"348b9f8c-3534-40ae-9a6d-989fd1db076d\") " Feb 17 16:25:27 crc kubenswrapper[4672]: I0217 16:25:27.526043 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/348b9f8c-3534-40ae-9a6d-989fd1db076d-combined-ca-bundle\") pod \"348b9f8c-3534-40ae-9a6d-989fd1db076d\" (UID: \"348b9f8c-3534-40ae-9a6d-989fd1db076d\") " Feb 17 16:25:27 crc kubenswrapper[4672]: I0217 16:25:27.526133 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pw9p6\" (UniqueName: \"kubernetes.io/projected/348b9f8c-3534-40ae-9a6d-989fd1db076d-kube-api-access-pw9p6\") pod \"348b9f8c-3534-40ae-9a6d-989fd1db076d\" (UID: \"348b9f8c-3534-40ae-9a6d-989fd1db076d\") " Feb 17 16:25:27 crc kubenswrapper[4672]: I0217 16:25:27.526192 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/348b9f8c-3534-40ae-9a6d-989fd1db076d-scripts\") pod 
\"348b9f8c-3534-40ae-9a6d-989fd1db076d\" (UID: \"348b9f8c-3534-40ae-9a6d-989fd1db076d\") " Feb 17 16:25:27 crc kubenswrapper[4672]: I0217 16:25:27.535858 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/348b9f8c-3534-40ae-9a6d-989fd1db076d-kube-api-access-pw9p6" (OuterVolumeSpecName: "kube-api-access-pw9p6") pod "348b9f8c-3534-40ae-9a6d-989fd1db076d" (UID: "348b9f8c-3534-40ae-9a6d-989fd1db076d"). InnerVolumeSpecName "kube-api-access-pw9p6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:25:27 crc kubenswrapper[4672]: I0217 16:25:27.536809 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/348b9f8c-3534-40ae-9a6d-989fd1db076d-scripts" (OuterVolumeSpecName: "scripts") pod "348b9f8c-3534-40ae-9a6d-989fd1db076d" (UID: "348b9f8c-3534-40ae-9a6d-989fd1db076d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:27 crc kubenswrapper[4672]: I0217 16:25:27.558245 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/348b9f8c-3534-40ae-9a6d-989fd1db076d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "348b9f8c-3534-40ae-9a6d-989fd1db076d" (UID: "348b9f8c-3534-40ae-9a6d-989fd1db076d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:27 crc kubenswrapper[4672]: I0217 16:25:27.565973 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:25:27 crc kubenswrapper[4672]: I0217 16:25:27.566039 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:25:27 crc kubenswrapper[4672]: I0217 16:25:27.566875 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/348b9f8c-3534-40ae-9a6d-989fd1db076d-config-data" (OuterVolumeSpecName: "config-data") pod "348b9f8c-3534-40ae-9a6d-989fd1db076d" (UID: "348b9f8c-3534-40ae-9a6d-989fd1db076d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:27 crc kubenswrapper[4672]: I0217 16:25:27.628530 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pw9p6\" (UniqueName: \"kubernetes.io/projected/348b9f8c-3534-40ae-9a6d-989fd1db076d-kube-api-access-pw9p6\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:27 crc kubenswrapper[4672]: I0217 16:25:27.628562 4672 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/348b9f8c-3534-40ae-9a6d-989fd1db076d-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:27 crc kubenswrapper[4672]: I0217 16:25:27.628572 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/348b9f8c-3534-40ae-9a6d-989fd1db076d-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:27 crc kubenswrapper[4672]: I0217 16:25:27.628581 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/348b9f8c-3534-40ae-9a6d-989fd1db076d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:27 crc kubenswrapper[4672]: I0217 16:25:27.981356 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-g7n24" event={"ID":"348b9f8c-3534-40ae-9a6d-989fd1db076d","Type":"ContainerDied","Data":"c4b7124de8e1df51c4618b80c1dd3b8fc67d81ecc4cfea39bd849ae8ff1c4af0"} Feb 17 16:25:27 crc kubenswrapper[4672]: I0217 16:25:27.981421 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4b7124de8e1df51c4618b80c1dd3b8fc67d81ecc4cfea39bd849ae8ff1c4af0" Feb 17 16:25:27 crc kubenswrapper[4672]: I0217 16:25:27.981379 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-g7n24" Feb 17 16:25:27 crc kubenswrapper[4672]: I0217 16:25:27.983667 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87b9cd39-fe22-48c7-af4e-c17be1205506","Type":"ContainerStarted","Data":"8fb8b1ef49cb2b193a0927789536df4d8e7070f1f50e4faa57a3e3b051f355e6"} Feb 17 16:25:27 crc kubenswrapper[4672]: I0217 16:25:27.983713 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87b9cd39-fe22-48c7-af4e-c17be1205506","Type":"ContainerStarted","Data":"e74948ac1f52e0b4b2d69e8b128b8f787cc0e4e75b01f1f1b36d718b82e2b2b4"} Feb 17 16:25:28 crc kubenswrapper[4672]: I0217 16:25:28.080571 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 16:25:28 crc kubenswrapper[4672]: E0217 16:25:28.081254 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="348b9f8c-3534-40ae-9a6d-989fd1db076d" containerName="nova-cell0-conductor-db-sync" Feb 17 16:25:28 crc kubenswrapper[4672]: I0217 16:25:28.081267 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="348b9f8c-3534-40ae-9a6d-989fd1db076d" containerName="nova-cell0-conductor-db-sync" Feb 17 16:25:28 crc kubenswrapper[4672]: I0217 16:25:28.081487 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="348b9f8c-3534-40ae-9a6d-989fd1db076d" containerName="nova-cell0-conductor-db-sync" Feb 17 16:25:28 crc kubenswrapper[4672]: I0217 16:25:28.082178 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 17 16:25:28 crc kubenswrapper[4672]: I0217 16:25:28.084915 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-l88jt" Feb 17 16:25:28 crc kubenswrapper[4672]: I0217 16:25:28.087637 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 17 16:25:28 crc kubenswrapper[4672]: I0217 16:25:28.105332 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 16:25:28 crc kubenswrapper[4672]: I0217 16:25:28.139431 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csh2j\" (UniqueName: \"kubernetes.io/projected/62967d67-22e1-454a-a73c-a1d3fe95d08c-kube-api-access-csh2j\") pod \"nova-cell0-conductor-0\" (UID: \"62967d67-22e1-454a-a73c-a1d3fe95d08c\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:25:28 crc kubenswrapper[4672]: I0217 16:25:28.139504 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62967d67-22e1-454a-a73c-a1d3fe95d08c-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"62967d67-22e1-454a-a73c-a1d3fe95d08c\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:25:28 crc kubenswrapper[4672]: I0217 16:25:28.139588 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62967d67-22e1-454a-a73c-a1d3fe95d08c-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"62967d67-22e1-454a-a73c-a1d3fe95d08c\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:25:28 crc kubenswrapper[4672]: I0217 16:25:28.242131 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csh2j\" (UniqueName: 
\"kubernetes.io/projected/62967d67-22e1-454a-a73c-a1d3fe95d08c-kube-api-access-csh2j\") pod \"nova-cell0-conductor-0\" (UID: \"62967d67-22e1-454a-a73c-a1d3fe95d08c\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:25:28 crc kubenswrapper[4672]: I0217 16:25:28.242427 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62967d67-22e1-454a-a73c-a1d3fe95d08c-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"62967d67-22e1-454a-a73c-a1d3fe95d08c\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:25:28 crc kubenswrapper[4672]: I0217 16:25:28.242556 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62967d67-22e1-454a-a73c-a1d3fe95d08c-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"62967d67-22e1-454a-a73c-a1d3fe95d08c\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:25:28 crc kubenswrapper[4672]: I0217 16:25:28.247689 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62967d67-22e1-454a-a73c-a1d3fe95d08c-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"62967d67-22e1-454a-a73c-a1d3fe95d08c\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:25:28 crc kubenswrapper[4672]: I0217 16:25:28.247789 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62967d67-22e1-454a-a73c-a1d3fe95d08c-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"62967d67-22e1-454a-a73c-a1d3fe95d08c\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:25:28 crc kubenswrapper[4672]: I0217 16:25:28.262765 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csh2j\" (UniqueName: \"kubernetes.io/projected/62967d67-22e1-454a-a73c-a1d3fe95d08c-kube-api-access-csh2j\") pod \"nova-cell0-conductor-0\" (UID: 
\"62967d67-22e1-454a-a73c-a1d3fe95d08c\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:25:28 crc kubenswrapper[4672]: I0217 16:25:28.395196 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:25:28 crc kubenswrapper[4672]: I0217 16:25:28.405165 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 17 16:25:28 crc kubenswrapper[4672]: I0217 16:25:28.865305 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 16:25:28 crc kubenswrapper[4672]: I0217 16:25:28.999276 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"62967d67-22e1-454a-a73c-a1d3fe95d08c","Type":"ContainerStarted","Data":"ec66139edad6d55abb076deda453d68f90ca6fd577fb9b7c2729fb22ed653396"} Feb 17 16:25:30 crc kubenswrapper[4672]: I0217 16:25:30.010475 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"62967d67-22e1-454a-a73c-a1d3fe95d08c","Type":"ContainerStarted","Data":"0fc511499a23f7d4d37561937b295389434fd3ac031f155ae2880b189d73fc10"} Feb 17 16:25:30 crc kubenswrapper[4672]: I0217 16:25:30.011400 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 17 16:25:30 crc kubenswrapper[4672]: I0217 16:25:30.015372 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87b9cd39-fe22-48c7-af4e-c17be1205506","Type":"ContainerStarted","Data":"00f32ce23e287500bf5dc8c81897bd77d1ac7b91839946f7e5a343316a032554"} Feb 17 16:25:30 crc kubenswrapper[4672]: I0217 16:25:30.015615 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="87b9cd39-fe22-48c7-af4e-c17be1205506" containerName="ceilometer-central-agent" containerID="cri-o://e0a86cd3551a3fba4aaa408bf011dd96453eea04c3416f7d0371b5667ad685ae" 
gracePeriod=30 Feb 17 16:25:30 crc kubenswrapper[4672]: I0217 16:25:30.015665 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="87b9cd39-fe22-48c7-af4e-c17be1205506" containerName="ceilometer-notification-agent" containerID="cri-o://e74948ac1f52e0b4b2d69e8b128b8f787cc0e4e75b01f1f1b36d718b82e2b2b4" gracePeriod=30 Feb 17 16:25:30 crc kubenswrapper[4672]: I0217 16:25:30.015617 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 16:25:30 crc kubenswrapper[4672]: I0217 16:25:30.015671 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="87b9cd39-fe22-48c7-af4e-c17be1205506" containerName="sg-core" containerID="cri-o://8fb8b1ef49cb2b193a0927789536df4d8e7070f1f50e4faa57a3e3b051f355e6" gracePeriod=30 Feb 17 16:25:30 crc kubenswrapper[4672]: I0217 16:25:30.015673 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="87b9cd39-fe22-48c7-af4e-c17be1205506" containerName="proxy-httpd" containerID="cri-o://00f32ce23e287500bf5dc8c81897bd77d1ac7b91839946f7e5a343316a032554" gracePeriod=30 Feb 17 16:25:30 crc kubenswrapper[4672]: I0217 16:25:30.038792 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.0387682639999998 podStartE2EDuration="2.038768264s" podCreationTimestamp="2026-02-17 16:25:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:25:30.0359722 +0000 UTC m=+1338.790060982" watchObservedRunningTime="2026-02-17 16:25:30.038768264 +0000 UTC m=+1338.792857006" Feb 17 16:25:30 crc kubenswrapper[4672]: I0217 16:25:30.064908 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.97938085 
podStartE2EDuration="6.064889243s" podCreationTimestamp="2026-02-17 16:25:24 +0000 UTC" firstStartedPulling="2026-02-17 16:25:25.806032486 +0000 UTC m=+1334.560121218" lastFinishedPulling="2026-02-17 16:25:28.891540879 +0000 UTC m=+1337.645629611" observedRunningTime="2026-02-17 16:25:30.059801408 +0000 UTC m=+1338.813890150" watchObservedRunningTime="2026-02-17 16:25:30.064889243 +0000 UTC m=+1338.818977985" Feb 17 16:25:31 crc kubenswrapper[4672]: I0217 16:25:31.037669 4672 generic.go:334] "Generic (PLEG): container finished" podID="87b9cd39-fe22-48c7-af4e-c17be1205506" containerID="00f32ce23e287500bf5dc8c81897bd77d1ac7b91839946f7e5a343316a032554" exitCode=0 Feb 17 16:25:31 crc kubenswrapper[4672]: I0217 16:25:31.037714 4672 generic.go:334] "Generic (PLEG): container finished" podID="87b9cd39-fe22-48c7-af4e-c17be1205506" containerID="8fb8b1ef49cb2b193a0927789536df4d8e7070f1f50e4faa57a3e3b051f355e6" exitCode=2 Feb 17 16:25:31 crc kubenswrapper[4672]: I0217 16:25:31.037733 4672 generic.go:334] "Generic (PLEG): container finished" podID="87b9cd39-fe22-48c7-af4e-c17be1205506" containerID="e74948ac1f52e0b4b2d69e8b128b8f787cc0e4e75b01f1f1b36d718b82e2b2b4" exitCode=0 Feb 17 16:25:31 crc kubenswrapper[4672]: I0217 16:25:31.039015 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87b9cd39-fe22-48c7-af4e-c17be1205506","Type":"ContainerDied","Data":"00f32ce23e287500bf5dc8c81897bd77d1ac7b91839946f7e5a343316a032554"} Feb 17 16:25:31 crc kubenswrapper[4672]: I0217 16:25:31.039057 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87b9cd39-fe22-48c7-af4e-c17be1205506","Type":"ContainerDied","Data":"8fb8b1ef49cb2b193a0927789536df4d8e7070f1f50e4faa57a3e3b051f355e6"} Feb 17 16:25:31 crc kubenswrapper[4672]: I0217 16:25:31.039080 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"87b9cd39-fe22-48c7-af4e-c17be1205506","Type":"ContainerDied","Data":"e74948ac1f52e0b4b2d69e8b128b8f787cc0e4e75b01f1f1b36d718b82e2b2b4"} Feb 17 16:25:34 crc kubenswrapper[4672]: I0217 16:25:34.695015 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:25:34 crc kubenswrapper[4672]: I0217 16:25:34.800319 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b9cd39-fe22-48c7-af4e-c17be1205506-combined-ca-bundle\") pod \"87b9cd39-fe22-48c7-af4e-c17be1205506\" (UID: \"87b9cd39-fe22-48c7-af4e-c17be1205506\") " Feb 17 16:25:34 crc kubenswrapper[4672]: I0217 16:25:34.800392 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87b9cd39-fe22-48c7-af4e-c17be1205506-run-httpd\") pod \"87b9cd39-fe22-48c7-af4e-c17be1205506\" (UID: \"87b9cd39-fe22-48c7-af4e-c17be1205506\") " Feb 17 16:25:34 crc kubenswrapper[4672]: I0217 16:25:34.800554 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8p5w2\" (UniqueName: \"kubernetes.io/projected/87b9cd39-fe22-48c7-af4e-c17be1205506-kube-api-access-8p5w2\") pod \"87b9cd39-fe22-48c7-af4e-c17be1205506\" (UID: \"87b9cd39-fe22-48c7-af4e-c17be1205506\") " Feb 17 16:25:34 crc kubenswrapper[4672]: I0217 16:25:34.800585 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87b9cd39-fe22-48c7-af4e-c17be1205506-scripts\") pod \"87b9cd39-fe22-48c7-af4e-c17be1205506\" (UID: \"87b9cd39-fe22-48c7-af4e-c17be1205506\") " Feb 17 16:25:34 crc kubenswrapper[4672]: I0217 16:25:34.800809 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87b9cd39-fe22-48c7-af4e-c17be1205506-run-httpd" (OuterVolumeSpecName: "run-httpd") pod 
"87b9cd39-fe22-48c7-af4e-c17be1205506" (UID: "87b9cd39-fe22-48c7-af4e-c17be1205506"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:25:34 crc kubenswrapper[4672]: I0217 16:25:34.801596 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87b9cd39-fe22-48c7-af4e-c17be1205506-config-data\") pod \"87b9cd39-fe22-48c7-af4e-c17be1205506\" (UID: \"87b9cd39-fe22-48c7-af4e-c17be1205506\") " Feb 17 16:25:34 crc kubenswrapper[4672]: I0217 16:25:34.801624 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87b9cd39-fe22-48c7-af4e-c17be1205506-sg-core-conf-yaml\") pod \"87b9cd39-fe22-48c7-af4e-c17be1205506\" (UID: \"87b9cd39-fe22-48c7-af4e-c17be1205506\") " Feb 17 16:25:34 crc kubenswrapper[4672]: I0217 16:25:34.801660 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87b9cd39-fe22-48c7-af4e-c17be1205506-log-httpd\") pod \"87b9cd39-fe22-48c7-af4e-c17be1205506\" (UID: \"87b9cd39-fe22-48c7-af4e-c17be1205506\") " Feb 17 16:25:34 crc kubenswrapper[4672]: I0217 16:25:34.802271 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87b9cd39-fe22-48c7-af4e-c17be1205506-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "87b9cd39-fe22-48c7-af4e-c17be1205506" (UID: "87b9cd39-fe22-48c7-af4e-c17be1205506"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:25:34 crc kubenswrapper[4672]: I0217 16:25:34.802579 4672 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87b9cd39-fe22-48c7-af4e-c17be1205506-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:34 crc kubenswrapper[4672]: I0217 16:25:34.802596 4672 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87b9cd39-fe22-48c7-af4e-c17be1205506-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:34 crc kubenswrapper[4672]: I0217 16:25:34.806895 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87b9cd39-fe22-48c7-af4e-c17be1205506-scripts" (OuterVolumeSpecName: "scripts") pod "87b9cd39-fe22-48c7-af4e-c17be1205506" (UID: "87b9cd39-fe22-48c7-af4e-c17be1205506"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:34 crc kubenswrapper[4672]: I0217 16:25:34.807130 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87b9cd39-fe22-48c7-af4e-c17be1205506-kube-api-access-8p5w2" (OuterVolumeSpecName: "kube-api-access-8p5w2") pod "87b9cd39-fe22-48c7-af4e-c17be1205506" (UID: "87b9cd39-fe22-48c7-af4e-c17be1205506"). InnerVolumeSpecName "kube-api-access-8p5w2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:25:34 crc kubenswrapper[4672]: I0217 16:25:34.835682 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87b9cd39-fe22-48c7-af4e-c17be1205506-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "87b9cd39-fe22-48c7-af4e-c17be1205506" (UID: "87b9cd39-fe22-48c7-af4e-c17be1205506"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:34 crc kubenswrapper[4672]: I0217 16:25:34.897939 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87b9cd39-fe22-48c7-af4e-c17be1205506-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87b9cd39-fe22-48c7-af4e-c17be1205506" (UID: "87b9cd39-fe22-48c7-af4e-c17be1205506"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:34 crc kubenswrapper[4672]: I0217 16:25:34.901877 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87b9cd39-fe22-48c7-af4e-c17be1205506-config-data" (OuterVolumeSpecName: "config-data") pod "87b9cd39-fe22-48c7-af4e-c17be1205506" (UID: "87b9cd39-fe22-48c7-af4e-c17be1205506"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:34 crc kubenswrapper[4672]: I0217 16:25:34.904314 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87b9cd39-fe22-48c7-af4e-c17be1205506-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:34 crc kubenswrapper[4672]: I0217 16:25:34.904352 4672 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87b9cd39-fe22-48c7-af4e-c17be1205506-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:34 crc kubenswrapper[4672]: I0217 16:25:34.904367 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b9cd39-fe22-48c7-af4e-c17be1205506-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:34 crc kubenswrapper[4672]: I0217 16:25:34.904381 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8p5w2\" (UniqueName: \"kubernetes.io/projected/87b9cd39-fe22-48c7-af4e-c17be1205506-kube-api-access-8p5w2\") on node \"crc\" 
DevicePath \"\"" Feb 17 16:25:34 crc kubenswrapper[4672]: I0217 16:25:34.904394 4672 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87b9cd39-fe22-48c7-af4e-c17be1205506-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.083430 4672 generic.go:334] "Generic (PLEG): container finished" podID="87b9cd39-fe22-48c7-af4e-c17be1205506" containerID="e0a86cd3551a3fba4aaa408bf011dd96453eea04c3416f7d0371b5667ad685ae" exitCode=0 Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.083942 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87b9cd39-fe22-48c7-af4e-c17be1205506","Type":"ContainerDied","Data":"e0a86cd3551a3fba4aaa408bf011dd96453eea04c3416f7d0371b5667ad685ae"} Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.083984 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87b9cd39-fe22-48c7-af4e-c17be1205506","Type":"ContainerDied","Data":"ea4f71dd8c845cf04f6ef5dc8e69f64887a490e90cca88293396d0fd472a2dbb"} Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.084016 4672 scope.go:117] "RemoveContainer" containerID="00f32ce23e287500bf5dc8c81897bd77d1ac7b91839946f7e5a343316a032554" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.084207 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.125915 4672 scope.go:117] "RemoveContainer" containerID="8fb8b1ef49cb2b193a0927789536df4d8e7070f1f50e4faa57a3e3b051f355e6" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.143876 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:25:35 crc kubenswrapper[4672]: E0217 16:25:35.158662 4672 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87b9cd39_fe22_48c7_af4e_c17be1205506.slice\": RecentStats: unable to find data in memory cache]" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.160026 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.172954 4672 scope.go:117] "RemoveContainer" containerID="e74948ac1f52e0b4b2d69e8b128b8f787cc0e4e75b01f1f1b36d718b82e2b2b4" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.173101 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:25:35 crc kubenswrapper[4672]: E0217 16:25:35.173637 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87b9cd39-fe22-48c7-af4e-c17be1205506" containerName="ceilometer-notification-agent" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.173657 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="87b9cd39-fe22-48c7-af4e-c17be1205506" containerName="ceilometer-notification-agent" Feb 17 16:25:35 crc kubenswrapper[4672]: E0217 16:25:35.173674 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87b9cd39-fe22-48c7-af4e-c17be1205506" containerName="proxy-httpd" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.173681 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="87b9cd39-fe22-48c7-af4e-c17be1205506" 
containerName="proxy-httpd" Feb 17 16:25:35 crc kubenswrapper[4672]: E0217 16:25:35.173710 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87b9cd39-fe22-48c7-af4e-c17be1205506" containerName="ceilometer-central-agent" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.173717 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="87b9cd39-fe22-48c7-af4e-c17be1205506" containerName="ceilometer-central-agent" Feb 17 16:25:35 crc kubenswrapper[4672]: E0217 16:25:35.173730 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87b9cd39-fe22-48c7-af4e-c17be1205506" containerName="sg-core" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.173736 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="87b9cd39-fe22-48c7-af4e-c17be1205506" containerName="sg-core" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.173913 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="87b9cd39-fe22-48c7-af4e-c17be1205506" containerName="proxy-httpd" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.173931 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="87b9cd39-fe22-48c7-af4e-c17be1205506" containerName="ceilometer-central-agent" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.173945 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="87b9cd39-fe22-48c7-af4e-c17be1205506" containerName="ceilometer-notification-agent" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.173957 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="87b9cd39-fe22-48c7-af4e-c17be1205506" containerName="sg-core" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.177158 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.179432 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.182376 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.183475 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.208697 4672 scope.go:117] "RemoveContainer" containerID="e0a86cd3551a3fba4aaa408bf011dd96453eea04c3416f7d0371b5667ad685ae" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.235590 4672 scope.go:117] "RemoveContainer" containerID="00f32ce23e287500bf5dc8c81897bd77d1ac7b91839946f7e5a343316a032554" Feb 17 16:25:35 crc kubenswrapper[4672]: E0217 16:25:35.236189 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00f32ce23e287500bf5dc8c81897bd77d1ac7b91839946f7e5a343316a032554\": container with ID starting with 00f32ce23e287500bf5dc8c81897bd77d1ac7b91839946f7e5a343316a032554 not found: ID does not exist" containerID="00f32ce23e287500bf5dc8c81897bd77d1ac7b91839946f7e5a343316a032554" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.236316 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00f32ce23e287500bf5dc8c81897bd77d1ac7b91839946f7e5a343316a032554"} err="failed to get container status \"00f32ce23e287500bf5dc8c81897bd77d1ac7b91839946f7e5a343316a032554\": rpc error: code = NotFound desc = could not find container \"00f32ce23e287500bf5dc8c81897bd77d1ac7b91839946f7e5a343316a032554\": container with ID starting with 00f32ce23e287500bf5dc8c81897bd77d1ac7b91839946f7e5a343316a032554 not found: ID does not exist" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 
16:25:35.236409 4672 scope.go:117] "RemoveContainer" containerID="8fb8b1ef49cb2b193a0927789536df4d8e7070f1f50e4faa57a3e3b051f355e6" Feb 17 16:25:35 crc kubenswrapper[4672]: E0217 16:25:35.236978 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fb8b1ef49cb2b193a0927789536df4d8e7070f1f50e4faa57a3e3b051f355e6\": container with ID starting with 8fb8b1ef49cb2b193a0927789536df4d8e7070f1f50e4faa57a3e3b051f355e6 not found: ID does not exist" containerID="8fb8b1ef49cb2b193a0927789536df4d8e7070f1f50e4faa57a3e3b051f355e6" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.237006 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fb8b1ef49cb2b193a0927789536df4d8e7070f1f50e4faa57a3e3b051f355e6"} err="failed to get container status \"8fb8b1ef49cb2b193a0927789536df4d8e7070f1f50e4faa57a3e3b051f355e6\": rpc error: code = NotFound desc = could not find container \"8fb8b1ef49cb2b193a0927789536df4d8e7070f1f50e4faa57a3e3b051f355e6\": container with ID starting with 8fb8b1ef49cb2b193a0927789536df4d8e7070f1f50e4faa57a3e3b051f355e6 not found: ID does not exist" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.237019 4672 scope.go:117] "RemoveContainer" containerID="e74948ac1f52e0b4b2d69e8b128b8f787cc0e4e75b01f1f1b36d718b82e2b2b4" Feb 17 16:25:35 crc kubenswrapper[4672]: E0217 16:25:35.237228 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e74948ac1f52e0b4b2d69e8b128b8f787cc0e4e75b01f1f1b36d718b82e2b2b4\": container with ID starting with e74948ac1f52e0b4b2d69e8b128b8f787cc0e4e75b01f1f1b36d718b82e2b2b4 not found: ID does not exist" containerID="e74948ac1f52e0b4b2d69e8b128b8f787cc0e4e75b01f1f1b36d718b82e2b2b4" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.237342 4672 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e74948ac1f52e0b4b2d69e8b128b8f787cc0e4e75b01f1f1b36d718b82e2b2b4"} err="failed to get container status \"e74948ac1f52e0b4b2d69e8b128b8f787cc0e4e75b01f1f1b36d718b82e2b2b4\": rpc error: code = NotFound desc = could not find container \"e74948ac1f52e0b4b2d69e8b128b8f787cc0e4e75b01f1f1b36d718b82e2b2b4\": container with ID starting with e74948ac1f52e0b4b2d69e8b128b8f787cc0e4e75b01f1f1b36d718b82e2b2b4 not found: ID does not exist" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.237430 4672 scope.go:117] "RemoveContainer" containerID="e0a86cd3551a3fba4aaa408bf011dd96453eea04c3416f7d0371b5667ad685ae" Feb 17 16:25:35 crc kubenswrapper[4672]: E0217 16:25:35.237837 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0a86cd3551a3fba4aaa408bf011dd96453eea04c3416f7d0371b5667ad685ae\": container with ID starting with e0a86cd3551a3fba4aaa408bf011dd96453eea04c3416f7d0371b5667ad685ae not found: ID does not exist" containerID="e0a86cd3551a3fba4aaa408bf011dd96453eea04c3416f7d0371b5667ad685ae" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.237879 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0a86cd3551a3fba4aaa408bf011dd96453eea04c3416f7d0371b5667ad685ae"} err="failed to get container status \"e0a86cd3551a3fba4aaa408bf011dd96453eea04c3416f7d0371b5667ad685ae\": rpc error: code = NotFound desc = could not find container \"e0a86cd3551a3fba4aaa408bf011dd96453eea04c3416f7d0371b5667ad685ae\": container with ID starting with e0a86cd3551a3fba4aaa408bf011dd96453eea04c3416f7d0371b5667ad685ae not found: ID does not exist" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.312007 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-run-httpd\") pod \"ceilometer-0\" (UID: 
\"efa123df-bae8-4cdf-aa2b-cc425f4be0ff\") " pod="openstack/ceilometer-0" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.312085 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-log-httpd\") pod \"ceilometer-0\" (UID: \"efa123df-bae8-4cdf-aa2b-cc425f4be0ff\") " pod="openstack/ceilometer-0" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.312123 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-scripts\") pod \"ceilometer-0\" (UID: \"efa123df-bae8-4cdf-aa2b-cc425f4be0ff\") " pod="openstack/ceilometer-0" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.312183 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"efa123df-bae8-4cdf-aa2b-cc425f4be0ff\") " pod="openstack/ceilometer-0" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.312199 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9zzw\" (UniqueName: \"kubernetes.io/projected/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-kube-api-access-z9zzw\") pod \"ceilometer-0\" (UID: \"efa123df-bae8-4cdf-aa2b-cc425f4be0ff\") " pod="openstack/ceilometer-0" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.312218 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"efa123df-bae8-4cdf-aa2b-cc425f4be0ff\") " pod="openstack/ceilometer-0" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 
16:25:35.312246 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-config-data\") pod \"ceilometer-0\" (UID: \"efa123df-bae8-4cdf-aa2b-cc425f4be0ff\") " pod="openstack/ceilometer-0" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.413839 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-run-httpd\") pod \"ceilometer-0\" (UID: \"efa123df-bae8-4cdf-aa2b-cc425f4be0ff\") " pod="openstack/ceilometer-0" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.413883 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-log-httpd\") pod \"ceilometer-0\" (UID: \"efa123df-bae8-4cdf-aa2b-cc425f4be0ff\") " pod="openstack/ceilometer-0" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.413915 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-scripts\") pod \"ceilometer-0\" (UID: \"efa123df-bae8-4cdf-aa2b-cc425f4be0ff\") " pod="openstack/ceilometer-0" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.413970 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"efa123df-bae8-4cdf-aa2b-cc425f4be0ff\") " pod="openstack/ceilometer-0" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.413988 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9zzw\" (UniqueName: \"kubernetes.io/projected/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-kube-api-access-z9zzw\") pod 
\"ceilometer-0\" (UID: \"efa123df-bae8-4cdf-aa2b-cc425f4be0ff\") " pod="openstack/ceilometer-0" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.414009 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"efa123df-bae8-4cdf-aa2b-cc425f4be0ff\") " pod="openstack/ceilometer-0" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.414033 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-config-data\") pod \"ceilometer-0\" (UID: \"efa123df-bae8-4cdf-aa2b-cc425f4be0ff\") " pod="openstack/ceilometer-0" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.418985 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-config-data\") pod \"ceilometer-0\" (UID: \"efa123df-bae8-4cdf-aa2b-cc425f4be0ff\") " pod="openstack/ceilometer-0" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.419261 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-run-httpd\") pod \"ceilometer-0\" (UID: \"efa123df-bae8-4cdf-aa2b-cc425f4be0ff\") " pod="openstack/ceilometer-0" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.419452 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-log-httpd\") pod \"ceilometer-0\" (UID: \"efa123df-bae8-4cdf-aa2b-cc425f4be0ff\") " pod="openstack/ceilometer-0" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.422311 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-scripts\") pod \"ceilometer-0\" (UID: \"efa123df-bae8-4cdf-aa2b-cc425f4be0ff\") " pod="openstack/ceilometer-0" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.425806 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"efa123df-bae8-4cdf-aa2b-cc425f4be0ff\") " pod="openstack/ceilometer-0" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.426191 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"efa123df-bae8-4cdf-aa2b-cc425f4be0ff\") " pod="openstack/ceilometer-0" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.438017 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9zzw\" (UniqueName: \"kubernetes.io/projected/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-kube-api-access-z9zzw\") pod \"ceilometer-0\" (UID: \"efa123df-bae8-4cdf-aa2b-cc425f4be0ff\") " pod="openstack/ceilometer-0" Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.502460 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 16:25:35 crc kubenswrapper[4672]: I0217 16:25:35.976616 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87b9cd39-fe22-48c7-af4e-c17be1205506" path="/var/lib/kubelet/pods/87b9cd39-fe22-48c7-af4e-c17be1205506/volumes"
Feb 17 16:25:36 crc kubenswrapper[4672]: I0217 16:25:36.005737 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:25:36 crc kubenswrapper[4672]: I0217 16:25:36.097028 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efa123df-bae8-4cdf-aa2b-cc425f4be0ff","Type":"ContainerStarted","Data":"b563e345db894df346cf66325d0ad0a846f1709f8d4ba0b167287d1cb754c415"}
Feb 17 16:25:37 crc kubenswrapper[4672]: I0217 16:25:37.116493 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efa123df-bae8-4cdf-aa2b-cc425f4be0ff","Type":"ContainerStarted","Data":"c21460ed62d7ff1d837d52d7de1c007e2ef3b33f19fb6ecd52bcafe7222a193d"}
Feb 17 16:25:38 crc kubenswrapper[4672]: I0217 16:25:38.127114 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efa123df-bae8-4cdf-aa2b-cc425f4be0ff","Type":"ContainerStarted","Data":"1d070aec6ea1d9387b41610394a85ee80856321a47da867b016ebe3e7d6cf9ab"}
Feb 17 16:25:38 crc kubenswrapper[4672]: I0217 16:25:38.128887 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efa123df-bae8-4cdf-aa2b-cc425f4be0ff","Type":"ContainerStarted","Data":"763dbb3aebc3aa9e1c574c33f845a65ec8313c095d5e09552d05b001ea10d6cf"}
Feb 17 16:25:38 crc kubenswrapper[4672]: I0217 16:25:38.454034 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Feb 17 16:25:38 crc kubenswrapper[4672]: I0217 16:25:38.910784 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-c5jfb"]
Feb 17 16:25:38 crc kubenswrapper[4672]: I0217 16:25:38.912619 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-c5jfb"
Feb 17 16:25:38 crc kubenswrapper[4672]: I0217 16:25:38.915209 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Feb 17 16:25:38 crc kubenswrapper[4672]: I0217 16:25:38.915531 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Feb 17 16:25:38 crc kubenswrapper[4672]: I0217 16:25:38.932020 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-c5jfb"]
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.016783 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d89089a9-0ddc-4c81-a639-dd9dcf7e9163-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-c5jfb\" (UID: \"d89089a9-0ddc-4c81-a639-dd9dcf7e9163\") " pod="openstack/nova-cell0-cell-mapping-c5jfb"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.016898 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d89089a9-0ddc-4c81-a639-dd9dcf7e9163-scripts\") pod \"nova-cell0-cell-mapping-c5jfb\" (UID: \"d89089a9-0ddc-4c81-a639-dd9dcf7e9163\") " pod="openstack/nova-cell0-cell-mapping-c5jfb"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.016937 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d89089a9-0ddc-4c81-a639-dd9dcf7e9163-config-data\") pod \"nova-cell0-cell-mapping-c5jfb\" (UID: \"d89089a9-0ddc-4c81-a639-dd9dcf7e9163\") " pod="openstack/nova-cell0-cell-mapping-c5jfb"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.016997 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr85k\" (UniqueName: \"kubernetes.io/projected/d89089a9-0ddc-4c81-a639-dd9dcf7e9163-kube-api-access-dr85k\") pod \"nova-cell0-cell-mapping-c5jfb\" (UID: \"d89089a9-0ddc-4c81-a639-dd9dcf7e9163\") " pod="openstack/nova-cell0-cell-mapping-c5jfb"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.037693 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.038866 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.041583 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.084671 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.118333 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r77pc\" (UniqueName: \"kubernetes.io/projected/8d0299a6-8cb1-4a31-89ba-1bb27adb34d1-kube-api-access-r77pc\") pod \"nova-scheduler-0\" (UID: \"8d0299a6-8cb1-4a31-89ba-1bb27adb34d1\") " pod="openstack/nova-scheduler-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.118383 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr85k\" (UniqueName: \"kubernetes.io/projected/d89089a9-0ddc-4c81-a639-dd9dcf7e9163-kube-api-access-dr85k\") pod \"nova-cell0-cell-mapping-c5jfb\" (UID: \"d89089a9-0ddc-4c81-a639-dd9dcf7e9163\") " pod="openstack/nova-cell0-cell-mapping-c5jfb"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.118468 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d89089a9-0ddc-4c81-a639-dd9dcf7e9163-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-c5jfb\" (UID: \"d89089a9-0ddc-4c81-a639-dd9dcf7e9163\") " pod="openstack/nova-cell0-cell-mapping-c5jfb"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.118541 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d0299a6-8cb1-4a31-89ba-1bb27adb34d1-config-data\") pod \"nova-scheduler-0\" (UID: \"8d0299a6-8cb1-4a31-89ba-1bb27adb34d1\") " pod="openstack/nova-scheduler-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.118615 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d0299a6-8cb1-4a31-89ba-1bb27adb34d1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8d0299a6-8cb1-4a31-89ba-1bb27adb34d1\") " pod="openstack/nova-scheduler-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.118633 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d89089a9-0ddc-4c81-a639-dd9dcf7e9163-scripts\") pod \"nova-cell0-cell-mapping-c5jfb\" (UID: \"d89089a9-0ddc-4c81-a639-dd9dcf7e9163\") " pod="openstack/nova-cell0-cell-mapping-c5jfb"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.118689 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d89089a9-0ddc-4c81-a639-dd9dcf7e9163-config-data\") pod \"nova-cell0-cell-mapping-c5jfb\" (UID: \"d89089a9-0ddc-4c81-a639-dd9dcf7e9163\") " pod="openstack/nova-cell0-cell-mapping-c5jfb"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.125893 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d89089a9-0ddc-4c81-a639-dd9dcf7e9163-config-data\") pod \"nova-cell0-cell-mapping-c5jfb\" (UID: \"d89089a9-0ddc-4c81-a639-dd9dcf7e9163\") " pod="openstack/nova-cell0-cell-mapping-c5jfb"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.128164 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d89089a9-0ddc-4c81-a639-dd9dcf7e9163-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-c5jfb\" (UID: \"d89089a9-0ddc-4c81-a639-dd9dcf7e9163\") " pod="openstack/nova-cell0-cell-mapping-c5jfb"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.134154 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d89089a9-0ddc-4c81-a639-dd9dcf7e9163-scripts\") pod \"nova-cell0-cell-mapping-c5jfb\" (UID: \"d89089a9-0ddc-4c81-a639-dd9dcf7e9163\") " pod="openstack/nova-cell0-cell-mapping-c5jfb"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.150576 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.152384 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.153353 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr85k\" (UniqueName: \"kubernetes.io/projected/d89089a9-0ddc-4c81-a639-dd9dcf7e9163-kube-api-access-dr85k\") pod \"nova-cell0-cell-mapping-c5jfb\" (UID: \"d89089a9-0ddc-4c81-a639-dd9dcf7e9163\") " pod="openstack/nova-cell0-cell-mapping-c5jfb"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.155302 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.225692 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r77pc\" (UniqueName: \"kubernetes.io/projected/8d0299a6-8cb1-4a31-89ba-1bb27adb34d1-kube-api-access-r77pc\") pod \"nova-scheduler-0\" (UID: \"8d0299a6-8cb1-4a31-89ba-1bb27adb34d1\") " pod="openstack/nova-scheduler-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.225841 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d0299a6-8cb1-4a31-89ba-1bb27adb34d1-config-data\") pod \"nova-scheduler-0\" (UID: \"8d0299a6-8cb1-4a31-89ba-1bb27adb34d1\") " pod="openstack/nova-scheduler-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.225901 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d0299a6-8cb1-4a31-89ba-1bb27adb34d1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8d0299a6-8cb1-4a31-89ba-1bb27adb34d1\") " pod="openstack/nova-scheduler-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.231207 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-c5jfb"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.240230 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.240310 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d0299a6-8cb1-4a31-89ba-1bb27adb34d1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8d0299a6-8cb1-4a31-89ba-1bb27adb34d1\") " pod="openstack/nova-scheduler-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.248074 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d0299a6-8cb1-4a31-89ba-1bb27adb34d1-config-data\") pod \"nova-scheduler-0\" (UID: \"8d0299a6-8cb1-4a31-89ba-1bb27adb34d1\") " pod="openstack/nova-scheduler-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.295058 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r77pc\" (UniqueName: \"kubernetes.io/projected/8d0299a6-8cb1-4a31-89ba-1bb27adb34d1-kube-api-access-r77pc\") pod \"nova-scheduler-0\" (UID: \"8d0299a6-8cb1-4a31-89ba-1bb27adb34d1\") " pod="openstack/nova-scheduler-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.325975 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.328531 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8012bab4-6bda-4be8-b98b-4c46b99201e4-config-data\") pod \"nova-api-0\" (UID: \"8012bab4-6bda-4be8-b98b-4c46b99201e4\") " pod="openstack/nova-api-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.328597 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4dn4\" (UniqueName: \"kubernetes.io/projected/8012bab4-6bda-4be8-b98b-4c46b99201e4-kube-api-access-n4dn4\") pod \"nova-api-0\" (UID: \"8012bab4-6bda-4be8-b98b-4c46b99201e4\") " pod="openstack/nova-api-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.328633 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8012bab4-6bda-4be8-b98b-4c46b99201e4-logs\") pod \"nova-api-0\" (UID: \"8012bab4-6bda-4be8-b98b-4c46b99201e4\") " pod="openstack/nova-api-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.328727 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8012bab4-6bda-4be8-b98b-4c46b99201e4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8012bab4-6bda-4be8-b98b-4c46b99201e4\") " pod="openstack/nova-api-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.341474 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.346780 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.359963 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.368983 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.433273 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgvcm\" (UniqueName: \"kubernetes.io/projected/81df8d25-8781-4cf7-ad12-ce90fb01aa1e-kube-api-access-jgvcm\") pod \"nova-cell1-novncproxy-0\" (UID: \"81df8d25-8781-4cf7-ad12-ce90fb01aa1e\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.433351 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81df8d25-8781-4cf7-ad12-ce90fb01aa1e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"81df8d25-8781-4cf7-ad12-ce90fb01aa1e\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.433382 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8012bab4-6bda-4be8-b98b-4c46b99201e4-config-data\") pod \"nova-api-0\" (UID: \"8012bab4-6bda-4be8-b98b-4c46b99201e4\") " pod="openstack/nova-api-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.433428 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4dn4\" (UniqueName: \"kubernetes.io/projected/8012bab4-6bda-4be8-b98b-4c46b99201e4-kube-api-access-n4dn4\") pod \"nova-api-0\" (UID: \"8012bab4-6bda-4be8-b98b-4c46b99201e4\") " pod="openstack/nova-api-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.433460 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8012bab4-6bda-4be8-b98b-4c46b99201e4-logs\") pod \"nova-api-0\" (UID: \"8012bab4-6bda-4be8-b98b-4c46b99201e4\") " pod="openstack/nova-api-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.433593 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81df8d25-8781-4cf7-ad12-ce90fb01aa1e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"81df8d25-8781-4cf7-ad12-ce90fb01aa1e\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.433620 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8012bab4-6bda-4be8-b98b-4c46b99201e4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8012bab4-6bda-4be8-b98b-4c46b99201e4\") " pod="openstack/nova-api-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.438557 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8012bab4-6bda-4be8-b98b-4c46b99201e4-logs\") pod \"nova-api-0\" (UID: \"8012bab4-6bda-4be8-b98b-4c46b99201e4\") " pod="openstack/nova-api-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.438747 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.440577 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.445060 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.445129 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8012bab4-6bda-4be8-b98b-4c46b99201e4-config-data\") pod \"nova-api-0\" (UID: \"8012bab4-6bda-4be8-b98b-4c46b99201e4\") " pod="openstack/nova-api-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.446063 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8012bab4-6bda-4be8-b98b-4c46b99201e4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8012bab4-6bda-4be8-b98b-4c46b99201e4\") " pod="openstack/nova-api-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.454772 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.466056 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4dn4\" (UniqueName: \"kubernetes.io/projected/8012bab4-6bda-4be8-b98b-4c46b99201e4-kube-api-access-n4dn4\") pod \"nova-api-0\" (UID: \"8012bab4-6bda-4be8-b98b-4c46b99201e4\") " pod="openstack/nova-api-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.474580 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d578b86f9-6qmnb"]
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.476595 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d578b86f9-6qmnb"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.503279 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d578b86f9-6qmnb"]
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.536999 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tktdf\" (UniqueName: \"kubernetes.io/projected/3d653c18-77bd-4bbd-9ad3-d2469bfec0c9-kube-api-access-tktdf\") pod \"nova-metadata-0\" (UID: \"3d653c18-77bd-4bbd-9ad3-d2469bfec0c9\") " pod="openstack/nova-metadata-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.537049 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgvcm\" (UniqueName: \"kubernetes.io/projected/81df8d25-8781-4cf7-ad12-ce90fb01aa1e-kube-api-access-jgvcm\") pod \"nova-cell1-novncproxy-0\" (UID: \"81df8d25-8781-4cf7-ad12-ce90fb01aa1e\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.537101 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81df8d25-8781-4cf7-ad12-ce90fb01aa1e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"81df8d25-8781-4cf7-ad12-ce90fb01aa1e\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.537142 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/faa984a6-743c-4fec-a4d5-0555ad87604d-config\") pod \"dnsmasq-dns-5d578b86f9-6qmnb\" (UID: \"faa984a6-743c-4fec-a4d5-0555ad87604d\") " pod="openstack/dnsmasq-dns-5d578b86f9-6qmnb"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.537159 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/faa984a6-743c-4fec-a4d5-0555ad87604d-ovsdbserver-nb\") pod \"dnsmasq-dns-5d578b86f9-6qmnb\" (UID: \"faa984a6-743c-4fec-a4d5-0555ad87604d\") " pod="openstack/dnsmasq-dns-5d578b86f9-6qmnb"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.537192 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/faa984a6-743c-4fec-a4d5-0555ad87604d-dns-swift-storage-0\") pod \"dnsmasq-dns-5d578b86f9-6qmnb\" (UID: \"faa984a6-743c-4fec-a4d5-0555ad87604d\") " pod="openstack/dnsmasq-dns-5d578b86f9-6qmnb"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.537217 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd5sz\" (UniqueName: \"kubernetes.io/projected/faa984a6-743c-4fec-a4d5-0555ad87604d-kube-api-access-fd5sz\") pod \"dnsmasq-dns-5d578b86f9-6qmnb\" (UID: \"faa984a6-743c-4fec-a4d5-0555ad87604d\") " pod="openstack/dnsmasq-dns-5d578b86f9-6qmnb"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.537243 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d653c18-77bd-4bbd-9ad3-d2469bfec0c9-config-data\") pod \"nova-metadata-0\" (UID: \"3d653c18-77bd-4bbd-9ad3-d2469bfec0c9\") " pod="openstack/nova-metadata-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.537266 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d653c18-77bd-4bbd-9ad3-d2469bfec0c9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3d653c18-77bd-4bbd-9ad3-d2469bfec0c9\") " pod="openstack/nova-metadata-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.537301 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/faa984a6-743c-4fec-a4d5-0555ad87604d-dns-svc\") pod \"dnsmasq-dns-5d578b86f9-6qmnb\" (UID: \"faa984a6-743c-4fec-a4d5-0555ad87604d\") " pod="openstack/dnsmasq-dns-5d578b86f9-6qmnb"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.537321 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d653c18-77bd-4bbd-9ad3-d2469bfec0c9-logs\") pod \"nova-metadata-0\" (UID: \"3d653c18-77bd-4bbd-9ad3-d2469bfec0c9\") " pod="openstack/nova-metadata-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.537376 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81df8d25-8781-4cf7-ad12-ce90fb01aa1e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"81df8d25-8781-4cf7-ad12-ce90fb01aa1e\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.537397 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/faa984a6-743c-4fec-a4d5-0555ad87604d-ovsdbserver-sb\") pod \"dnsmasq-dns-5d578b86f9-6qmnb\" (UID: \"faa984a6-743c-4fec-a4d5-0555ad87604d\") " pod="openstack/dnsmasq-dns-5d578b86f9-6qmnb"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.547044 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81df8d25-8781-4cf7-ad12-ce90fb01aa1e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"81df8d25-8781-4cf7-ad12-ce90fb01aa1e\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.547501 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81df8d25-8781-4cf7-ad12-ce90fb01aa1e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"81df8d25-8781-4cf7-ad12-ce90fb01aa1e\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.554685 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgvcm\" (UniqueName: \"kubernetes.io/projected/81df8d25-8781-4cf7-ad12-ce90fb01aa1e-kube-api-access-jgvcm\") pod \"nova-cell1-novncproxy-0\" (UID: \"81df8d25-8781-4cf7-ad12-ce90fb01aa1e\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.639667 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/faa984a6-743c-4fec-a4d5-0555ad87604d-ovsdbserver-sb\") pod \"dnsmasq-dns-5d578b86f9-6qmnb\" (UID: \"faa984a6-743c-4fec-a4d5-0555ad87604d\") " pod="openstack/dnsmasq-dns-5d578b86f9-6qmnb"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.640011 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tktdf\" (UniqueName: \"kubernetes.io/projected/3d653c18-77bd-4bbd-9ad3-d2469bfec0c9-kube-api-access-tktdf\") pod \"nova-metadata-0\" (UID: \"3d653c18-77bd-4bbd-9ad3-d2469bfec0c9\") " pod="openstack/nova-metadata-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.640072 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/faa984a6-743c-4fec-a4d5-0555ad87604d-config\") pod \"dnsmasq-dns-5d578b86f9-6qmnb\" (UID: \"faa984a6-743c-4fec-a4d5-0555ad87604d\") " pod="openstack/dnsmasq-dns-5d578b86f9-6qmnb"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.640089 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/faa984a6-743c-4fec-a4d5-0555ad87604d-ovsdbserver-nb\") pod \"dnsmasq-dns-5d578b86f9-6qmnb\" (UID: \"faa984a6-743c-4fec-a4d5-0555ad87604d\") " pod="openstack/dnsmasq-dns-5d578b86f9-6qmnb"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.640117 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/faa984a6-743c-4fec-a4d5-0555ad87604d-dns-swift-storage-0\") pod \"dnsmasq-dns-5d578b86f9-6qmnb\" (UID: \"faa984a6-743c-4fec-a4d5-0555ad87604d\") " pod="openstack/dnsmasq-dns-5d578b86f9-6qmnb"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.640142 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd5sz\" (UniqueName: \"kubernetes.io/projected/faa984a6-743c-4fec-a4d5-0555ad87604d-kube-api-access-fd5sz\") pod \"dnsmasq-dns-5d578b86f9-6qmnb\" (UID: \"faa984a6-743c-4fec-a4d5-0555ad87604d\") " pod="openstack/dnsmasq-dns-5d578b86f9-6qmnb"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.640167 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d653c18-77bd-4bbd-9ad3-d2469bfec0c9-config-data\") pod \"nova-metadata-0\" (UID: \"3d653c18-77bd-4bbd-9ad3-d2469bfec0c9\") " pod="openstack/nova-metadata-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.640192 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d653c18-77bd-4bbd-9ad3-d2469bfec0c9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3d653c18-77bd-4bbd-9ad3-d2469bfec0c9\") " pod="openstack/nova-metadata-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.640223 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/faa984a6-743c-4fec-a4d5-0555ad87604d-dns-svc\") pod \"dnsmasq-dns-5d578b86f9-6qmnb\" (UID: \"faa984a6-743c-4fec-a4d5-0555ad87604d\") " pod="openstack/dnsmasq-dns-5d578b86f9-6qmnb"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.640246 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d653c18-77bd-4bbd-9ad3-d2469bfec0c9-logs\") pod \"nova-metadata-0\" (UID: \"3d653c18-77bd-4bbd-9ad3-d2469bfec0c9\") " pod="openstack/nova-metadata-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.640657 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d653c18-77bd-4bbd-9ad3-d2469bfec0c9-logs\") pod \"nova-metadata-0\" (UID: \"3d653c18-77bd-4bbd-9ad3-d2469bfec0c9\") " pod="openstack/nova-metadata-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.646672 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/faa984a6-743c-4fec-a4d5-0555ad87604d-ovsdbserver-sb\") pod \"dnsmasq-dns-5d578b86f9-6qmnb\" (UID: \"faa984a6-743c-4fec-a4d5-0555ad87604d\") " pod="openstack/dnsmasq-dns-5d578b86f9-6qmnb"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.648890 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/faa984a6-743c-4fec-a4d5-0555ad87604d-ovsdbserver-nb\") pod \"dnsmasq-dns-5d578b86f9-6qmnb\" (UID: \"faa984a6-743c-4fec-a4d5-0555ad87604d\") " pod="openstack/dnsmasq-dns-5d578b86f9-6qmnb"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.648901 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/faa984a6-743c-4fec-a4d5-0555ad87604d-dns-svc\") pod \"dnsmasq-dns-5d578b86f9-6qmnb\" (UID: \"faa984a6-743c-4fec-a4d5-0555ad87604d\") " pod="openstack/dnsmasq-dns-5d578b86f9-6qmnb"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.649308 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/faa984a6-743c-4fec-a4d5-0555ad87604d-config\") pod \"dnsmasq-dns-5d578b86f9-6qmnb\" (UID: \"faa984a6-743c-4fec-a4d5-0555ad87604d\") " pod="openstack/dnsmasq-dns-5d578b86f9-6qmnb"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.657698 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d653c18-77bd-4bbd-9ad3-d2469bfec0c9-config-data\") pod \"nova-metadata-0\" (UID: \"3d653c18-77bd-4bbd-9ad3-d2469bfec0c9\") " pod="openstack/nova-metadata-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.662452 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tktdf\" (UniqueName: \"kubernetes.io/projected/3d653c18-77bd-4bbd-9ad3-d2469bfec0c9-kube-api-access-tktdf\") pod \"nova-metadata-0\" (UID: \"3d653c18-77bd-4bbd-9ad3-d2469bfec0c9\") " pod="openstack/nova-metadata-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.665453 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d653c18-77bd-4bbd-9ad3-d2469bfec0c9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3d653c18-77bd-4bbd-9ad3-d2469bfec0c9\") " pod="openstack/nova-metadata-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.666013 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/faa984a6-743c-4fec-a4d5-0555ad87604d-dns-swift-storage-0\") pod \"dnsmasq-dns-5d578b86f9-6qmnb\" (UID: \"faa984a6-743c-4fec-a4d5-0555ad87604d\") " pod="openstack/dnsmasq-dns-5d578b86f9-6qmnb"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.691232 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd5sz\" (UniqueName: \"kubernetes.io/projected/faa984a6-743c-4fec-a4d5-0555ad87604d-kube-api-access-fd5sz\") pod \"dnsmasq-dns-5d578b86f9-6qmnb\" (UID: \"faa984a6-743c-4fec-a4d5-0555ad87604d\") " pod="openstack/dnsmasq-dns-5d578b86f9-6qmnb"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.718393 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.733187 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.767083 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 17 16:25:39 crc kubenswrapper[4672]: I0217 16:25:39.825688 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d578b86f9-6qmnb"
Feb 17 16:25:40 crc kubenswrapper[4672]: I0217 16:25:40.014641 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-c5jfb"]
Feb 17 16:25:40 crc kubenswrapper[4672]: I0217 16:25:40.115915 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 17 16:25:40 crc kubenswrapper[4672]: I0217 16:25:40.125153 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4jq6r"]
Feb 17 16:25:40 crc kubenswrapper[4672]: I0217 16:25:40.137721 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4jq6r"]
Feb 17 16:25:40 crc kubenswrapper[4672]: I0217 16:25:40.137812 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4jq6r"
Feb 17 16:25:40 crc kubenswrapper[4672]: I0217 16:25:40.145284 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Feb 17 16:25:40 crc kubenswrapper[4672]: I0217 16:25:40.154035 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Feb 17 16:25:40 crc kubenswrapper[4672]: I0217 16:25:40.204658 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8d0299a6-8cb1-4a31-89ba-1bb27adb34d1","Type":"ContainerStarted","Data":"7da6f902553e9c9900548d3ccad279045b6c7b668c71dbdb03309866fc0198f1"}
Feb 17 16:25:40 crc kubenswrapper[4672]: I0217 16:25:40.211920 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efa123df-bae8-4cdf-aa2b-cc425f4be0ff","Type":"ContainerStarted","Data":"5a2c65a5e75413a4ef56a14d7a367111503642497a8f5b9c8024db288fc3c809"}
Feb 17 16:25:40 crc kubenswrapper[4672]: I0217 16:25:40.212058 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 17 16:25:40 crc kubenswrapper[4672]: I0217 16:25:40.214252 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-c5jfb" event={"ID":"d89089a9-0ddc-4c81-a639-dd9dcf7e9163","Type":"ContainerStarted","Data":"43eca27d58428f15146baed091ac26a14dd8f2fb9e38f540d2ab0cbd9f3a2d9f"}
Feb 17 16:25:40 crc kubenswrapper[4672]: I0217 16:25:40.245389 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.197820996 podStartE2EDuration="5.2453733s" podCreationTimestamp="2026-02-17 16:25:35 +0000 UTC" firstStartedPulling="2026-02-17 16:25:36.017563382 +0000 UTC m=+1344.771652154" lastFinishedPulling="2026-02-17 16:25:39.065115726 +0000 UTC m=+1347.819204458" observedRunningTime="2026-02-17 16:25:40.227433047 +0000 UTC m=+1348.981521789" watchObservedRunningTime="2026-02-17 16:25:40.2453733 +0000 UTC m=+1348.999462032"
Feb 17 16:25:40 crc kubenswrapper[4672]: I0217 16:25:40.260415 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/032644e0-8b08-4138-8e14-aee003b214d2-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-4jq6r\" (UID: \"032644e0-8b08-4138-8e14-aee003b214d2\") " pod="openstack/nova-cell1-conductor-db-sync-4jq6r"
Feb 17 16:25:40 crc kubenswrapper[4672]: I0217 16:25:40.260527 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/032644e0-8b08-4138-8e14-aee003b214d2-scripts\") pod \"nova-cell1-conductor-db-sync-4jq6r\" (UID: \"032644e0-8b08-4138-8e14-aee003b214d2\") " pod="openstack/nova-cell1-conductor-db-sync-4jq6r"
Feb 17 16:25:40 crc kubenswrapper[4672]: I0217 16:25:40.260615 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnr9f\" (UniqueName: \"kubernetes.io/projected/032644e0-8b08-4138-8e14-aee003b214d2-kube-api-access-qnr9f\") pod \"nova-cell1-conductor-db-sync-4jq6r\" (UID: \"032644e0-8b08-4138-8e14-aee003b214d2\") " pod="openstack/nova-cell1-conductor-db-sync-4jq6r"
Feb 17 16:25:40 crc kubenswrapper[4672]: I0217 16:25:40.260682 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/032644e0-8b08-4138-8e14-aee003b214d2-config-data\") pod \"nova-cell1-conductor-db-sync-4jq6r\" (UID: \"032644e0-8b08-4138-8e14-aee003b214d2\") " pod="openstack/nova-cell1-conductor-db-sync-4jq6r"
Feb 17 16:25:40 crc kubenswrapper[4672]: I0217 16:25:40.363090 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/032644e0-8b08-4138-8e14-aee003b214d2-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-4jq6r\" (UID: \"032644e0-8b08-4138-8e14-aee003b214d2\") " pod="openstack/nova-cell1-conductor-db-sync-4jq6r"
Feb 17 16:25:40 crc kubenswrapper[4672]: I0217 16:25:40.363149 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/032644e0-8b08-4138-8e14-aee003b214d2-scripts\") pod \"nova-cell1-conductor-db-sync-4jq6r\" (UID: \"032644e0-8b08-4138-8e14-aee003b214d2\") " pod="openstack/nova-cell1-conductor-db-sync-4jq6r"
Feb 17 16:25:40 crc kubenswrapper[4672]: I0217 16:25:40.363228 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnr9f\" (UniqueName: \"kubernetes.io/projected/032644e0-8b08-4138-8e14-aee003b214d2-kube-api-access-qnr9f\") pod \"nova-cell1-conductor-db-sync-4jq6r\" (UID: \"032644e0-8b08-4138-8e14-aee003b214d2\") " pod="openstack/nova-cell1-conductor-db-sync-4jq6r"
Feb 17 16:25:40 crc kubenswrapper[4672]: I0217 16:25:40.363279 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/032644e0-8b08-4138-8e14-aee003b214d2-config-data\") pod \"nova-cell1-conductor-db-sync-4jq6r\" (UID: \"032644e0-8b08-4138-8e14-aee003b214d2\") " pod="openstack/nova-cell1-conductor-db-sync-4jq6r"
Feb 17 16:25:40 crc kubenswrapper[4672]: I0217 16:25:40.368978 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/032644e0-8b08-4138-8e14-aee003b214d2-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-4jq6r\" (UID: \"032644e0-8b08-4138-8e14-aee003b214d2\") " pod="openstack/nova-cell1-conductor-db-sync-4jq6r"
Feb 17 16:25:40 crc kubenswrapper[4672]: I0217 16:25:40.378086 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName:
\"kubernetes.io/secret/032644e0-8b08-4138-8e14-aee003b214d2-config-data\") pod \"nova-cell1-conductor-db-sync-4jq6r\" (UID: \"032644e0-8b08-4138-8e14-aee003b214d2\") " pod="openstack/nova-cell1-conductor-db-sync-4jq6r" Feb 17 16:25:40 crc kubenswrapper[4672]: I0217 16:25:40.380543 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnr9f\" (UniqueName: \"kubernetes.io/projected/032644e0-8b08-4138-8e14-aee003b214d2-kube-api-access-qnr9f\") pod \"nova-cell1-conductor-db-sync-4jq6r\" (UID: \"032644e0-8b08-4138-8e14-aee003b214d2\") " pod="openstack/nova-cell1-conductor-db-sync-4jq6r" Feb 17 16:25:40 crc kubenswrapper[4672]: I0217 16:25:40.400115 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/032644e0-8b08-4138-8e14-aee003b214d2-scripts\") pod \"nova-cell1-conductor-db-sync-4jq6r\" (UID: \"032644e0-8b08-4138-8e14-aee003b214d2\") " pod="openstack/nova-cell1-conductor-db-sync-4jq6r" Feb 17 16:25:40 crc kubenswrapper[4672]: I0217 16:25:40.569116 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4jq6r" Feb 17 16:25:40 crc kubenswrapper[4672]: I0217 16:25:40.628623 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:25:40 crc kubenswrapper[4672]: W0217 16:25:40.630994 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81df8d25_8781_4cf7_ad12_ce90fb01aa1e.slice/crio-2ec260169a2347ad090462f514d67443e561b88c39e6abed482ccf2f1c7576e7 WatchSource:0}: Error finding container 2ec260169a2347ad090462f514d67443e561b88c39e6abed482ccf2f1c7576e7: Status 404 returned error can't find the container with id 2ec260169a2347ad090462f514d67443e561b88c39e6abed482ccf2f1c7576e7 Feb 17 16:25:40 crc kubenswrapper[4672]: I0217 16:25:40.639265 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:25:40 crc kubenswrapper[4672]: I0217 16:25:40.666734 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d578b86f9-6qmnb"] Feb 17 16:25:40 crc kubenswrapper[4672]: I0217 16:25:40.677211 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:25:41 crc kubenswrapper[4672]: I0217 16:25:41.150681 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4jq6r"] Feb 17 16:25:41 crc kubenswrapper[4672]: I0217 16:25:41.256401 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"81df8d25-8781-4cf7-ad12-ce90fb01aa1e","Type":"ContainerStarted","Data":"2ec260169a2347ad090462f514d67443e561b88c39e6abed482ccf2f1c7576e7"} Feb 17 16:25:41 crc kubenswrapper[4672]: I0217 16:25:41.258792 4672 generic.go:334] "Generic (PLEG): container finished" podID="faa984a6-743c-4fec-a4d5-0555ad87604d" containerID="180881eb289bbe97e38ff94ae68f8729879812f26f108ba62576825b4f8fbc5c" exitCode=0 Feb 17 16:25:41 crc 
kubenswrapper[4672]: I0217 16:25:41.258872 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d578b86f9-6qmnb" event={"ID":"faa984a6-743c-4fec-a4d5-0555ad87604d","Type":"ContainerDied","Data":"180881eb289bbe97e38ff94ae68f8729879812f26f108ba62576825b4f8fbc5c"} Feb 17 16:25:41 crc kubenswrapper[4672]: I0217 16:25:41.258919 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d578b86f9-6qmnb" event={"ID":"faa984a6-743c-4fec-a4d5-0555ad87604d","Type":"ContainerStarted","Data":"41701d7258daa2ddf2901d6ec81141a9dd616a89578da12ce649c237a95e176e"} Feb 17 16:25:41 crc kubenswrapper[4672]: I0217 16:25:41.261904 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4jq6r" event={"ID":"032644e0-8b08-4138-8e14-aee003b214d2","Type":"ContainerStarted","Data":"d4cf4985e73f3e1c89eb34af1d610017f5eafbb9902792d50b9992bc1e8d5307"} Feb 17 16:25:41 crc kubenswrapper[4672]: I0217 16:25:41.264812 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3d653c18-77bd-4bbd-9ad3-d2469bfec0c9","Type":"ContainerStarted","Data":"31fdc54fdd2bb146796876f8092573187641dafc8a73a4a6267955ad17f35b2e"} Feb 17 16:25:41 crc kubenswrapper[4672]: I0217 16:25:41.268992 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8012bab4-6bda-4be8-b98b-4c46b99201e4","Type":"ContainerStarted","Data":"99cd568c6fb7f49087bea8dec8b2ace36cfc1e5302e5bf84f1cee755ddf2babb"} Feb 17 16:25:41 crc kubenswrapper[4672]: I0217 16:25:41.282073 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-c5jfb" event={"ID":"d89089a9-0ddc-4c81-a639-dd9dcf7e9163","Type":"ContainerStarted","Data":"b493038888d06908685ef6a56b380cf4b3ea8e5e5b0673760d178413b0c5d528"} Feb 17 16:25:41 crc kubenswrapper[4672]: I0217 16:25:41.336646 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell0-cell-mapping-c5jfb" podStartSLOduration=3.336629989 podStartE2EDuration="3.336629989s" podCreationTimestamp="2026-02-17 16:25:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:25:41.310217113 +0000 UTC m=+1350.064305855" watchObservedRunningTime="2026-02-17 16:25:41.336629989 +0000 UTC m=+1350.090718721" Feb 17 16:25:42 crc kubenswrapper[4672]: I0217 16:25:42.308091 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d578b86f9-6qmnb" event={"ID":"faa984a6-743c-4fec-a4d5-0555ad87604d","Type":"ContainerStarted","Data":"17ae1503b23ad7c97e271069d0bd671a2a3d8afcd910c089bd61074dd120f9bc"} Feb 17 16:25:42 crc kubenswrapper[4672]: I0217 16:25:42.308366 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d578b86f9-6qmnb" Feb 17 16:25:42 crc kubenswrapper[4672]: I0217 16:25:42.317052 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4jq6r" event={"ID":"032644e0-8b08-4138-8e14-aee003b214d2","Type":"ContainerStarted","Data":"8f2a24d95a39e2bdc52a59a549c1d20dc5cd9223153269c654366b9b645808b5"} Feb 17 16:25:42 crc kubenswrapper[4672]: I0217 16:25:42.349384 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d578b86f9-6qmnb" podStartSLOduration=3.3493653979999998 podStartE2EDuration="3.349365398s" podCreationTimestamp="2026-02-17 16:25:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:25:42.341539852 +0000 UTC m=+1351.095628584" watchObservedRunningTime="2026-02-17 16:25:42.349365398 +0000 UTC m=+1351.103454130" Feb 17 16:25:42 crc kubenswrapper[4672]: I0217 16:25:42.366935 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell1-conductor-db-sync-4jq6r" podStartSLOduration=2.366915991 podStartE2EDuration="2.366915991s" podCreationTimestamp="2026-02-17 16:25:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:25:42.360373369 +0000 UTC m=+1351.114462101" watchObservedRunningTime="2026-02-17 16:25:42.366915991 +0000 UTC m=+1351.121004713" Feb 17 16:25:42 crc kubenswrapper[4672]: I0217 16:25:42.899147 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:25:42 crc kubenswrapper[4672]: I0217 16:25:42.907729 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:25:49 crc kubenswrapper[4672]: I0217 16:25:49.828400 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d578b86f9-6qmnb" Feb 17 16:25:49 crc kubenswrapper[4672]: I0217 16:25:49.937044 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76d4d7c9b7-6prnv"] Feb 17 16:25:49 crc kubenswrapper[4672]: I0217 16:25:49.937540 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-76d4d7c9b7-6prnv" podUID="8a2e45a5-f1bd-4f5b-82e6-f98168ece99a" containerName="dnsmasq-dns" containerID="cri-o://80910acdf1a097539503d5da5240eb78e6e4a94151a2cf7fda013b1c43c03528" gracePeriod=10 Feb 17 16:25:51 crc kubenswrapper[4672]: I0217 16:25:51.355292 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76d4d7c9b7-6prnv" Feb 17 16:25:51 crc kubenswrapper[4672]: I0217 16:25:51.414141 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8a2e45a5-f1bd-4f5b-82e6-f98168ece99a-dns-swift-storage-0\") pod \"8a2e45a5-f1bd-4f5b-82e6-f98168ece99a\" (UID: \"8a2e45a5-f1bd-4f5b-82e6-f98168ece99a\") " Feb 17 16:25:51 crc kubenswrapper[4672]: I0217 16:25:51.414196 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a2e45a5-f1bd-4f5b-82e6-f98168ece99a-dns-svc\") pod \"8a2e45a5-f1bd-4f5b-82e6-f98168ece99a\" (UID: \"8a2e45a5-f1bd-4f5b-82e6-f98168ece99a\") " Feb 17 16:25:51 crc kubenswrapper[4672]: I0217 16:25:51.414291 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5gv9\" (UniqueName: \"kubernetes.io/projected/8a2e45a5-f1bd-4f5b-82e6-f98168ece99a-kube-api-access-t5gv9\") pod \"8a2e45a5-f1bd-4f5b-82e6-f98168ece99a\" (UID: \"8a2e45a5-f1bd-4f5b-82e6-f98168ece99a\") " Feb 17 16:25:51 crc kubenswrapper[4672]: I0217 16:25:51.414359 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a2e45a5-f1bd-4f5b-82e6-f98168ece99a-ovsdbserver-nb\") pod \"8a2e45a5-f1bd-4f5b-82e6-f98168ece99a\" (UID: \"8a2e45a5-f1bd-4f5b-82e6-f98168ece99a\") " Feb 17 16:25:51 crc kubenswrapper[4672]: I0217 16:25:51.414429 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a2e45a5-f1bd-4f5b-82e6-f98168ece99a-config\") pod \"8a2e45a5-f1bd-4f5b-82e6-f98168ece99a\" (UID: \"8a2e45a5-f1bd-4f5b-82e6-f98168ece99a\") " Feb 17 16:25:51 crc kubenswrapper[4672]: I0217 16:25:51.414529 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/8a2e45a5-f1bd-4f5b-82e6-f98168ece99a-ovsdbserver-sb\") pod \"8a2e45a5-f1bd-4f5b-82e6-f98168ece99a\" (UID: \"8a2e45a5-f1bd-4f5b-82e6-f98168ece99a\") " Feb 17 16:25:51 crc kubenswrapper[4672]: I0217 16:25:51.420659 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a2e45a5-f1bd-4f5b-82e6-f98168ece99a-kube-api-access-t5gv9" (OuterVolumeSpecName: "kube-api-access-t5gv9") pod "8a2e45a5-f1bd-4f5b-82e6-f98168ece99a" (UID: "8a2e45a5-f1bd-4f5b-82e6-f98168ece99a"). InnerVolumeSpecName "kube-api-access-t5gv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:25:51 crc kubenswrapper[4672]: I0217 16:25:51.428739 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8d0299a6-8cb1-4a31-89ba-1bb27adb34d1","Type":"ContainerStarted","Data":"69d3cf0397eec94371f2a2242e32dc23fff5846097644cc3bb57d40c45253f1a"} Feb 17 16:25:51 crc kubenswrapper[4672]: I0217 16:25:51.431767 4672 generic.go:334] "Generic (PLEG): container finished" podID="8a2e45a5-f1bd-4f5b-82e6-f98168ece99a" containerID="80910acdf1a097539503d5da5240eb78e6e4a94151a2cf7fda013b1c43c03528" exitCode=0 Feb 17 16:25:51 crc kubenswrapper[4672]: I0217 16:25:51.431874 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76d4d7c9b7-6prnv" event={"ID":"8a2e45a5-f1bd-4f5b-82e6-f98168ece99a","Type":"ContainerDied","Data":"80910acdf1a097539503d5da5240eb78e6e4a94151a2cf7fda013b1c43c03528"} Feb 17 16:25:51 crc kubenswrapper[4672]: I0217 16:25:51.431898 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76d4d7c9b7-6prnv" event={"ID":"8a2e45a5-f1bd-4f5b-82e6-f98168ece99a","Type":"ContainerDied","Data":"4dbc418dede794706fca8c8ba8177efa88e7a3668bc123ec4547ce757e08ae06"} Feb 17 16:25:51 crc kubenswrapper[4672]: I0217 16:25:51.431913 4672 scope.go:117] "RemoveContainer" 
containerID="80910acdf1a097539503d5da5240eb78e6e4a94151a2cf7fda013b1c43c03528" Feb 17 16:25:51 crc kubenswrapper[4672]: I0217 16:25:51.432065 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76d4d7c9b7-6prnv" Feb 17 16:25:51 crc kubenswrapper[4672]: I0217 16:25:51.437860 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"81df8d25-8781-4cf7-ad12-ce90fb01aa1e","Type":"ContainerStarted","Data":"4bf47ccec39e63a8aa7b9b0365cf3fbacc9ec8137987972ea2594327c683378c"} Feb 17 16:25:51 crc kubenswrapper[4672]: I0217 16:25:51.438160 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="81df8d25-8781-4cf7-ad12-ce90fb01aa1e" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://4bf47ccec39e63a8aa7b9b0365cf3fbacc9ec8137987972ea2594327c683378c" gracePeriod=30 Feb 17 16:25:51 crc kubenswrapper[4672]: I0217 16:25:51.446567 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=4.119711277 podStartE2EDuration="12.446552017s" podCreationTimestamp="2026-02-17 16:25:39 +0000 UTC" firstStartedPulling="2026-02-17 16:25:40.176654219 +0000 UTC m=+1348.930742951" lastFinishedPulling="2026-02-17 16:25:48.503494959 +0000 UTC m=+1357.257583691" observedRunningTime="2026-02-17 16:25:51.444980376 +0000 UTC m=+1360.199069108" watchObservedRunningTime="2026-02-17 16:25:51.446552017 +0000 UTC m=+1360.200640749" Feb 17 16:25:51 crc kubenswrapper[4672]: I0217 16:25:51.458251 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3d653c18-77bd-4bbd-9ad3-d2469bfec0c9","Type":"ContainerStarted","Data":"d3371ee6d5a9f34eeb4c4774f85a56ac095c44e30b5991d469f93500165cf281"} Feb 17 16:25:51 crc kubenswrapper[4672]: I0217 16:25:51.496576 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/configmap/8a2e45a5-f1bd-4f5b-82e6-f98168ece99a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8a2e45a5-f1bd-4f5b-82e6-f98168ece99a" (UID: "8a2e45a5-f1bd-4f5b-82e6-f98168ece99a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:25:51 crc kubenswrapper[4672]: I0217 16:25:51.498553 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a2e45a5-f1bd-4f5b-82e6-f98168ece99a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8a2e45a5-f1bd-4f5b-82e6-f98168ece99a" (UID: "8a2e45a5-f1bd-4f5b-82e6-f98168ece99a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:25:51 crc kubenswrapper[4672]: I0217 16:25:51.508722 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a2e45a5-f1bd-4f5b-82e6-f98168ece99a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8a2e45a5-f1bd-4f5b-82e6-f98168ece99a" (UID: "8a2e45a5-f1bd-4f5b-82e6-f98168ece99a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:25:51 crc kubenswrapper[4672]: I0217 16:25:51.514955 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a2e45a5-f1bd-4f5b-82e6-f98168ece99a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8a2e45a5-f1bd-4f5b-82e6-f98168ece99a" (UID: "8a2e45a5-f1bd-4f5b-82e6-f98168ece99a"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:25:51 crc kubenswrapper[4672]: I0217 16:25:51.515850 4672 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a2e45a5-f1bd-4f5b-82e6-f98168ece99a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:51 crc kubenswrapper[4672]: I0217 16:25:51.515868 4672 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a2e45a5-f1bd-4f5b-82e6-f98168ece99a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:51 crc kubenswrapper[4672]: I0217 16:25:51.515877 4672 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8a2e45a5-f1bd-4f5b-82e6-f98168ece99a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:51 crc kubenswrapper[4672]: I0217 16:25:51.515886 4672 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a2e45a5-f1bd-4f5b-82e6-f98168ece99a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:51 crc kubenswrapper[4672]: I0217 16:25:51.515895 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5gv9\" (UniqueName: \"kubernetes.io/projected/8a2e45a5-f1bd-4f5b-82e6-f98168ece99a-kube-api-access-t5gv9\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:51 crc kubenswrapper[4672]: I0217 16:25:51.522415 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a2e45a5-f1bd-4f5b-82e6-f98168ece99a-config" (OuterVolumeSpecName: "config") pod "8a2e45a5-f1bd-4f5b-82e6-f98168ece99a" (UID: "8a2e45a5-f1bd-4f5b-82e6-f98168ece99a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:25:51 crc kubenswrapper[4672]: I0217 16:25:51.617783 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a2e45a5-f1bd-4f5b-82e6-f98168ece99a-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:51 crc kubenswrapper[4672]: I0217 16:25:51.619201 4672 scope.go:117] "RemoveContainer" containerID="daddb4d78fbf72310ed73e7caa36ff0ac12e74c7ae0cefdcc0f7d05c4cb2f929" Feb 17 16:25:51 crc kubenswrapper[4672]: I0217 16:25:51.702604 4672 scope.go:117] "RemoveContainer" containerID="80910acdf1a097539503d5da5240eb78e6e4a94151a2cf7fda013b1c43c03528" Feb 17 16:25:51 crc kubenswrapper[4672]: E0217 16:25:51.703305 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80910acdf1a097539503d5da5240eb78e6e4a94151a2cf7fda013b1c43c03528\": container with ID starting with 80910acdf1a097539503d5da5240eb78e6e4a94151a2cf7fda013b1c43c03528 not found: ID does not exist" containerID="80910acdf1a097539503d5da5240eb78e6e4a94151a2cf7fda013b1c43c03528" Feb 17 16:25:51 crc kubenswrapper[4672]: I0217 16:25:51.703352 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80910acdf1a097539503d5da5240eb78e6e4a94151a2cf7fda013b1c43c03528"} err="failed to get container status \"80910acdf1a097539503d5da5240eb78e6e4a94151a2cf7fda013b1c43c03528\": rpc error: code = NotFound desc = could not find container \"80910acdf1a097539503d5da5240eb78e6e4a94151a2cf7fda013b1c43c03528\": container with ID starting with 80910acdf1a097539503d5da5240eb78e6e4a94151a2cf7fda013b1c43c03528 not found: ID does not exist" Feb 17 16:25:51 crc kubenswrapper[4672]: I0217 16:25:51.703380 4672 scope.go:117] "RemoveContainer" containerID="daddb4d78fbf72310ed73e7caa36ff0ac12e74c7ae0cefdcc0f7d05c4cb2f929" Feb 17 16:25:51 crc kubenswrapper[4672]: E0217 16:25:51.703796 4672 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"daddb4d78fbf72310ed73e7caa36ff0ac12e74c7ae0cefdcc0f7d05c4cb2f929\": container with ID starting with daddb4d78fbf72310ed73e7caa36ff0ac12e74c7ae0cefdcc0f7d05c4cb2f929 not found: ID does not exist" containerID="daddb4d78fbf72310ed73e7caa36ff0ac12e74c7ae0cefdcc0f7d05c4cb2f929" Feb 17 16:25:51 crc kubenswrapper[4672]: I0217 16:25:51.703838 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daddb4d78fbf72310ed73e7caa36ff0ac12e74c7ae0cefdcc0f7d05c4cb2f929"} err="failed to get container status \"daddb4d78fbf72310ed73e7caa36ff0ac12e74c7ae0cefdcc0f7d05c4cb2f929\": rpc error: code = NotFound desc = could not find container \"daddb4d78fbf72310ed73e7caa36ff0ac12e74c7ae0cefdcc0f7d05c4cb2f929\": container with ID starting with daddb4d78fbf72310ed73e7caa36ff0ac12e74c7ae0cefdcc0f7d05c4cb2f929 not found: ID does not exist" Feb 17 16:25:51 crc kubenswrapper[4672]: I0217 16:25:51.765338 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=4.893532378 podStartE2EDuration="12.765314091s" podCreationTimestamp="2026-02-17 16:25:39 +0000 UTC" firstStartedPulling="2026-02-17 16:25:40.633195605 +0000 UTC m=+1349.387284337" lastFinishedPulling="2026-02-17 16:25:48.504977318 +0000 UTC m=+1357.259066050" observedRunningTime="2026-02-17 16:25:51.460842114 +0000 UTC m=+1360.214930846" watchObservedRunningTime="2026-02-17 16:25:51.765314091 +0000 UTC m=+1360.519402823" Feb 17 16:25:51 crc kubenswrapper[4672]: I0217 16:25:51.770404 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76d4d7c9b7-6prnv"] Feb 17 16:25:51 crc kubenswrapper[4672]: I0217 16:25:51.779204 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76d4d7c9b7-6prnv"] Feb 17 16:25:51 crc kubenswrapper[4672]: I0217 16:25:51.985851 4672 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a2e45a5-f1bd-4f5b-82e6-f98168ece99a" path="/var/lib/kubelet/pods/8a2e45a5-f1bd-4f5b-82e6-f98168ece99a/volumes" Feb 17 16:25:52 crc kubenswrapper[4672]: I0217 16:25:52.469258 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3d653c18-77bd-4bbd-9ad3-d2469bfec0c9","Type":"ContainerStarted","Data":"abe358f38424710a3a48a492a1281cdacdcc4c1f8d664996e01e07e665c6b3b7"} Feb 17 16:25:52 crc kubenswrapper[4672]: I0217 16:25:52.469813 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3d653c18-77bd-4bbd-9ad3-d2469bfec0c9" containerName="nova-metadata-log" containerID="cri-o://d3371ee6d5a9f34eeb4c4774f85a56ac095c44e30b5991d469f93500165cf281" gracePeriod=30 Feb 17 16:25:52 crc kubenswrapper[4672]: I0217 16:25:52.469901 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3d653c18-77bd-4bbd-9ad3-d2469bfec0c9" containerName="nova-metadata-metadata" containerID="cri-o://abe358f38424710a3a48a492a1281cdacdcc4c1f8d664996e01e07e665c6b3b7" gracePeriod=30 Feb 17 16:25:52 crc kubenswrapper[4672]: I0217 16:25:52.473040 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8012bab4-6bda-4be8-b98b-4c46b99201e4","Type":"ContainerStarted","Data":"9f361a03c53d9f50073ff05525e6ea98f10745f0a26ea97652a9f9aee183f86a"} Feb 17 16:25:52 crc kubenswrapper[4672]: I0217 16:25:52.473278 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8012bab4-6bda-4be8-b98b-4c46b99201e4","Type":"ContainerStarted","Data":"f2b2451a4a4376e9dd0ad26d598659806b0503b00f24bbb839ca9bede738e149"} Feb 17 16:25:52 crc kubenswrapper[4672]: I0217 16:25:52.477039 4672 generic.go:334] "Generic (PLEG): container finished" podID="d89089a9-0ddc-4c81-a639-dd9dcf7e9163" 
containerID="b493038888d06908685ef6a56b380cf4b3ea8e5e5b0673760d178413b0c5d528" exitCode=0 Feb 17 16:25:52 crc kubenswrapper[4672]: I0217 16:25:52.477098 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-c5jfb" event={"ID":"d89089a9-0ddc-4c81-a639-dd9dcf7e9163","Type":"ContainerDied","Data":"b493038888d06908685ef6a56b380cf4b3ea8e5e5b0673760d178413b0c5d528"} Feb 17 16:25:52 crc kubenswrapper[4672]: I0217 16:25:52.493409 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=5.667148852 podStartE2EDuration="13.493394695s" podCreationTimestamp="2026-02-17 16:25:39 +0000 UTC" firstStartedPulling="2026-02-17 16:25:40.685713049 +0000 UTC m=+1349.439801781" lastFinishedPulling="2026-02-17 16:25:48.511958852 +0000 UTC m=+1357.266047624" observedRunningTime="2026-02-17 16:25:52.491230998 +0000 UTC m=+1361.245319730" watchObservedRunningTime="2026-02-17 16:25:52.493394695 +0000 UTC m=+1361.247483417" Feb 17 16:25:52 crc kubenswrapper[4672]: I0217 16:25:52.517170 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.304341112 podStartE2EDuration="13.517148031s" podCreationTimestamp="2026-02-17 16:25:39 +0000 UTC" firstStartedPulling="2026-02-17 16:25:40.685276568 +0000 UTC m=+1349.439365300" lastFinishedPulling="2026-02-17 16:25:50.898083487 +0000 UTC m=+1359.652172219" observedRunningTime="2026-02-17 16:25:52.511965385 +0000 UTC m=+1361.266054137" watchObservedRunningTime="2026-02-17 16:25:52.517148031 +0000 UTC m=+1361.271236763" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.272613 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.470153 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d653c18-77bd-4bbd-9ad3-d2469bfec0c9-config-data\") pod \"3d653c18-77bd-4bbd-9ad3-d2469bfec0c9\" (UID: \"3d653c18-77bd-4bbd-9ad3-d2469bfec0c9\") " Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.470290 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d653c18-77bd-4bbd-9ad3-d2469bfec0c9-combined-ca-bundle\") pod \"3d653c18-77bd-4bbd-9ad3-d2469bfec0c9\" (UID: \"3d653c18-77bd-4bbd-9ad3-d2469bfec0c9\") " Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.470363 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d653c18-77bd-4bbd-9ad3-d2469bfec0c9-logs\") pod \"3d653c18-77bd-4bbd-9ad3-d2469bfec0c9\" (UID: \"3d653c18-77bd-4bbd-9ad3-d2469bfec0c9\") " Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.470408 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tktdf\" (UniqueName: \"kubernetes.io/projected/3d653c18-77bd-4bbd-9ad3-d2469bfec0c9-kube-api-access-tktdf\") pod \"3d653c18-77bd-4bbd-9ad3-d2469bfec0c9\" (UID: \"3d653c18-77bd-4bbd-9ad3-d2469bfec0c9\") " Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.472162 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d653c18-77bd-4bbd-9ad3-d2469bfec0c9-logs" (OuterVolumeSpecName: "logs") pod "3d653c18-77bd-4bbd-9ad3-d2469bfec0c9" (UID: "3d653c18-77bd-4bbd-9ad3-d2469bfec0c9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.476422 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d653c18-77bd-4bbd-9ad3-d2469bfec0c9-kube-api-access-tktdf" (OuterVolumeSpecName: "kube-api-access-tktdf") pod "3d653c18-77bd-4bbd-9ad3-d2469bfec0c9" (UID: "3d653c18-77bd-4bbd-9ad3-d2469bfec0c9"). InnerVolumeSpecName "kube-api-access-tktdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.493685 4672 generic.go:334] "Generic (PLEG): container finished" podID="3d653c18-77bd-4bbd-9ad3-d2469bfec0c9" containerID="abe358f38424710a3a48a492a1281cdacdcc4c1f8d664996e01e07e665c6b3b7" exitCode=0 Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.493721 4672 generic.go:334] "Generic (PLEG): container finished" podID="3d653c18-77bd-4bbd-9ad3-d2469bfec0c9" containerID="d3371ee6d5a9f34eeb4c4774f85a56ac095c44e30b5991d469f93500165cf281" exitCode=143 Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.493911 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.494682 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3d653c18-77bd-4bbd-9ad3-d2469bfec0c9","Type":"ContainerDied","Data":"abe358f38424710a3a48a492a1281cdacdcc4c1f8d664996e01e07e665c6b3b7"} Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.494718 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3d653c18-77bd-4bbd-9ad3-d2469bfec0c9","Type":"ContainerDied","Data":"d3371ee6d5a9f34eeb4c4774f85a56ac095c44e30b5991d469f93500165cf281"} Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.494733 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3d653c18-77bd-4bbd-9ad3-d2469bfec0c9","Type":"ContainerDied","Data":"31fdc54fdd2bb146796876f8092573187641dafc8a73a4a6267955ad17f35b2e"} Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.494750 4672 scope.go:117] "RemoveContainer" containerID="abe358f38424710a3a48a492a1281cdacdcc4c1f8d664996e01e07e665c6b3b7" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.503027 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d653c18-77bd-4bbd-9ad3-d2469bfec0c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d653c18-77bd-4bbd-9ad3-d2469bfec0c9" (UID: "3d653c18-77bd-4bbd-9ad3-d2469bfec0c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.513246 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d653c18-77bd-4bbd-9ad3-d2469bfec0c9-config-data" (OuterVolumeSpecName: "config-data") pod "3d653c18-77bd-4bbd-9ad3-d2469bfec0c9" (UID: "3d653c18-77bd-4bbd-9ad3-d2469bfec0c9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.575192 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d653c18-77bd-4bbd-9ad3-d2469bfec0c9-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.575247 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d653c18-77bd-4bbd-9ad3-d2469bfec0c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.575268 4672 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d653c18-77bd-4bbd-9ad3-d2469bfec0c9-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.575285 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tktdf\" (UniqueName: \"kubernetes.io/projected/3d653c18-77bd-4bbd-9ad3-d2469bfec0c9-kube-api-access-tktdf\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.617522 4672 scope.go:117] "RemoveContainer" containerID="d3371ee6d5a9f34eeb4c4774f85a56ac095c44e30b5991d469f93500165cf281" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.655135 4672 scope.go:117] "RemoveContainer" containerID="abe358f38424710a3a48a492a1281cdacdcc4c1f8d664996e01e07e665c6b3b7" Feb 17 16:25:53 crc kubenswrapper[4672]: E0217 16:25:53.656229 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abe358f38424710a3a48a492a1281cdacdcc4c1f8d664996e01e07e665c6b3b7\": container with ID starting with abe358f38424710a3a48a492a1281cdacdcc4c1f8d664996e01e07e665c6b3b7 not found: ID does not exist" containerID="abe358f38424710a3a48a492a1281cdacdcc4c1f8d664996e01e07e665c6b3b7" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 
16:25:53.656273 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abe358f38424710a3a48a492a1281cdacdcc4c1f8d664996e01e07e665c6b3b7"} err="failed to get container status \"abe358f38424710a3a48a492a1281cdacdcc4c1f8d664996e01e07e665c6b3b7\": rpc error: code = NotFound desc = could not find container \"abe358f38424710a3a48a492a1281cdacdcc4c1f8d664996e01e07e665c6b3b7\": container with ID starting with abe358f38424710a3a48a492a1281cdacdcc4c1f8d664996e01e07e665c6b3b7 not found: ID does not exist" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.656303 4672 scope.go:117] "RemoveContainer" containerID="d3371ee6d5a9f34eeb4c4774f85a56ac095c44e30b5991d469f93500165cf281" Feb 17 16:25:53 crc kubenswrapper[4672]: E0217 16:25:53.656642 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3371ee6d5a9f34eeb4c4774f85a56ac095c44e30b5991d469f93500165cf281\": container with ID starting with d3371ee6d5a9f34eeb4c4774f85a56ac095c44e30b5991d469f93500165cf281 not found: ID does not exist" containerID="d3371ee6d5a9f34eeb4c4774f85a56ac095c44e30b5991d469f93500165cf281" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.656680 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3371ee6d5a9f34eeb4c4774f85a56ac095c44e30b5991d469f93500165cf281"} err="failed to get container status \"d3371ee6d5a9f34eeb4c4774f85a56ac095c44e30b5991d469f93500165cf281\": rpc error: code = NotFound desc = could not find container \"d3371ee6d5a9f34eeb4c4774f85a56ac095c44e30b5991d469f93500165cf281\": container with ID starting with d3371ee6d5a9f34eeb4c4774f85a56ac095c44e30b5991d469f93500165cf281 not found: ID does not exist" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.656708 4672 scope.go:117] "RemoveContainer" containerID="abe358f38424710a3a48a492a1281cdacdcc4c1f8d664996e01e07e665c6b3b7" Feb 17 16:25:53 crc 
kubenswrapper[4672]: I0217 16:25:53.662670 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abe358f38424710a3a48a492a1281cdacdcc4c1f8d664996e01e07e665c6b3b7"} err="failed to get container status \"abe358f38424710a3a48a492a1281cdacdcc4c1f8d664996e01e07e665c6b3b7\": rpc error: code = NotFound desc = could not find container \"abe358f38424710a3a48a492a1281cdacdcc4c1f8d664996e01e07e665c6b3b7\": container with ID starting with abe358f38424710a3a48a492a1281cdacdcc4c1f8d664996e01e07e665c6b3b7 not found: ID does not exist" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.662710 4672 scope.go:117] "RemoveContainer" containerID="d3371ee6d5a9f34eeb4c4774f85a56ac095c44e30b5991d469f93500165cf281" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.663401 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3371ee6d5a9f34eeb4c4774f85a56ac095c44e30b5991d469f93500165cf281"} err="failed to get container status \"d3371ee6d5a9f34eeb4c4774f85a56ac095c44e30b5991d469f93500165cf281\": rpc error: code = NotFound desc = could not find container \"d3371ee6d5a9f34eeb4c4774f85a56ac095c44e30b5991d469f93500165cf281\": container with ID starting with d3371ee6d5a9f34eeb4c4774f85a56ac095c44e30b5991d469f93500165cf281 not found: ID does not exist" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.839966 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.852498 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.877666 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:25:53 crc kubenswrapper[4672]: E0217 16:25:53.878223 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a2e45a5-f1bd-4f5b-82e6-f98168ece99a" containerName="init" Feb 17 
16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.878242 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a2e45a5-f1bd-4f5b-82e6-f98168ece99a" containerName="init" Feb 17 16:25:53 crc kubenswrapper[4672]: E0217 16:25:53.878268 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d653c18-77bd-4bbd-9ad3-d2469bfec0c9" containerName="nova-metadata-log" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.878279 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d653c18-77bd-4bbd-9ad3-d2469bfec0c9" containerName="nova-metadata-log" Feb 17 16:25:53 crc kubenswrapper[4672]: E0217 16:25:53.878292 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a2e45a5-f1bd-4f5b-82e6-f98168ece99a" containerName="dnsmasq-dns" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.878299 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a2e45a5-f1bd-4f5b-82e6-f98168ece99a" containerName="dnsmasq-dns" Feb 17 16:25:53 crc kubenswrapper[4672]: E0217 16:25:53.878320 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d653c18-77bd-4bbd-9ad3-d2469bfec0c9" containerName="nova-metadata-metadata" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.878327 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d653c18-77bd-4bbd-9ad3-d2469bfec0c9" containerName="nova-metadata-metadata" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.878576 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a2e45a5-f1bd-4f5b-82e6-f98168ece99a" containerName="dnsmasq-dns" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.878598 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d653c18-77bd-4bbd-9ad3-d2469bfec0c9" containerName="nova-metadata-metadata" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.878614 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d653c18-77bd-4bbd-9ad3-d2469bfec0c9" containerName="nova-metadata-log" Feb 17 
16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.881277 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.883469 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/845c0587-4941-4756-b924-2d6078264b2c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"845c0587-4941-4756-b924-2d6078264b2c\") " pod="openstack/nova-metadata-0" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.883631 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/845c0587-4941-4756-b924-2d6078264b2c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"845c0587-4941-4756-b924-2d6078264b2c\") " pod="openstack/nova-metadata-0" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.883669 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/845c0587-4941-4756-b924-2d6078264b2c-config-data\") pod \"nova-metadata-0\" (UID: \"845c0587-4941-4756-b924-2d6078264b2c\") " pod="openstack/nova-metadata-0" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.883690 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/845c0587-4941-4756-b924-2d6078264b2c-logs\") pod \"nova-metadata-0\" (UID: \"845c0587-4941-4756-b924-2d6078264b2c\") " pod="openstack/nova-metadata-0" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.883745 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2mxc\" (UniqueName: \"kubernetes.io/projected/845c0587-4941-4756-b924-2d6078264b2c-kube-api-access-s2mxc\") pod \"nova-metadata-0\" 
(UID: \"845c0587-4941-4756-b924-2d6078264b2c\") " pod="openstack/nova-metadata-0" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.885238 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.886122 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.887943 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.966990 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d653c18-77bd-4bbd-9ad3-d2469bfec0c9" path="/var/lib/kubelet/pods/3d653c18-77bd-4bbd-9ad3-d2469bfec0c9/volumes" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.986900 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/845c0587-4941-4756-b924-2d6078264b2c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"845c0587-4941-4756-b924-2d6078264b2c\") " pod="openstack/nova-metadata-0" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.987574 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/845c0587-4941-4756-b924-2d6078264b2c-config-data\") pod \"nova-metadata-0\" (UID: \"845c0587-4941-4756-b924-2d6078264b2c\") " pod="openstack/nova-metadata-0" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.987607 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/845c0587-4941-4756-b924-2d6078264b2c-logs\") pod \"nova-metadata-0\" (UID: \"845c0587-4941-4756-b924-2d6078264b2c\") " pod="openstack/nova-metadata-0" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.987767 4672 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-s2mxc\" (UniqueName: \"kubernetes.io/projected/845c0587-4941-4756-b924-2d6078264b2c-kube-api-access-s2mxc\") pod \"nova-metadata-0\" (UID: \"845c0587-4941-4756-b924-2d6078264b2c\") " pod="openstack/nova-metadata-0" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.987899 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/845c0587-4941-4756-b924-2d6078264b2c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"845c0587-4941-4756-b924-2d6078264b2c\") " pod="openstack/nova-metadata-0" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.988549 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/845c0587-4941-4756-b924-2d6078264b2c-logs\") pod \"nova-metadata-0\" (UID: \"845c0587-4941-4756-b924-2d6078264b2c\") " pod="openstack/nova-metadata-0" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.992205 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/845c0587-4941-4756-b924-2d6078264b2c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"845c0587-4941-4756-b924-2d6078264b2c\") " pod="openstack/nova-metadata-0" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.992251 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/845c0587-4941-4756-b924-2d6078264b2c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"845c0587-4941-4756-b924-2d6078264b2c\") " pod="openstack/nova-metadata-0" Feb 17 16:25:53 crc kubenswrapper[4672]: I0217 16:25:53.992370 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/845c0587-4941-4756-b924-2d6078264b2c-config-data\") pod \"nova-metadata-0\" (UID: 
\"845c0587-4941-4756-b924-2d6078264b2c\") " pod="openstack/nova-metadata-0" Feb 17 16:25:54 crc kubenswrapper[4672]: I0217 16:25:54.006860 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2mxc\" (UniqueName: \"kubernetes.io/projected/845c0587-4941-4756-b924-2d6078264b2c-kube-api-access-s2mxc\") pod \"nova-metadata-0\" (UID: \"845c0587-4941-4756-b924-2d6078264b2c\") " pod="openstack/nova-metadata-0" Feb 17 16:25:54 crc kubenswrapper[4672]: I0217 16:25:54.040179 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-c5jfb" Feb 17 16:25:54 crc kubenswrapper[4672]: I0217 16:25:54.089118 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d89089a9-0ddc-4c81-a639-dd9dcf7e9163-config-data\") pod \"d89089a9-0ddc-4c81-a639-dd9dcf7e9163\" (UID: \"d89089a9-0ddc-4c81-a639-dd9dcf7e9163\") " Feb 17 16:25:54 crc kubenswrapper[4672]: I0217 16:25:54.089207 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d89089a9-0ddc-4c81-a639-dd9dcf7e9163-combined-ca-bundle\") pod \"d89089a9-0ddc-4c81-a639-dd9dcf7e9163\" (UID: \"d89089a9-0ddc-4c81-a639-dd9dcf7e9163\") " Feb 17 16:25:54 crc kubenswrapper[4672]: I0217 16:25:54.089526 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dr85k\" (UniqueName: \"kubernetes.io/projected/d89089a9-0ddc-4c81-a639-dd9dcf7e9163-kube-api-access-dr85k\") pod \"d89089a9-0ddc-4c81-a639-dd9dcf7e9163\" (UID: \"d89089a9-0ddc-4c81-a639-dd9dcf7e9163\") " Feb 17 16:25:54 crc kubenswrapper[4672]: I0217 16:25:54.089585 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d89089a9-0ddc-4c81-a639-dd9dcf7e9163-scripts\") pod \"d89089a9-0ddc-4c81-a639-dd9dcf7e9163\" 
(UID: \"d89089a9-0ddc-4c81-a639-dd9dcf7e9163\") " Feb 17 16:25:54 crc kubenswrapper[4672]: I0217 16:25:54.093099 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d89089a9-0ddc-4c81-a639-dd9dcf7e9163-kube-api-access-dr85k" (OuterVolumeSpecName: "kube-api-access-dr85k") pod "d89089a9-0ddc-4c81-a639-dd9dcf7e9163" (UID: "d89089a9-0ddc-4c81-a639-dd9dcf7e9163"). InnerVolumeSpecName "kube-api-access-dr85k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:25:54 crc kubenswrapper[4672]: I0217 16:25:54.093530 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d89089a9-0ddc-4c81-a639-dd9dcf7e9163-scripts" (OuterVolumeSpecName: "scripts") pod "d89089a9-0ddc-4c81-a639-dd9dcf7e9163" (UID: "d89089a9-0ddc-4c81-a639-dd9dcf7e9163"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:54 crc kubenswrapper[4672]: I0217 16:25:54.154657 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d89089a9-0ddc-4c81-a639-dd9dcf7e9163-config-data" (OuterVolumeSpecName: "config-data") pod "d89089a9-0ddc-4c81-a639-dd9dcf7e9163" (UID: "d89089a9-0ddc-4c81-a639-dd9dcf7e9163"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:54 crc kubenswrapper[4672]: I0217 16:25:54.157598 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d89089a9-0ddc-4c81-a639-dd9dcf7e9163-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d89089a9-0ddc-4c81-a639-dd9dcf7e9163" (UID: "d89089a9-0ddc-4c81-a639-dd9dcf7e9163"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:54 crc kubenswrapper[4672]: I0217 16:25:54.194043 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d89089a9-0ddc-4c81-a639-dd9dcf7e9163-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:54 crc kubenswrapper[4672]: I0217 16:25:54.194085 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d89089a9-0ddc-4c81-a639-dd9dcf7e9163-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:54 crc kubenswrapper[4672]: I0217 16:25:54.194109 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dr85k\" (UniqueName: \"kubernetes.io/projected/d89089a9-0ddc-4c81-a639-dd9dcf7e9163-kube-api-access-dr85k\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:54 crc kubenswrapper[4672]: I0217 16:25:54.194119 4672 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d89089a9-0ddc-4c81-a639-dd9dcf7e9163-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:54 crc kubenswrapper[4672]: I0217 16:25:54.250363 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:25:54 crc kubenswrapper[4672]: I0217 16:25:54.360741 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 17 16:25:54 crc kubenswrapper[4672]: I0217 16:25:54.510901 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-c5jfb" event={"ID":"d89089a9-0ddc-4c81-a639-dd9dcf7e9163","Type":"ContainerDied","Data":"43eca27d58428f15146baed091ac26a14dd8f2fb9e38f540d2ab0cbd9f3a2d9f"} Feb 17 16:25:54 crc kubenswrapper[4672]: I0217 16:25:54.510941 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43eca27d58428f15146baed091ac26a14dd8f2fb9e38f540d2ab0cbd9f3a2d9f" Feb 17 16:25:54 crc kubenswrapper[4672]: I0217 16:25:54.510995 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-c5jfb" Feb 17 16:25:54 crc kubenswrapper[4672]: I0217 16:25:54.731706 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:25:54 crc kubenswrapper[4672]: I0217 16:25:54.732176 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8012bab4-6bda-4be8-b98b-4c46b99201e4" containerName="nova-api-log" containerID="cri-o://f2b2451a4a4376e9dd0ad26d598659806b0503b00f24bbb839ca9bede738e149" gracePeriod=30 Feb 17 16:25:54 crc kubenswrapper[4672]: I0217 16:25:54.732765 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8012bab4-6bda-4be8-b98b-4c46b99201e4" containerName="nova-api-api" containerID="cri-o://9f361a03c53d9f50073ff05525e6ea98f10745f0a26ea97652a9f9aee183f86a" gracePeriod=30 Feb 17 16:25:54 crc kubenswrapper[4672]: I0217 16:25:54.734347 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:25:54 crc kubenswrapper[4672]: I0217 
16:25:54.761887 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:25:54 crc kubenswrapper[4672]: I0217 16:25:54.763089 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="8d0299a6-8cb1-4a31-89ba-1bb27adb34d1" containerName="nova-scheduler-scheduler" containerID="cri-o://69d3cf0397eec94371f2a2242e32dc23fff5846097644cc3bb57d40c45253f1a" gracePeriod=30 Feb 17 16:25:54 crc kubenswrapper[4672]: I0217 16:25:54.776858 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:25:54 crc kubenswrapper[4672]: I0217 16:25:54.820948 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:25:55 crc kubenswrapper[4672]: I0217 16:25:55.541817 4672 generic.go:334] "Generic (PLEG): container finished" podID="8012bab4-6bda-4be8-b98b-4c46b99201e4" containerID="9f361a03c53d9f50073ff05525e6ea98f10745f0a26ea97652a9f9aee183f86a" exitCode=0 Feb 17 16:25:55 crc kubenswrapper[4672]: I0217 16:25:55.542129 4672 generic.go:334] "Generic (PLEG): container finished" podID="8012bab4-6bda-4be8-b98b-4c46b99201e4" containerID="f2b2451a4a4376e9dd0ad26d598659806b0503b00f24bbb839ca9bede738e149" exitCode=143 Feb 17 16:25:55 crc kubenswrapper[4672]: I0217 16:25:55.542171 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8012bab4-6bda-4be8-b98b-4c46b99201e4","Type":"ContainerDied","Data":"9f361a03c53d9f50073ff05525e6ea98f10745f0a26ea97652a9f9aee183f86a"} Feb 17 16:25:55 crc kubenswrapper[4672]: I0217 16:25:55.542197 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8012bab4-6bda-4be8-b98b-4c46b99201e4","Type":"ContainerDied","Data":"f2b2451a4a4376e9dd0ad26d598659806b0503b00f24bbb839ca9bede738e149"} Feb 17 16:25:55 crc kubenswrapper[4672]: I0217 16:25:55.542205 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"8012bab4-6bda-4be8-b98b-4c46b99201e4","Type":"ContainerDied","Data":"99cd568c6fb7f49087bea8dec8b2ace36cfc1e5302e5bf84f1cee755ddf2babb"} Feb 17 16:25:55 crc kubenswrapper[4672]: I0217 16:25:55.542214 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99cd568c6fb7f49087bea8dec8b2ace36cfc1e5302e5bf84f1cee755ddf2babb" Feb 17 16:25:55 crc kubenswrapper[4672]: I0217 16:25:55.544730 4672 generic.go:334] "Generic (PLEG): container finished" podID="8d0299a6-8cb1-4a31-89ba-1bb27adb34d1" containerID="69d3cf0397eec94371f2a2242e32dc23fff5846097644cc3bb57d40c45253f1a" exitCode=0 Feb 17 16:25:55 crc kubenswrapper[4672]: I0217 16:25:55.544783 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8d0299a6-8cb1-4a31-89ba-1bb27adb34d1","Type":"ContainerDied","Data":"69d3cf0397eec94371f2a2242e32dc23fff5846097644cc3bb57d40c45253f1a"} Feb 17 16:25:55 crc kubenswrapper[4672]: I0217 16:25:55.549723 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:25:55 crc kubenswrapper[4672]: I0217 16:25:55.552061 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"845c0587-4941-4756-b924-2d6078264b2c","Type":"ContainerStarted","Data":"60dc8e697466a61baf65f124c8713bd32875abe95bc1256b86d56080912285ad"} Feb 17 16:25:55 crc kubenswrapper[4672]: I0217 16:25:55.552114 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"845c0587-4941-4756-b924-2d6078264b2c","Type":"ContainerStarted","Data":"40744341cf5eab09c169fde62f25f1fb9d702a47737ca68d9e17ceaf32818e79"} Feb 17 16:25:55 crc kubenswrapper[4672]: I0217 16:25:55.552127 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"845c0587-4941-4756-b924-2d6078264b2c","Type":"ContainerStarted","Data":"b1c86a4ce16877115c2fe559db64ceb26f88a7556a025d21a2537948ef96af02"} Feb 17 16:25:55 crc kubenswrapper[4672]: I0217 16:25:55.552226 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="845c0587-4941-4756-b924-2d6078264b2c" containerName="nova-metadata-log" containerID="cri-o://40744341cf5eab09c169fde62f25f1fb9d702a47737ca68d9e17ceaf32818e79" gracePeriod=30 Feb 17 16:25:55 crc kubenswrapper[4672]: I0217 16:25:55.552369 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="845c0587-4941-4756-b924-2d6078264b2c" containerName="nova-metadata-metadata" containerID="cri-o://60dc8e697466a61baf65f124c8713bd32875abe95bc1256b86d56080912285ad" gracePeriod=30 Feb 17 16:25:55 crc kubenswrapper[4672]: I0217 16:25:55.596378 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.596359459 podStartE2EDuration="2.596359459s" podCreationTimestamp="2026-02-17 16:25:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:25:55.584967868 +0000 UTC m=+1364.339056590" watchObservedRunningTime="2026-02-17 16:25:55.596359459 +0000 UTC m=+1364.350448191" Feb 17 16:25:55 crc kubenswrapper[4672]: I0217 16:25:55.733077 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8012bab4-6bda-4be8-b98b-4c46b99201e4-logs\") pod \"8012bab4-6bda-4be8-b98b-4c46b99201e4\" (UID: \"8012bab4-6bda-4be8-b98b-4c46b99201e4\") " Feb 17 16:25:55 crc kubenswrapper[4672]: I0217 16:25:55.733177 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8012bab4-6bda-4be8-b98b-4c46b99201e4-combined-ca-bundle\") pod \"8012bab4-6bda-4be8-b98b-4c46b99201e4\" (UID: \"8012bab4-6bda-4be8-b98b-4c46b99201e4\") " Feb 17 16:25:55 crc kubenswrapper[4672]: I0217 16:25:55.733396 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4dn4\" (UniqueName: \"kubernetes.io/projected/8012bab4-6bda-4be8-b98b-4c46b99201e4-kube-api-access-n4dn4\") pod \"8012bab4-6bda-4be8-b98b-4c46b99201e4\" (UID: \"8012bab4-6bda-4be8-b98b-4c46b99201e4\") " Feb 17 16:25:55 crc kubenswrapper[4672]: I0217 16:25:55.733440 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8012bab4-6bda-4be8-b98b-4c46b99201e4-logs" (OuterVolumeSpecName: "logs") pod "8012bab4-6bda-4be8-b98b-4c46b99201e4" (UID: "8012bab4-6bda-4be8-b98b-4c46b99201e4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:25:55 crc kubenswrapper[4672]: I0217 16:25:55.733482 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8012bab4-6bda-4be8-b98b-4c46b99201e4-config-data\") pod \"8012bab4-6bda-4be8-b98b-4c46b99201e4\" (UID: \"8012bab4-6bda-4be8-b98b-4c46b99201e4\") " Feb 17 16:25:55 crc kubenswrapper[4672]: I0217 16:25:55.734400 4672 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8012bab4-6bda-4be8-b98b-4c46b99201e4-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:55 crc kubenswrapper[4672]: I0217 16:25:55.739202 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8012bab4-6bda-4be8-b98b-4c46b99201e4-kube-api-access-n4dn4" (OuterVolumeSpecName: "kube-api-access-n4dn4") pod "8012bab4-6bda-4be8-b98b-4c46b99201e4" (UID: "8012bab4-6bda-4be8-b98b-4c46b99201e4"). InnerVolumeSpecName "kube-api-access-n4dn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:25:55 crc kubenswrapper[4672]: I0217 16:25:55.750438 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:25:55 crc kubenswrapper[4672]: E0217 16:25:55.771266 4672 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod845c0587_4941_4756_b924_2d6078264b2c.slice/crio-conmon-40744341cf5eab09c169fde62f25f1fb9d702a47737ca68d9e17ceaf32818e79.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod845c0587_4941_4756_b924_2d6078264b2c.slice/crio-40744341cf5eab09c169fde62f25f1fb9d702a47737ca68d9e17ceaf32818e79.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:25:55 crc kubenswrapper[4672]: I0217 16:25:55.775905 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8012bab4-6bda-4be8-b98b-4c46b99201e4-config-data" (OuterVolumeSpecName: "config-data") pod "8012bab4-6bda-4be8-b98b-4c46b99201e4" (UID: "8012bab4-6bda-4be8-b98b-4c46b99201e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:55 crc kubenswrapper[4672]: I0217 16:25:55.787269 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8012bab4-6bda-4be8-b98b-4c46b99201e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8012bab4-6bda-4be8-b98b-4c46b99201e4" (UID: "8012bab4-6bda-4be8-b98b-4c46b99201e4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:55 crc kubenswrapper[4672]: I0217 16:25:55.835315 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r77pc\" (UniqueName: \"kubernetes.io/projected/8d0299a6-8cb1-4a31-89ba-1bb27adb34d1-kube-api-access-r77pc\") pod \"8d0299a6-8cb1-4a31-89ba-1bb27adb34d1\" (UID: \"8d0299a6-8cb1-4a31-89ba-1bb27adb34d1\") " Feb 17 16:25:55 crc kubenswrapper[4672]: I0217 16:25:55.835377 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d0299a6-8cb1-4a31-89ba-1bb27adb34d1-config-data\") pod \"8d0299a6-8cb1-4a31-89ba-1bb27adb34d1\" (UID: \"8d0299a6-8cb1-4a31-89ba-1bb27adb34d1\") " Feb 17 16:25:55 crc kubenswrapper[4672]: I0217 16:25:55.835446 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d0299a6-8cb1-4a31-89ba-1bb27adb34d1-combined-ca-bundle\") pod \"8d0299a6-8cb1-4a31-89ba-1bb27adb34d1\" (UID: \"8d0299a6-8cb1-4a31-89ba-1bb27adb34d1\") " Feb 17 16:25:55 crc kubenswrapper[4672]: I0217 16:25:55.835731 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8012bab4-6bda-4be8-b98b-4c46b99201e4-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:55 crc kubenswrapper[4672]: I0217 16:25:55.835743 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8012bab4-6bda-4be8-b98b-4c46b99201e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:55 crc kubenswrapper[4672]: I0217 16:25:55.835753 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4dn4\" (UniqueName: \"kubernetes.io/projected/8012bab4-6bda-4be8-b98b-4c46b99201e4-kube-api-access-n4dn4\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:55 crc kubenswrapper[4672]: I0217 
16:25:55.839340 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d0299a6-8cb1-4a31-89ba-1bb27adb34d1-kube-api-access-r77pc" (OuterVolumeSpecName: "kube-api-access-r77pc") pod "8d0299a6-8cb1-4a31-89ba-1bb27adb34d1" (UID: "8d0299a6-8cb1-4a31-89ba-1bb27adb34d1"). InnerVolumeSpecName "kube-api-access-r77pc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:25:55 crc kubenswrapper[4672]: I0217 16:25:55.864102 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d0299a6-8cb1-4a31-89ba-1bb27adb34d1-config-data" (OuterVolumeSpecName: "config-data") pod "8d0299a6-8cb1-4a31-89ba-1bb27adb34d1" (UID: "8d0299a6-8cb1-4a31-89ba-1bb27adb34d1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:55 crc kubenswrapper[4672]: I0217 16:25:55.870727 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d0299a6-8cb1-4a31-89ba-1bb27adb34d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d0299a6-8cb1-4a31-89ba-1bb27adb34d1" (UID: "8d0299a6-8cb1-4a31-89ba-1bb27adb34d1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:25:55 crc kubenswrapper[4672]: I0217 16:25:55.938571 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r77pc\" (UniqueName: \"kubernetes.io/projected/8d0299a6-8cb1-4a31-89ba-1bb27adb34d1-kube-api-access-r77pc\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:55 crc kubenswrapper[4672]: I0217 16:25:55.938616 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d0299a6-8cb1-4a31-89ba-1bb27adb34d1-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:55 crc kubenswrapper[4672]: I0217 16:25:55.938634 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d0299a6-8cb1-4a31-89ba-1bb27adb34d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.562300 4672 generic.go:334] "Generic (PLEG): container finished" podID="845c0587-4941-4756-b924-2d6078264b2c" containerID="40744341cf5eab09c169fde62f25f1fb9d702a47737ca68d9e17ceaf32818e79" exitCode=143 Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.563612 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"845c0587-4941-4756-b924-2d6078264b2c","Type":"ContainerDied","Data":"40744341cf5eab09c169fde62f25f1fb9d702a47737ca68d9e17ceaf32818e79"} Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.565875 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.566868 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.567352 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8d0299a6-8cb1-4a31-89ba-1bb27adb34d1","Type":"ContainerDied","Data":"7da6f902553e9c9900548d3ccad279045b6c7b668c71dbdb03309866fc0198f1"} Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.567399 4672 scope.go:117] "RemoveContainer" containerID="69d3cf0397eec94371f2a2242e32dc23fff5846097644cc3bb57d40c45253f1a" Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.620936 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.644185 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.673581 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.687598 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.693561 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 17 16:25:56 crc kubenswrapper[4672]: E0217 16:25:56.694088 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8012bab4-6bda-4be8-b98b-4c46b99201e4" containerName="nova-api-api" Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.694108 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="8012bab4-6bda-4be8-b98b-4c46b99201e4" containerName="nova-api-api" Feb 17 16:25:56 crc kubenswrapper[4672]: E0217 16:25:56.694129 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d89089a9-0ddc-4c81-a639-dd9dcf7e9163" containerName="nova-manage" Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.694136 4672 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d89089a9-0ddc-4c81-a639-dd9dcf7e9163" containerName="nova-manage" Feb 17 16:25:56 crc kubenswrapper[4672]: E0217 16:25:56.694150 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d0299a6-8cb1-4a31-89ba-1bb27adb34d1" containerName="nova-scheduler-scheduler" Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.694156 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d0299a6-8cb1-4a31-89ba-1bb27adb34d1" containerName="nova-scheduler-scheduler" Feb 17 16:25:56 crc kubenswrapper[4672]: E0217 16:25:56.694179 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8012bab4-6bda-4be8-b98b-4c46b99201e4" containerName="nova-api-log" Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.694185 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="8012bab4-6bda-4be8-b98b-4c46b99201e4" containerName="nova-api-log" Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.694392 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="8012bab4-6bda-4be8-b98b-4c46b99201e4" containerName="nova-api-log" Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.694412 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="8012bab4-6bda-4be8-b98b-4c46b99201e4" containerName="nova-api-api" Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.694473 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="d89089a9-0ddc-4c81-a639-dd9dcf7e9163" containerName="nova-manage" Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.694489 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d0299a6-8cb1-4a31-89ba-1bb27adb34d1" containerName="nova-scheduler-scheduler" Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.695612 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.697554 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.711118 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.741410 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.746107 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.760675 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.763481 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.860225 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gftgw\" (UniqueName: \"kubernetes.io/projected/777c853a-fcc3-4f2a-ad78-32bd1782655a-kube-api-access-gftgw\") pod \"nova-scheduler-0\" (UID: \"777c853a-fcc3-4f2a-ad78-32bd1782655a\") " pod="openstack/nova-scheduler-0" Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.860302 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57hzq\" (UniqueName: \"kubernetes.io/projected/cc5ff4ff-66c9-4b35-8cab-96245e66ccb2-kube-api-access-57hzq\") pod \"nova-api-0\" (UID: \"cc5ff4ff-66c9-4b35-8cab-96245e66ccb2\") " pod="openstack/nova-api-0" Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.860351 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/777c853a-fcc3-4f2a-ad78-32bd1782655a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"777c853a-fcc3-4f2a-ad78-32bd1782655a\") " pod="openstack/nova-scheduler-0" Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.860380 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc5ff4ff-66c9-4b35-8cab-96245e66ccb2-logs\") pod \"nova-api-0\" (UID: \"cc5ff4ff-66c9-4b35-8cab-96245e66ccb2\") " pod="openstack/nova-api-0" Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.860463 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc5ff4ff-66c9-4b35-8cab-96245e66ccb2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cc5ff4ff-66c9-4b35-8cab-96245e66ccb2\") " pod="openstack/nova-api-0" Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.860708 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc5ff4ff-66c9-4b35-8cab-96245e66ccb2-config-data\") pod \"nova-api-0\" (UID: \"cc5ff4ff-66c9-4b35-8cab-96245e66ccb2\") " pod="openstack/nova-api-0" Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.860764 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/777c853a-fcc3-4f2a-ad78-32bd1782655a-config-data\") pod \"nova-scheduler-0\" (UID: \"777c853a-fcc3-4f2a-ad78-32bd1782655a\") " pod="openstack/nova-scheduler-0" Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.962252 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc5ff4ff-66c9-4b35-8cab-96245e66ccb2-config-data\") pod \"nova-api-0\" (UID: \"cc5ff4ff-66c9-4b35-8cab-96245e66ccb2\") " 
pod="openstack/nova-api-0" Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.962325 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/777c853a-fcc3-4f2a-ad78-32bd1782655a-config-data\") pod \"nova-scheduler-0\" (UID: \"777c853a-fcc3-4f2a-ad78-32bd1782655a\") " pod="openstack/nova-scheduler-0" Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.962405 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gftgw\" (UniqueName: \"kubernetes.io/projected/777c853a-fcc3-4f2a-ad78-32bd1782655a-kube-api-access-gftgw\") pod \"nova-scheduler-0\" (UID: \"777c853a-fcc3-4f2a-ad78-32bd1782655a\") " pod="openstack/nova-scheduler-0" Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.962428 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57hzq\" (UniqueName: \"kubernetes.io/projected/cc5ff4ff-66c9-4b35-8cab-96245e66ccb2-kube-api-access-57hzq\") pod \"nova-api-0\" (UID: \"cc5ff4ff-66c9-4b35-8cab-96245e66ccb2\") " pod="openstack/nova-api-0" Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.962451 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/777c853a-fcc3-4f2a-ad78-32bd1782655a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"777c853a-fcc3-4f2a-ad78-32bd1782655a\") " pod="openstack/nova-scheduler-0" Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.962478 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc5ff4ff-66c9-4b35-8cab-96245e66ccb2-logs\") pod \"nova-api-0\" (UID: \"cc5ff4ff-66c9-4b35-8cab-96245e66ccb2\") " pod="openstack/nova-api-0" Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.962608 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/cc5ff4ff-66c9-4b35-8cab-96245e66ccb2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cc5ff4ff-66c9-4b35-8cab-96245e66ccb2\") " pod="openstack/nova-api-0" Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.964350 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc5ff4ff-66c9-4b35-8cab-96245e66ccb2-logs\") pod \"nova-api-0\" (UID: \"cc5ff4ff-66c9-4b35-8cab-96245e66ccb2\") " pod="openstack/nova-api-0" Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.968218 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/777c853a-fcc3-4f2a-ad78-32bd1782655a-config-data\") pod \"nova-scheduler-0\" (UID: \"777c853a-fcc3-4f2a-ad78-32bd1782655a\") " pod="openstack/nova-scheduler-0" Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.968376 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc5ff4ff-66c9-4b35-8cab-96245e66ccb2-config-data\") pod \"nova-api-0\" (UID: \"cc5ff4ff-66c9-4b35-8cab-96245e66ccb2\") " pod="openstack/nova-api-0" Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.972012 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/777c853a-fcc3-4f2a-ad78-32bd1782655a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"777c853a-fcc3-4f2a-ad78-32bd1782655a\") " pod="openstack/nova-scheduler-0" Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.979979 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc5ff4ff-66c9-4b35-8cab-96245e66ccb2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cc5ff4ff-66c9-4b35-8cab-96245e66ccb2\") " pod="openstack/nova-api-0" Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.982451 4672 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gftgw\" (UniqueName: \"kubernetes.io/projected/777c853a-fcc3-4f2a-ad78-32bd1782655a-kube-api-access-gftgw\") pod \"nova-scheduler-0\" (UID: \"777c853a-fcc3-4f2a-ad78-32bd1782655a\") " pod="openstack/nova-scheduler-0" Feb 17 16:25:56 crc kubenswrapper[4672]: I0217 16:25:56.983270 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57hzq\" (UniqueName: \"kubernetes.io/projected/cc5ff4ff-66c9-4b35-8cab-96245e66ccb2-kube-api-access-57hzq\") pod \"nova-api-0\" (UID: \"cc5ff4ff-66c9-4b35-8cab-96245e66ccb2\") " pod="openstack/nova-api-0" Feb 17 16:25:57 crc kubenswrapper[4672]: I0217 16:25:57.013000 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:25:57 crc kubenswrapper[4672]: I0217 16:25:57.079010 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:25:57 crc kubenswrapper[4672]: I0217 16:25:57.566621 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:25:57 crc kubenswrapper[4672]: I0217 16:25:57.566919 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:25:57 crc kubenswrapper[4672]: I0217 16:25:57.566963 4672 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" Feb 17 16:25:57 crc kubenswrapper[4672]: 
I0217 16:25:57.567685 4672 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1722f428334a1de321c821e299e3526dfaf27650f5a791aad97e83a2cd3ceac4"} pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:25:57 crc kubenswrapper[4672]: I0217 16:25:57.567731 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" containerID="cri-o://1722f428334a1de321c821e299e3526dfaf27650f5a791aad97e83a2cd3ceac4" gracePeriod=600 Feb 17 16:25:57 crc kubenswrapper[4672]: I0217 16:25:57.641583 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:25:57 crc kubenswrapper[4672]: I0217 16:25:57.740533 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:25:57 crc kubenswrapper[4672]: W0217 16:25:57.774075 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod777c853a_fcc3_4f2a_ad78_32bd1782655a.slice/crio-1d039d026d5c855131ab98c21d8096c995e2e8771a9af186544286203af174aa WatchSource:0}: Error finding container 1d039d026d5c855131ab98c21d8096c995e2e8771a9af186544286203af174aa: Status 404 returned error can't find the container with id 1d039d026d5c855131ab98c21d8096c995e2e8771a9af186544286203af174aa Feb 17 16:25:57 crc kubenswrapper[4672]: I0217 16:25:57.957872 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8012bab4-6bda-4be8-b98b-4c46b99201e4" path="/var/lib/kubelet/pods/8012bab4-6bda-4be8-b98b-4c46b99201e4/volumes" Feb 17 16:25:57 crc kubenswrapper[4672]: I0217 16:25:57.959169 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="8d0299a6-8cb1-4a31-89ba-1bb27adb34d1" path="/var/lib/kubelet/pods/8d0299a6-8cb1-4a31-89ba-1bb27adb34d1/volumes" Feb 17 16:25:58 crc kubenswrapper[4672]: I0217 16:25:58.588247 4672 generic.go:334] "Generic (PLEG): container finished" podID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerID="1722f428334a1de321c821e299e3526dfaf27650f5a791aad97e83a2cd3ceac4" exitCode=0 Feb 17 16:25:58 crc kubenswrapper[4672]: I0217 16:25:58.588330 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerDied","Data":"1722f428334a1de321c821e299e3526dfaf27650f5a791aad97e83a2cd3ceac4"} Feb 17 16:25:58 crc kubenswrapper[4672]: I0217 16:25:58.588584 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerStarted","Data":"788d6fae0de977927563b863088aef42316f3581ec13b8d2264de7cde8aac261"} Feb 17 16:25:58 crc kubenswrapper[4672]: I0217 16:25:58.588604 4672 scope.go:117] "RemoveContainer" containerID="6e5c44fe403356546654090676cb1aa54373e380600ecb186fac59fca3fb0ed3" Feb 17 16:25:58 crc kubenswrapper[4672]: I0217 16:25:58.591327 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"777c853a-fcc3-4f2a-ad78-32bd1782655a","Type":"ContainerStarted","Data":"7175aa643d710da236f5bd7ae8fddec92e4520c0cbb96d90168909a8b501bec0"} Feb 17 16:25:58 crc kubenswrapper[4672]: I0217 16:25:58.591374 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"777c853a-fcc3-4f2a-ad78-32bd1782655a","Type":"ContainerStarted","Data":"1d039d026d5c855131ab98c21d8096c995e2e8771a9af186544286203af174aa"} Feb 17 16:25:58 crc kubenswrapper[4672]: I0217 16:25:58.594962 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"cc5ff4ff-66c9-4b35-8cab-96245e66ccb2","Type":"ContainerStarted","Data":"81c2e8ed828d72c1dbe0ccc858c831ab2bd5e6c5dcb9d5a1918cd609dd88bf18"} Feb 17 16:25:58 crc kubenswrapper[4672]: I0217 16:25:58.595112 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cc5ff4ff-66c9-4b35-8cab-96245e66ccb2","Type":"ContainerStarted","Data":"9b48a1f51ca630c15c2efe92c9d8deec148f33df95627852b56a22c3c0b60531"} Feb 17 16:25:58 crc kubenswrapper[4672]: I0217 16:25:58.595179 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cc5ff4ff-66c9-4b35-8cab-96245e66ccb2","Type":"ContainerStarted","Data":"036b8ecab1503d84d6836f27f2f9297a93e2073a281b82789eab137778c1f7e6"} Feb 17 16:25:58 crc kubenswrapper[4672]: I0217 16:25:58.596762 4672 generic.go:334] "Generic (PLEG): container finished" podID="032644e0-8b08-4138-8e14-aee003b214d2" containerID="8f2a24d95a39e2bdc52a59a549c1d20dc5cd9223153269c654366b9b645808b5" exitCode=0 Feb 17 16:25:58 crc kubenswrapper[4672]: I0217 16:25:58.596848 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4jq6r" event={"ID":"032644e0-8b08-4138-8e14-aee003b214d2","Type":"ContainerDied","Data":"8f2a24d95a39e2bdc52a59a549c1d20dc5cd9223153269c654366b9b645808b5"} Feb 17 16:25:58 crc kubenswrapper[4672]: I0217 16:25:58.630042 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.630024035 podStartE2EDuration="2.630024035s" podCreationTimestamp="2026-02-17 16:25:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:25:58.626238605 +0000 UTC m=+1367.380327357" watchObservedRunningTime="2026-02-17 16:25:58.630024035 +0000 UTC m=+1367.384112757" Feb 17 16:25:58 crc kubenswrapper[4672]: I0217 16:25:58.671051 4672 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.671031926 podStartE2EDuration="2.671031926s" podCreationTimestamp="2026-02-17 16:25:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:25:58.657860179 +0000 UTC m=+1367.411948931" watchObservedRunningTime="2026-02-17 16:25:58.671031926 +0000 UTC m=+1367.425120678" Feb 17 16:25:59 crc kubenswrapper[4672]: I0217 16:25:59.251193 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 16:25:59 crc kubenswrapper[4672]: I0217 16:25:59.251743 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 16:26:00 crc kubenswrapper[4672]: I0217 16:26:00.085868 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4jq6r" Feb 17 16:26:00 crc kubenswrapper[4672]: I0217 16:26:00.230867 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/032644e0-8b08-4138-8e14-aee003b214d2-combined-ca-bundle\") pod \"032644e0-8b08-4138-8e14-aee003b214d2\" (UID: \"032644e0-8b08-4138-8e14-aee003b214d2\") " Feb 17 16:26:00 crc kubenswrapper[4672]: I0217 16:26:00.230932 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/032644e0-8b08-4138-8e14-aee003b214d2-config-data\") pod \"032644e0-8b08-4138-8e14-aee003b214d2\" (UID: \"032644e0-8b08-4138-8e14-aee003b214d2\") " Feb 17 16:26:00 crc kubenswrapper[4672]: I0217 16:26:00.231050 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/032644e0-8b08-4138-8e14-aee003b214d2-scripts\") pod \"032644e0-8b08-4138-8e14-aee003b214d2\" (UID: \"032644e0-8b08-4138-8e14-aee003b214d2\") " 
Feb 17 16:26:00 crc kubenswrapper[4672]: I0217 16:26:00.231782 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnr9f\" (UniqueName: \"kubernetes.io/projected/032644e0-8b08-4138-8e14-aee003b214d2-kube-api-access-qnr9f\") pod \"032644e0-8b08-4138-8e14-aee003b214d2\" (UID: \"032644e0-8b08-4138-8e14-aee003b214d2\") " Feb 17 16:26:00 crc kubenswrapper[4672]: I0217 16:26:00.240266 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/032644e0-8b08-4138-8e14-aee003b214d2-scripts" (OuterVolumeSpecName: "scripts") pod "032644e0-8b08-4138-8e14-aee003b214d2" (UID: "032644e0-8b08-4138-8e14-aee003b214d2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:26:00 crc kubenswrapper[4672]: I0217 16:26:00.255764 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/032644e0-8b08-4138-8e14-aee003b214d2-kube-api-access-qnr9f" (OuterVolumeSpecName: "kube-api-access-qnr9f") pod "032644e0-8b08-4138-8e14-aee003b214d2" (UID: "032644e0-8b08-4138-8e14-aee003b214d2"). InnerVolumeSpecName "kube-api-access-qnr9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:26:00 crc kubenswrapper[4672]: I0217 16:26:00.268550 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/032644e0-8b08-4138-8e14-aee003b214d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "032644e0-8b08-4138-8e14-aee003b214d2" (UID: "032644e0-8b08-4138-8e14-aee003b214d2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:26:00 crc kubenswrapper[4672]: I0217 16:26:00.273646 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/032644e0-8b08-4138-8e14-aee003b214d2-config-data" (OuterVolumeSpecName: "config-data") pod "032644e0-8b08-4138-8e14-aee003b214d2" (UID: "032644e0-8b08-4138-8e14-aee003b214d2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:26:00 crc kubenswrapper[4672]: I0217 16:26:00.334581 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/032644e0-8b08-4138-8e14-aee003b214d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:00 crc kubenswrapper[4672]: I0217 16:26:00.334640 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/032644e0-8b08-4138-8e14-aee003b214d2-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:00 crc kubenswrapper[4672]: I0217 16:26:00.334653 4672 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/032644e0-8b08-4138-8e14-aee003b214d2-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:00 crc kubenswrapper[4672]: I0217 16:26:00.334668 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnr9f\" (UniqueName: \"kubernetes.io/projected/032644e0-8b08-4138-8e14-aee003b214d2-kube-api-access-qnr9f\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:00 crc kubenswrapper[4672]: I0217 16:26:00.640960 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4jq6r" event={"ID":"032644e0-8b08-4138-8e14-aee003b214d2","Type":"ContainerDied","Data":"d4cf4985e73f3e1c89eb34af1d610017f5eafbb9902792d50b9992bc1e8d5307"} Feb 17 16:26:00 crc kubenswrapper[4672]: I0217 16:26:00.640999 4672 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="d4cf4985e73f3e1c89eb34af1d610017f5eafbb9902792d50b9992bc1e8d5307" Feb 17 16:26:00 crc kubenswrapper[4672]: I0217 16:26:00.641050 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4jq6r" Feb 17 16:26:00 crc kubenswrapper[4672]: I0217 16:26:00.730180 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 17 16:26:00 crc kubenswrapper[4672]: E0217 16:26:00.730598 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="032644e0-8b08-4138-8e14-aee003b214d2" containerName="nova-cell1-conductor-db-sync" Feb 17 16:26:00 crc kubenswrapper[4672]: I0217 16:26:00.730614 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="032644e0-8b08-4138-8e14-aee003b214d2" containerName="nova-cell1-conductor-db-sync" Feb 17 16:26:00 crc kubenswrapper[4672]: I0217 16:26:00.730796 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="032644e0-8b08-4138-8e14-aee003b214d2" containerName="nova-cell1-conductor-db-sync" Feb 17 16:26:00 crc kubenswrapper[4672]: I0217 16:26:00.731464 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 17 16:26:00 crc kubenswrapper[4672]: I0217 16:26:00.735567 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 17 16:26:00 crc kubenswrapper[4672]: I0217 16:26:00.752560 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 17 16:26:00 crc kubenswrapper[4672]: I0217 16:26:00.844337 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99xkx\" (UniqueName: \"kubernetes.io/projected/58a809fa-9243-47fa-9b98-08932cdef54f-kube-api-access-99xkx\") pod \"nova-cell1-conductor-0\" (UID: \"58a809fa-9243-47fa-9b98-08932cdef54f\") " pod="openstack/nova-cell1-conductor-0" Feb 17 16:26:00 crc kubenswrapper[4672]: I0217 16:26:00.844733 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58a809fa-9243-47fa-9b98-08932cdef54f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"58a809fa-9243-47fa-9b98-08932cdef54f\") " pod="openstack/nova-cell1-conductor-0" Feb 17 16:26:00 crc kubenswrapper[4672]: I0217 16:26:00.844824 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58a809fa-9243-47fa-9b98-08932cdef54f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"58a809fa-9243-47fa-9b98-08932cdef54f\") " pod="openstack/nova-cell1-conductor-0" Feb 17 16:26:00 crc kubenswrapper[4672]: I0217 16:26:00.946452 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99xkx\" (UniqueName: \"kubernetes.io/projected/58a809fa-9243-47fa-9b98-08932cdef54f-kube-api-access-99xkx\") pod \"nova-cell1-conductor-0\" (UID: \"58a809fa-9243-47fa-9b98-08932cdef54f\") " pod="openstack/nova-cell1-conductor-0" Feb 17 
16:26:00 crc kubenswrapper[4672]: I0217 16:26:00.946583 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58a809fa-9243-47fa-9b98-08932cdef54f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"58a809fa-9243-47fa-9b98-08932cdef54f\") " pod="openstack/nova-cell1-conductor-0" Feb 17 16:26:00 crc kubenswrapper[4672]: I0217 16:26:00.946623 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58a809fa-9243-47fa-9b98-08932cdef54f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"58a809fa-9243-47fa-9b98-08932cdef54f\") " pod="openstack/nova-cell1-conductor-0" Feb 17 16:26:00 crc kubenswrapper[4672]: I0217 16:26:00.951915 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58a809fa-9243-47fa-9b98-08932cdef54f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"58a809fa-9243-47fa-9b98-08932cdef54f\") " pod="openstack/nova-cell1-conductor-0" Feb 17 16:26:00 crc kubenswrapper[4672]: I0217 16:26:00.955397 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58a809fa-9243-47fa-9b98-08932cdef54f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"58a809fa-9243-47fa-9b98-08932cdef54f\") " pod="openstack/nova-cell1-conductor-0" Feb 17 16:26:00 crc kubenswrapper[4672]: I0217 16:26:00.966629 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99xkx\" (UniqueName: \"kubernetes.io/projected/58a809fa-9243-47fa-9b98-08932cdef54f-kube-api-access-99xkx\") pod \"nova-cell1-conductor-0\" (UID: \"58a809fa-9243-47fa-9b98-08932cdef54f\") " pod="openstack/nova-cell1-conductor-0" Feb 17 16:26:01 crc kubenswrapper[4672]: I0217 16:26:01.080815 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 17 16:26:01 crc kubenswrapper[4672]: W0217 16:26:01.639687 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58a809fa_9243_47fa_9b98_08932cdef54f.slice/crio-085712274d15acd9c44c23cc69f26f756e144d82109295ae5458c78e6bdbbf0c WatchSource:0}: Error finding container 085712274d15acd9c44c23cc69f26f756e144d82109295ae5458c78e6bdbbf0c: Status 404 returned error can't find the container with id 085712274d15acd9c44c23cc69f26f756e144d82109295ae5458c78e6bdbbf0c Feb 17 16:26:01 crc kubenswrapper[4672]: I0217 16:26:01.640587 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 17 16:26:01 crc kubenswrapper[4672]: I0217 16:26:01.656580 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"58a809fa-9243-47fa-9b98-08932cdef54f","Type":"ContainerStarted","Data":"085712274d15acd9c44c23cc69f26f756e144d82109295ae5458c78e6bdbbf0c"} Feb 17 16:26:02 crc kubenswrapper[4672]: I0217 16:26:02.079362 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 17 16:26:02 crc kubenswrapper[4672]: I0217 16:26:02.670190 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"58a809fa-9243-47fa-9b98-08932cdef54f","Type":"ContainerStarted","Data":"c86e43de41e5354d3b9b9a2a20cac3ad3e1103b0fc37d85741e3d5f372aff897"} Feb 17 16:26:02 crc kubenswrapper[4672]: I0217 16:26:02.670758 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 17 16:26:02 crc kubenswrapper[4672]: I0217 16:26:02.699332 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.6993142839999997 podStartE2EDuration="2.699314284s" podCreationTimestamp="2026-02-17 
16:26:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:26:02.684189095 +0000 UTC m=+1371.438277827" watchObservedRunningTime="2026-02-17 16:26:02.699314284 +0000 UTC m=+1371.453403016" Feb 17 16:26:05 crc kubenswrapper[4672]: I0217 16:26:05.513898 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 17 16:26:06 crc kubenswrapper[4672]: I0217 16:26:06.112149 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 17 16:26:07 crc kubenswrapper[4672]: I0217 16:26:07.013275 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 16:26:07 crc kubenswrapper[4672]: I0217 16:26:07.013357 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 16:26:07 crc kubenswrapper[4672]: I0217 16:26:07.080158 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 17 16:26:07 crc kubenswrapper[4672]: I0217 16:26:07.118208 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 17 16:26:07 crc kubenswrapper[4672]: I0217 16:26:07.795580 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 17 16:26:08 crc kubenswrapper[4672]: I0217 16:26:08.095688 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="cc5ff4ff-66c9-4b35-8cab-96245e66ccb2" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.223:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 16:26:08 crc kubenswrapper[4672]: I0217 16:26:08.095697 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
podUID="cc5ff4ff-66c9-4b35-8cab-96245e66ccb2" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.223:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 16:26:09 crc kubenswrapper[4672]: I0217 16:26:09.715644 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 16:26:09 crc kubenswrapper[4672]: I0217 16:26:09.716923 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="0494473e-5e65-47bf-b3a3-6d8c7b27243f" containerName="kube-state-metrics" containerID="cri-o://63bd95b9b89a263cee605e8b05c867038d9f52a76a9efc344fde7fa796684a4b" gracePeriod=30 Feb 17 16:26:10 crc kubenswrapper[4672]: I0217 16:26:10.288795 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 16:26:10 crc kubenswrapper[4672]: I0217 16:26:10.449220 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrvq5\" (UniqueName: \"kubernetes.io/projected/0494473e-5e65-47bf-b3a3-6d8c7b27243f-kube-api-access-wrvq5\") pod \"0494473e-5e65-47bf-b3a3-6d8c7b27243f\" (UID: \"0494473e-5e65-47bf-b3a3-6d8c7b27243f\") " Feb 17 16:26:10 crc kubenswrapper[4672]: I0217 16:26:10.460659 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0494473e-5e65-47bf-b3a3-6d8c7b27243f-kube-api-access-wrvq5" (OuterVolumeSpecName: "kube-api-access-wrvq5") pod "0494473e-5e65-47bf-b3a3-6d8c7b27243f" (UID: "0494473e-5e65-47bf-b3a3-6d8c7b27243f"). InnerVolumeSpecName "kube-api-access-wrvq5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:26:10 crc kubenswrapper[4672]: I0217 16:26:10.552152 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrvq5\" (UniqueName: \"kubernetes.io/projected/0494473e-5e65-47bf-b3a3-6d8c7b27243f-kube-api-access-wrvq5\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:10 crc kubenswrapper[4672]: I0217 16:26:10.805111 4672 generic.go:334] "Generic (PLEG): container finished" podID="0494473e-5e65-47bf-b3a3-6d8c7b27243f" containerID="63bd95b9b89a263cee605e8b05c867038d9f52a76a9efc344fde7fa796684a4b" exitCode=2 Feb 17 16:26:10 crc kubenswrapper[4672]: I0217 16:26:10.805157 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0494473e-5e65-47bf-b3a3-6d8c7b27243f","Type":"ContainerDied","Data":"63bd95b9b89a263cee605e8b05c867038d9f52a76a9efc344fde7fa796684a4b"} Feb 17 16:26:10 crc kubenswrapper[4672]: I0217 16:26:10.805184 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0494473e-5e65-47bf-b3a3-6d8c7b27243f","Type":"ContainerDied","Data":"160a5dcd6c50e13d075b639333cde720e6d5debf6bc30c633f6cb619b7a51a7f"} Feb 17 16:26:10 crc kubenswrapper[4672]: I0217 16:26:10.805201 4672 scope.go:117] "RemoveContainer" containerID="63bd95b9b89a263cee605e8b05c867038d9f52a76a9efc344fde7fa796684a4b" Feb 17 16:26:10 crc kubenswrapper[4672]: I0217 16:26:10.805212 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 16:26:10 crc kubenswrapper[4672]: I0217 16:26:10.839021 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 16:26:10 crc kubenswrapper[4672]: I0217 16:26:10.841252 4672 scope.go:117] "RemoveContainer" containerID="63bd95b9b89a263cee605e8b05c867038d9f52a76a9efc344fde7fa796684a4b" Feb 17 16:26:10 crc kubenswrapper[4672]: E0217 16:26:10.844258 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63bd95b9b89a263cee605e8b05c867038d9f52a76a9efc344fde7fa796684a4b\": container with ID starting with 63bd95b9b89a263cee605e8b05c867038d9f52a76a9efc344fde7fa796684a4b not found: ID does not exist" containerID="63bd95b9b89a263cee605e8b05c867038d9f52a76a9efc344fde7fa796684a4b" Feb 17 16:26:10 crc kubenswrapper[4672]: I0217 16:26:10.844301 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63bd95b9b89a263cee605e8b05c867038d9f52a76a9efc344fde7fa796684a4b"} err="failed to get container status \"63bd95b9b89a263cee605e8b05c867038d9f52a76a9efc344fde7fa796684a4b\": rpc error: code = NotFound desc = could not find container \"63bd95b9b89a263cee605e8b05c867038d9f52a76a9efc344fde7fa796684a4b\": container with ID starting with 63bd95b9b89a263cee605e8b05c867038d9f52a76a9efc344fde7fa796684a4b not found: ID does not exist" Feb 17 16:26:10 crc kubenswrapper[4672]: I0217 16:26:10.852595 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 16:26:10 crc kubenswrapper[4672]: I0217 16:26:10.863103 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 16:26:10 crc kubenswrapper[4672]: E0217 16:26:10.863480 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0494473e-5e65-47bf-b3a3-6d8c7b27243f" containerName="kube-state-metrics" Feb 17 16:26:10 crc 
kubenswrapper[4672]: I0217 16:26:10.863496 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="0494473e-5e65-47bf-b3a3-6d8c7b27243f" containerName="kube-state-metrics" Feb 17 16:26:10 crc kubenswrapper[4672]: I0217 16:26:10.863701 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="0494473e-5e65-47bf-b3a3-6d8c7b27243f" containerName="kube-state-metrics" Feb 17 16:26:10 crc kubenswrapper[4672]: I0217 16:26:10.864385 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 16:26:10 crc kubenswrapper[4672]: I0217 16:26:10.868971 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 17 16:26:10 crc kubenswrapper[4672]: I0217 16:26:10.868990 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 17 16:26:10 crc kubenswrapper[4672]: I0217 16:26:10.882403 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 16:26:10 crc kubenswrapper[4672]: I0217 16:26:10.962448 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/fedb9fdf-2db2-4982-8136-b432cecd1f88-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"fedb9fdf-2db2-4982-8136-b432cecd1f88\") " pod="openstack/kube-state-metrics-0" Feb 17 16:26:10 crc kubenswrapper[4672]: I0217 16:26:10.962555 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fedb9fdf-2db2-4982-8136-b432cecd1f88-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"fedb9fdf-2db2-4982-8136-b432cecd1f88\") " pod="openstack/kube-state-metrics-0" Feb 17 16:26:10 crc kubenswrapper[4672]: I0217 16:26:10.962623 4672 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhd8n\" (UniqueName: \"kubernetes.io/projected/fedb9fdf-2db2-4982-8136-b432cecd1f88-kube-api-access-zhd8n\") pod \"kube-state-metrics-0\" (UID: \"fedb9fdf-2db2-4982-8136-b432cecd1f88\") " pod="openstack/kube-state-metrics-0" Feb 17 16:26:10 crc kubenswrapper[4672]: I0217 16:26:10.962771 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/fedb9fdf-2db2-4982-8136-b432cecd1f88-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"fedb9fdf-2db2-4982-8136-b432cecd1f88\") " pod="openstack/kube-state-metrics-0" Feb 17 16:26:11 crc kubenswrapper[4672]: I0217 16:26:11.065367 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/fedb9fdf-2db2-4982-8136-b432cecd1f88-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"fedb9fdf-2db2-4982-8136-b432cecd1f88\") " pod="openstack/kube-state-metrics-0" Feb 17 16:26:11 crc kubenswrapper[4672]: I0217 16:26:11.065449 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fedb9fdf-2db2-4982-8136-b432cecd1f88-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"fedb9fdf-2db2-4982-8136-b432cecd1f88\") " pod="openstack/kube-state-metrics-0" Feb 17 16:26:11 crc kubenswrapper[4672]: I0217 16:26:11.065621 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhd8n\" (UniqueName: \"kubernetes.io/projected/fedb9fdf-2db2-4982-8136-b432cecd1f88-kube-api-access-zhd8n\") pod \"kube-state-metrics-0\" (UID: \"fedb9fdf-2db2-4982-8136-b432cecd1f88\") " pod="openstack/kube-state-metrics-0" Feb 17 16:26:11 crc kubenswrapper[4672]: I0217 16:26:11.065688 4672 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/fedb9fdf-2db2-4982-8136-b432cecd1f88-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"fedb9fdf-2db2-4982-8136-b432cecd1f88\") " pod="openstack/kube-state-metrics-0" Feb 17 16:26:11 crc kubenswrapper[4672]: I0217 16:26:11.071259 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/fedb9fdf-2db2-4982-8136-b432cecd1f88-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"fedb9fdf-2db2-4982-8136-b432cecd1f88\") " pod="openstack/kube-state-metrics-0" Feb 17 16:26:11 crc kubenswrapper[4672]: I0217 16:26:11.073538 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fedb9fdf-2db2-4982-8136-b432cecd1f88-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"fedb9fdf-2db2-4982-8136-b432cecd1f88\") " pod="openstack/kube-state-metrics-0" Feb 17 16:26:11 crc kubenswrapper[4672]: I0217 16:26:11.078297 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/fedb9fdf-2db2-4982-8136-b432cecd1f88-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"fedb9fdf-2db2-4982-8136-b432cecd1f88\") " pod="openstack/kube-state-metrics-0" Feb 17 16:26:11 crc kubenswrapper[4672]: I0217 16:26:11.091229 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhd8n\" (UniqueName: \"kubernetes.io/projected/fedb9fdf-2db2-4982-8136-b432cecd1f88-kube-api-access-zhd8n\") pod \"kube-state-metrics-0\" (UID: \"fedb9fdf-2db2-4982-8136-b432cecd1f88\") " pod="openstack/kube-state-metrics-0" Feb 17 16:26:11 crc kubenswrapper[4672]: I0217 16:26:11.254950 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 16:26:11 crc kubenswrapper[4672]: I0217 16:26:11.737215 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 16:26:11 crc kubenswrapper[4672]: W0217 16:26:11.742350 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfedb9fdf_2db2_4982_8136_b432cecd1f88.slice/crio-2f23446d563f1e9f2c719187915570191998f9142d29282e7ff3bfab79d6f0e1 WatchSource:0}: Error finding container 2f23446d563f1e9f2c719187915570191998f9142d29282e7ff3bfab79d6f0e1: Status 404 returned error can't find the container with id 2f23446d563f1e9f2c719187915570191998f9142d29282e7ff3bfab79d6f0e1 Feb 17 16:26:11 crc kubenswrapper[4672]: I0217 16:26:11.795352 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:26:11 crc kubenswrapper[4672]: I0217 16:26:11.795630 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="efa123df-bae8-4cdf-aa2b-cc425f4be0ff" containerName="ceilometer-central-agent" containerID="cri-o://c21460ed62d7ff1d837d52d7de1c007e2ef3b33f19fb6ecd52bcafe7222a193d" gracePeriod=30 Feb 17 16:26:11 crc kubenswrapper[4672]: I0217 16:26:11.795674 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="efa123df-bae8-4cdf-aa2b-cc425f4be0ff" containerName="proxy-httpd" containerID="cri-o://5a2c65a5e75413a4ef56a14d7a367111503642497a8f5b9c8024db288fc3c809" gracePeriod=30 Feb 17 16:26:11 crc kubenswrapper[4672]: I0217 16:26:11.795715 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="efa123df-bae8-4cdf-aa2b-cc425f4be0ff" containerName="sg-core" containerID="cri-o://1d070aec6ea1d9387b41610394a85ee80856321a47da867b016ebe3e7d6cf9ab" gracePeriod=30 Feb 17 16:26:11 crc kubenswrapper[4672]: I0217 
16:26:11.796082 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="efa123df-bae8-4cdf-aa2b-cc425f4be0ff" containerName="ceilometer-notification-agent" containerID="cri-o://763dbb3aebc3aa9e1c574c33f845a65ec8313c095d5e09552d05b001ea10d6cf" gracePeriod=30 Feb 17 16:26:11 crc kubenswrapper[4672]: I0217 16:26:11.820364 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fedb9fdf-2db2-4982-8136-b432cecd1f88","Type":"ContainerStarted","Data":"2f23446d563f1e9f2c719187915570191998f9142d29282e7ff3bfab79d6f0e1"} Feb 17 16:26:11 crc kubenswrapper[4672]: I0217 16:26:11.962692 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0494473e-5e65-47bf-b3a3-6d8c7b27243f" path="/var/lib/kubelet/pods/0494473e-5e65-47bf-b3a3-6d8c7b27243f/volumes" Feb 17 16:26:12 crc kubenswrapper[4672]: I0217 16:26:12.832921 4672 generic.go:334] "Generic (PLEG): container finished" podID="efa123df-bae8-4cdf-aa2b-cc425f4be0ff" containerID="5a2c65a5e75413a4ef56a14d7a367111503642497a8f5b9c8024db288fc3c809" exitCode=0 Feb 17 16:26:12 crc kubenswrapper[4672]: I0217 16:26:12.833235 4672 generic.go:334] "Generic (PLEG): container finished" podID="efa123df-bae8-4cdf-aa2b-cc425f4be0ff" containerID="1d070aec6ea1d9387b41610394a85ee80856321a47da867b016ebe3e7d6cf9ab" exitCode=2 Feb 17 16:26:12 crc kubenswrapper[4672]: I0217 16:26:12.833251 4672 generic.go:334] "Generic (PLEG): container finished" podID="efa123df-bae8-4cdf-aa2b-cc425f4be0ff" containerID="c21460ed62d7ff1d837d52d7de1c007e2ef3b33f19fb6ecd52bcafe7222a193d" exitCode=0 Feb 17 16:26:12 crc kubenswrapper[4672]: I0217 16:26:12.832996 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efa123df-bae8-4cdf-aa2b-cc425f4be0ff","Type":"ContainerDied","Data":"5a2c65a5e75413a4ef56a14d7a367111503642497a8f5b9c8024db288fc3c809"} Feb 17 16:26:12 crc kubenswrapper[4672]: I0217 16:26:12.833384 
4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efa123df-bae8-4cdf-aa2b-cc425f4be0ff","Type":"ContainerDied","Data":"1d070aec6ea1d9387b41610394a85ee80856321a47da867b016ebe3e7d6cf9ab"} Feb 17 16:26:12 crc kubenswrapper[4672]: I0217 16:26:12.833408 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efa123df-bae8-4cdf-aa2b-cc425f4be0ff","Type":"ContainerDied","Data":"c21460ed62d7ff1d837d52d7de1c007e2ef3b33f19fb6ecd52bcafe7222a193d"} Feb 17 16:26:12 crc kubenswrapper[4672]: I0217 16:26:12.836654 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fedb9fdf-2db2-4982-8136-b432cecd1f88","Type":"ContainerStarted","Data":"1a4cc233e83fd25ce2ebab719f9c1bc6dca18592de4d9f950ff085bf3cf793f6"} Feb 17 16:26:12 crc kubenswrapper[4672]: I0217 16:26:12.836849 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 17 16:26:12 crc kubenswrapper[4672]: I0217 16:26:12.863681 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.411835635 podStartE2EDuration="2.863659056s" podCreationTimestamp="2026-02-17 16:26:10 +0000 UTC" firstStartedPulling="2026-02-17 16:26:11.745392105 +0000 UTC m=+1380.499480837" lastFinishedPulling="2026-02-17 16:26:12.197215526 +0000 UTC m=+1380.951304258" observedRunningTime="2026-02-17 16:26:12.856698522 +0000 UTC m=+1381.610787254" watchObservedRunningTime="2026-02-17 16:26:12.863659056 +0000 UTC m=+1381.617747828" Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.585474 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.637226 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-scripts\") pod \"efa123df-bae8-4cdf-aa2b-cc425f4be0ff\" (UID: \"efa123df-bae8-4cdf-aa2b-cc425f4be0ff\") " Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.637290 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9zzw\" (UniqueName: \"kubernetes.io/projected/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-kube-api-access-z9zzw\") pod \"efa123df-bae8-4cdf-aa2b-cc425f4be0ff\" (UID: \"efa123df-bae8-4cdf-aa2b-cc425f4be0ff\") " Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.637394 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-log-httpd\") pod \"efa123df-bae8-4cdf-aa2b-cc425f4be0ff\" (UID: \"efa123df-bae8-4cdf-aa2b-cc425f4be0ff\") " Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.637460 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-sg-core-conf-yaml\") pod \"efa123df-bae8-4cdf-aa2b-cc425f4be0ff\" (UID: \"efa123df-bae8-4cdf-aa2b-cc425f4be0ff\") " Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.637485 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-run-httpd\") pod \"efa123df-bae8-4cdf-aa2b-cc425f4be0ff\" (UID: \"efa123df-bae8-4cdf-aa2b-cc425f4be0ff\") " Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.637552 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-config-data\") pod \"efa123df-bae8-4cdf-aa2b-cc425f4be0ff\" (UID: \"efa123df-bae8-4cdf-aa2b-cc425f4be0ff\") " Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.637679 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-combined-ca-bundle\") pod \"efa123df-bae8-4cdf-aa2b-cc425f4be0ff\" (UID: \"efa123df-bae8-4cdf-aa2b-cc425f4be0ff\") " Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.637929 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "efa123df-bae8-4cdf-aa2b-cc425f4be0ff" (UID: "efa123df-bae8-4cdf-aa2b-cc425f4be0ff"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.638240 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "efa123df-bae8-4cdf-aa2b-cc425f4be0ff" (UID: "efa123df-bae8-4cdf-aa2b-cc425f4be0ff"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.638374 4672 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.638398 4672 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.654855 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-kube-api-access-z9zzw" (OuterVolumeSpecName: "kube-api-access-z9zzw") pod "efa123df-bae8-4cdf-aa2b-cc425f4be0ff" (UID: "efa123df-bae8-4cdf-aa2b-cc425f4be0ff"). InnerVolumeSpecName "kube-api-access-z9zzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.657828 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-scripts" (OuterVolumeSpecName: "scripts") pod "efa123df-bae8-4cdf-aa2b-cc425f4be0ff" (UID: "efa123df-bae8-4cdf-aa2b-cc425f4be0ff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.713318 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "efa123df-bae8-4cdf-aa2b-cc425f4be0ff" (UID: "efa123df-bae8-4cdf-aa2b-cc425f4be0ff"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.741911 4672 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.741977 4672 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.741990 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9zzw\" (UniqueName: \"kubernetes.io/projected/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-kube-api-access-z9zzw\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.788626 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "efa123df-bae8-4cdf-aa2b-cc425f4be0ff" (UID: "efa123df-bae8-4cdf-aa2b-cc425f4be0ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.824601 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-config-data" (OuterVolumeSpecName: "config-data") pod "efa123df-bae8-4cdf-aa2b-cc425f4be0ff" (UID: "efa123df-bae8-4cdf-aa2b-cc425f4be0ff"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.843453 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.843487 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efa123df-bae8-4cdf-aa2b-cc425f4be0ff-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.869182 4672 generic.go:334] "Generic (PLEG): container finished" podID="efa123df-bae8-4cdf-aa2b-cc425f4be0ff" containerID="763dbb3aebc3aa9e1c574c33f845a65ec8313c095d5e09552d05b001ea10d6cf" exitCode=0
Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.869227 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efa123df-bae8-4cdf-aa2b-cc425f4be0ff","Type":"ContainerDied","Data":"763dbb3aebc3aa9e1c574c33f845a65ec8313c095d5e09552d05b001ea10d6cf"}
Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.869254 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efa123df-bae8-4cdf-aa2b-cc425f4be0ff","Type":"ContainerDied","Data":"b563e345db894df346cf66325d0ad0a846f1709f8d4ba0b167287d1cb754c415"}
Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.869278 4672 scope.go:117] "RemoveContainer" containerID="5a2c65a5e75413a4ef56a14d7a367111503642497a8f5b9c8024db288fc3c809"
Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.869417 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.891470 4672 scope.go:117] "RemoveContainer" containerID="1d070aec6ea1d9387b41610394a85ee80856321a47da867b016ebe3e7d6cf9ab"
Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.909052 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.920107 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.928372 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:26:14 crc kubenswrapper[4672]: E0217 16:26:14.928782 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efa123df-bae8-4cdf-aa2b-cc425f4be0ff" containerName="ceilometer-central-agent"
Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.928800 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="efa123df-bae8-4cdf-aa2b-cc425f4be0ff" containerName="ceilometer-central-agent"
Feb 17 16:26:14 crc kubenswrapper[4672]: E0217 16:26:14.928809 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efa123df-bae8-4cdf-aa2b-cc425f4be0ff" containerName="proxy-httpd"
Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.928816 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="efa123df-bae8-4cdf-aa2b-cc425f4be0ff" containerName="proxy-httpd"
Feb 17 16:26:14 crc kubenswrapper[4672]: E0217 16:26:14.928849 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efa123df-bae8-4cdf-aa2b-cc425f4be0ff" containerName="sg-core"
Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.928858 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="efa123df-bae8-4cdf-aa2b-cc425f4be0ff" containerName="sg-core"
Feb 17 16:26:14 crc kubenswrapper[4672]: E0217 16:26:14.928876 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efa123df-bae8-4cdf-aa2b-cc425f4be0ff" containerName="ceilometer-notification-agent"
Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.928882 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="efa123df-bae8-4cdf-aa2b-cc425f4be0ff" containerName="ceilometer-notification-agent"
Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.929054 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="efa123df-bae8-4cdf-aa2b-cc425f4be0ff" containerName="ceilometer-central-agent"
Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.929074 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="efa123df-bae8-4cdf-aa2b-cc425f4be0ff" containerName="proxy-httpd"
Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.929086 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="efa123df-bae8-4cdf-aa2b-cc425f4be0ff" containerName="sg-core"
Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.929096 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="efa123df-bae8-4cdf-aa2b-cc425f4be0ff" containerName="ceilometer-notification-agent"
Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.930895 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.934582 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.934605 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.935725 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.939892 4672 scope.go:117] "RemoveContainer" containerID="763dbb3aebc3aa9e1c574c33f845a65ec8313c095d5e09552d05b001ea10d6cf"
Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.946947 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.973337 4672 scope.go:117] "RemoveContainer" containerID="c21460ed62d7ff1d837d52d7de1c007e2ef3b33f19fb6ecd52bcafe7222a193d"
Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.993920 4672 scope.go:117] "RemoveContainer" containerID="5a2c65a5e75413a4ef56a14d7a367111503642497a8f5b9c8024db288fc3c809"
Feb 17 16:26:14 crc kubenswrapper[4672]: E0217 16:26:14.994632 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a2c65a5e75413a4ef56a14d7a367111503642497a8f5b9c8024db288fc3c809\": container with ID starting with 5a2c65a5e75413a4ef56a14d7a367111503642497a8f5b9c8024db288fc3c809 not found: ID does not exist" containerID="5a2c65a5e75413a4ef56a14d7a367111503642497a8f5b9c8024db288fc3c809"
Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.994678 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a2c65a5e75413a4ef56a14d7a367111503642497a8f5b9c8024db288fc3c809"} err="failed to get container status \"5a2c65a5e75413a4ef56a14d7a367111503642497a8f5b9c8024db288fc3c809\": rpc error: code = NotFound desc = could not find container \"5a2c65a5e75413a4ef56a14d7a367111503642497a8f5b9c8024db288fc3c809\": container with ID starting with 5a2c65a5e75413a4ef56a14d7a367111503642497a8f5b9c8024db288fc3c809 not found: ID does not exist"
Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.994707 4672 scope.go:117] "RemoveContainer" containerID="1d070aec6ea1d9387b41610394a85ee80856321a47da867b016ebe3e7d6cf9ab"
Feb 17 16:26:14 crc kubenswrapper[4672]: E0217 16:26:14.995112 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d070aec6ea1d9387b41610394a85ee80856321a47da867b016ebe3e7d6cf9ab\": container with ID starting with 1d070aec6ea1d9387b41610394a85ee80856321a47da867b016ebe3e7d6cf9ab not found: ID does not exist" containerID="1d070aec6ea1d9387b41610394a85ee80856321a47da867b016ebe3e7d6cf9ab"
Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.995171 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d070aec6ea1d9387b41610394a85ee80856321a47da867b016ebe3e7d6cf9ab"} err="failed to get container status \"1d070aec6ea1d9387b41610394a85ee80856321a47da867b016ebe3e7d6cf9ab\": rpc error: code = NotFound desc = could not find container \"1d070aec6ea1d9387b41610394a85ee80856321a47da867b016ebe3e7d6cf9ab\": container with ID starting with 1d070aec6ea1d9387b41610394a85ee80856321a47da867b016ebe3e7d6cf9ab not found: ID does not exist"
Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.995228 4672 scope.go:117] "RemoveContainer" containerID="763dbb3aebc3aa9e1c574c33f845a65ec8313c095d5e09552d05b001ea10d6cf"
Feb 17 16:26:14 crc kubenswrapper[4672]: E0217 16:26:14.995626 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"763dbb3aebc3aa9e1c574c33f845a65ec8313c095d5e09552d05b001ea10d6cf\": container with ID starting with 763dbb3aebc3aa9e1c574c33f845a65ec8313c095d5e09552d05b001ea10d6cf not found: ID does not exist" containerID="763dbb3aebc3aa9e1c574c33f845a65ec8313c095d5e09552d05b001ea10d6cf"
Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.995654 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"763dbb3aebc3aa9e1c574c33f845a65ec8313c095d5e09552d05b001ea10d6cf"} err="failed to get container status \"763dbb3aebc3aa9e1c574c33f845a65ec8313c095d5e09552d05b001ea10d6cf\": rpc error: code = NotFound desc = could not find container \"763dbb3aebc3aa9e1c574c33f845a65ec8313c095d5e09552d05b001ea10d6cf\": container with ID starting with 763dbb3aebc3aa9e1c574c33f845a65ec8313c095d5e09552d05b001ea10d6cf not found: ID does not exist"
Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.995676 4672 scope.go:117] "RemoveContainer" containerID="c21460ed62d7ff1d837d52d7de1c007e2ef3b33f19fb6ecd52bcafe7222a193d"
Feb 17 16:26:14 crc kubenswrapper[4672]: E0217 16:26:14.995919 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c21460ed62d7ff1d837d52d7de1c007e2ef3b33f19fb6ecd52bcafe7222a193d\": container with ID starting with c21460ed62d7ff1d837d52d7de1c007e2ef3b33f19fb6ecd52bcafe7222a193d not found: ID does not exist" containerID="c21460ed62d7ff1d837d52d7de1c007e2ef3b33f19fb6ecd52bcafe7222a193d"
Feb 17 16:26:14 crc kubenswrapper[4672]: I0217 16:26:14.995961 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c21460ed62d7ff1d837d52d7de1c007e2ef3b33f19fb6ecd52bcafe7222a193d"} err="failed to get container status \"c21460ed62d7ff1d837d52d7de1c007e2ef3b33f19fb6ecd52bcafe7222a193d\": rpc error: code = NotFound desc = could not find container \"c21460ed62d7ff1d837d52d7de1c007e2ef3b33f19fb6ecd52bcafe7222a193d\": container with ID starting with c21460ed62d7ff1d837d52d7de1c007e2ef3b33f19fb6ecd52bcafe7222a193d not found: ID does not exist"
Feb 17 16:26:15 crc kubenswrapper[4672]: I0217 16:26:15.049232 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-log-httpd\") pod \"ceilometer-0\" (UID: \"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d\") " pod="openstack/ceilometer-0"
Feb 17 16:26:15 crc kubenswrapper[4672]: I0217 16:26:15.049315 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d\") " pod="openstack/ceilometer-0"
Feb 17 16:26:15 crc kubenswrapper[4672]: I0217 16:26:15.049380 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d\") " pod="openstack/ceilometer-0"
Feb 17 16:26:15 crc kubenswrapper[4672]: I0217 16:26:15.049427 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d\") " pod="openstack/ceilometer-0"
Feb 17 16:26:15 crc kubenswrapper[4672]: I0217 16:26:15.049471 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-config-data\") pod \"ceilometer-0\" (UID: \"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d\") " pod="openstack/ceilometer-0"
Feb 17 16:26:15 crc kubenswrapper[4672]: I0217 16:26:15.049576 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrfxn\" (UniqueName: \"kubernetes.io/projected/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-kube-api-access-nrfxn\") pod \"ceilometer-0\" (UID: \"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d\") " pod="openstack/ceilometer-0"
Feb 17 16:26:15 crc kubenswrapper[4672]: I0217 16:26:15.049665 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-run-httpd\") pod \"ceilometer-0\" (UID: \"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d\") " pod="openstack/ceilometer-0"
Feb 17 16:26:15 crc kubenswrapper[4672]: I0217 16:26:15.049868 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-scripts\") pod \"ceilometer-0\" (UID: \"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d\") " pod="openstack/ceilometer-0"
Feb 17 16:26:15 crc kubenswrapper[4672]: I0217 16:26:15.152678 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrfxn\" (UniqueName: \"kubernetes.io/projected/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-kube-api-access-nrfxn\") pod \"ceilometer-0\" (UID: \"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d\") " pod="openstack/ceilometer-0"
Feb 17 16:26:15 crc kubenswrapper[4672]: I0217 16:26:15.153449 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-run-httpd\") pod \"ceilometer-0\" (UID: \"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d\") " pod="openstack/ceilometer-0"
Feb 17 16:26:15 crc kubenswrapper[4672]: I0217 16:26:15.153570 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-scripts\") pod \"ceilometer-0\" (UID: \"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d\") " pod="openstack/ceilometer-0"
Feb 17 16:26:15 crc kubenswrapper[4672]: I0217 16:26:15.153751 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-log-httpd\") pod \"ceilometer-0\" (UID: \"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d\") " pod="openstack/ceilometer-0"
Feb 17 16:26:15 crc kubenswrapper[4672]: I0217 16:26:15.153850 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d\") " pod="openstack/ceilometer-0"
Feb 17 16:26:15 crc kubenswrapper[4672]: I0217 16:26:15.153938 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d\") " pod="openstack/ceilometer-0"
Feb 17 16:26:15 crc kubenswrapper[4672]: I0217 16:26:15.154009 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d\") " pod="openstack/ceilometer-0"
Feb 17 16:26:15 crc kubenswrapper[4672]: I0217 16:26:15.154062 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-config-data\") pod \"ceilometer-0\" (UID: \"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d\") " pod="openstack/ceilometer-0"
Feb 17 16:26:15 crc kubenswrapper[4672]: I0217 16:26:15.154253 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-run-httpd\") pod \"ceilometer-0\" (UID: \"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d\") " pod="openstack/ceilometer-0"
Feb 17 16:26:15 crc kubenswrapper[4672]: I0217 16:26:15.155855 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-log-httpd\") pod \"ceilometer-0\" (UID: \"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d\") " pod="openstack/ceilometer-0"
Feb 17 16:26:15 crc kubenswrapper[4672]: I0217 16:26:15.159003 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d\") " pod="openstack/ceilometer-0"
Feb 17 16:26:15 crc kubenswrapper[4672]: I0217 16:26:15.159118 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-scripts\") pod \"ceilometer-0\" (UID: \"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d\") " pod="openstack/ceilometer-0"
Feb 17 16:26:15 crc kubenswrapper[4672]: I0217 16:26:15.160845 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d\") " pod="openstack/ceilometer-0"
Feb 17 16:26:15 crc kubenswrapper[4672]: I0217 16:26:15.163897 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-config-data\") pod \"ceilometer-0\" (UID: \"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d\") " pod="openstack/ceilometer-0"
Feb 17 16:26:15 crc kubenswrapper[4672]: I0217 16:26:15.164198 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d\") " pod="openstack/ceilometer-0"
Feb 17 16:26:15 crc kubenswrapper[4672]: I0217 16:26:15.175577 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrfxn\" (UniqueName: \"kubernetes.io/projected/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-kube-api-access-nrfxn\") pod \"ceilometer-0\" (UID: \"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d\") " pod="openstack/ceilometer-0"
Feb 17 16:26:15 crc kubenswrapper[4672]: I0217 16:26:15.260385 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 16:26:15 crc kubenswrapper[4672]: I0217 16:26:15.766979 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:26:15 crc kubenswrapper[4672]: I0217 16:26:15.880029 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d","Type":"ContainerStarted","Data":"52ae5ffd056e91ce23e680ea8b755aa3e62dc474fbd0f21ce1dbf29a0682243a"}
Feb 17 16:26:15 crc kubenswrapper[4672]: I0217 16:26:15.958695 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efa123df-bae8-4cdf-aa2b-cc425f4be0ff" path="/var/lib/kubelet/pods/efa123df-bae8-4cdf-aa2b-cc425f4be0ff/volumes"
Feb 17 16:26:16 crc kubenswrapper[4672]: I0217 16:26:16.913070 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d","Type":"ContainerStarted","Data":"1ac811730a11539a00f9396feb7b0d67dea69bb41dbdf002bdf98f91613a160e"}
Feb 17 16:26:17 crc kubenswrapper[4672]: I0217 16:26:17.019698 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 17 16:26:17 crc kubenswrapper[4672]: I0217 16:26:17.020327 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 17 16:26:17 crc kubenswrapper[4672]: I0217 16:26:17.028893 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 17 16:26:17 crc kubenswrapper[4672]: I0217 16:26:17.029004 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 17 16:26:17 crc kubenswrapper[4672]: I0217 16:26:17.923061 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d","Type":"ContainerStarted","Data":"ab4ce04716181035804507713ea7b15f74830e5d98b6df0bf1ec6460e7305be2"}
Feb 17 16:26:17 crc kubenswrapper[4672]: I0217 16:26:17.923319 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 17 16:26:17 crc kubenswrapper[4672]: I0217 16:26:17.923331 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d","Type":"ContainerStarted","Data":"b92174dda8b6831ea822668f1e208a0b2d781b6b8baf99ad287015f3ca44a674"}
Feb 17 16:26:18 crc kubenswrapper[4672]: I0217 16:26:18.065163 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 17 16:26:18 crc kubenswrapper[4672]: I0217 16:26:18.274862 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-64c8b5dcc-w87br"]
Feb 17 16:26:18 crc kubenswrapper[4672]: I0217 16:26:18.276505 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64c8b5dcc-w87br"
Feb 17 16:26:18 crc kubenswrapper[4672]: I0217 16:26:18.298258 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64c8b5dcc-w87br"]
Feb 17 16:26:18 crc kubenswrapper[4672]: I0217 16:26:18.431689 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8520f022-ba09-48ff-a7e7-1d8f55225a69-ovsdbserver-sb\") pod \"dnsmasq-dns-64c8b5dcc-w87br\" (UID: \"8520f022-ba09-48ff-a7e7-1d8f55225a69\") " pod="openstack/dnsmasq-dns-64c8b5dcc-w87br"
Feb 17 16:26:18 crc kubenswrapper[4672]: I0217 16:26:18.431727 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8520f022-ba09-48ff-a7e7-1d8f55225a69-ovsdbserver-nb\") pod \"dnsmasq-dns-64c8b5dcc-w87br\" (UID: \"8520f022-ba09-48ff-a7e7-1d8f55225a69\") " pod="openstack/dnsmasq-dns-64c8b5dcc-w87br"
Feb 17 16:26:18 crc kubenswrapper[4672]: I0217 16:26:18.431751 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8520f022-ba09-48ff-a7e7-1d8f55225a69-dns-svc\") pod \"dnsmasq-dns-64c8b5dcc-w87br\" (UID: \"8520f022-ba09-48ff-a7e7-1d8f55225a69\") " pod="openstack/dnsmasq-dns-64c8b5dcc-w87br"
Feb 17 16:26:18 crc kubenswrapper[4672]: I0217 16:26:18.432102 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8520f022-ba09-48ff-a7e7-1d8f55225a69-dns-swift-storage-0\") pod \"dnsmasq-dns-64c8b5dcc-w87br\" (UID: \"8520f022-ba09-48ff-a7e7-1d8f55225a69\") " pod="openstack/dnsmasq-dns-64c8b5dcc-w87br"
Feb 17 16:26:18 crc kubenswrapper[4672]: I0217 16:26:18.432274 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8520f022-ba09-48ff-a7e7-1d8f55225a69-config\") pod \"dnsmasq-dns-64c8b5dcc-w87br\" (UID: \"8520f022-ba09-48ff-a7e7-1d8f55225a69\") " pod="openstack/dnsmasq-dns-64c8b5dcc-w87br"
Feb 17 16:26:18 crc kubenswrapper[4672]: I0217 16:26:18.432436 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv9vf\" (UniqueName: \"kubernetes.io/projected/8520f022-ba09-48ff-a7e7-1d8f55225a69-kube-api-access-mv9vf\") pod \"dnsmasq-dns-64c8b5dcc-w87br\" (UID: \"8520f022-ba09-48ff-a7e7-1d8f55225a69\") " pod="openstack/dnsmasq-dns-64c8b5dcc-w87br"
Feb 17 16:26:18 crc kubenswrapper[4672]: I0217 16:26:18.534562 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8520f022-ba09-48ff-a7e7-1d8f55225a69-ovsdbserver-nb\") pod \"dnsmasq-dns-64c8b5dcc-w87br\" (UID: \"8520f022-ba09-48ff-a7e7-1d8f55225a69\") " pod="openstack/dnsmasq-dns-64c8b5dcc-w87br"
Feb 17 16:26:18 crc kubenswrapper[4672]: I0217 16:26:18.534611 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8520f022-ba09-48ff-a7e7-1d8f55225a69-dns-svc\") pod \"dnsmasq-dns-64c8b5dcc-w87br\" (UID: \"8520f022-ba09-48ff-a7e7-1d8f55225a69\") " pod="openstack/dnsmasq-dns-64c8b5dcc-w87br"
Feb 17 16:26:18 crc kubenswrapper[4672]: I0217 16:26:18.534705 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8520f022-ba09-48ff-a7e7-1d8f55225a69-dns-swift-storage-0\") pod \"dnsmasq-dns-64c8b5dcc-w87br\" (UID: \"8520f022-ba09-48ff-a7e7-1d8f55225a69\") " pod="openstack/dnsmasq-dns-64c8b5dcc-w87br"
Feb 17 16:26:18 crc kubenswrapper[4672]: I0217 16:26:18.534736 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8520f022-ba09-48ff-a7e7-1d8f55225a69-config\") pod \"dnsmasq-dns-64c8b5dcc-w87br\" (UID: \"8520f022-ba09-48ff-a7e7-1d8f55225a69\") " pod="openstack/dnsmasq-dns-64c8b5dcc-w87br"
Feb 17 16:26:18 crc kubenswrapper[4672]: I0217 16:26:18.534778 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv9vf\" (UniqueName: \"kubernetes.io/projected/8520f022-ba09-48ff-a7e7-1d8f55225a69-kube-api-access-mv9vf\") pod \"dnsmasq-dns-64c8b5dcc-w87br\" (UID: \"8520f022-ba09-48ff-a7e7-1d8f55225a69\") " pod="openstack/dnsmasq-dns-64c8b5dcc-w87br"
Feb 17 16:26:18 crc kubenswrapper[4672]: I0217 16:26:18.534848 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8520f022-ba09-48ff-a7e7-1d8f55225a69-ovsdbserver-sb\") pod \"dnsmasq-dns-64c8b5dcc-w87br\" (UID: \"8520f022-ba09-48ff-a7e7-1d8f55225a69\") " pod="openstack/dnsmasq-dns-64c8b5dcc-w87br"
Feb 17 16:26:18 crc kubenswrapper[4672]: I0217 16:26:18.535670 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8520f022-ba09-48ff-a7e7-1d8f55225a69-ovsdbserver-nb\") pod \"dnsmasq-dns-64c8b5dcc-w87br\" (UID: \"8520f022-ba09-48ff-a7e7-1d8f55225a69\") " pod="openstack/dnsmasq-dns-64c8b5dcc-w87br"
Feb 17 16:26:18 crc kubenswrapper[4672]: I0217 16:26:18.535803 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8520f022-ba09-48ff-a7e7-1d8f55225a69-config\") pod \"dnsmasq-dns-64c8b5dcc-w87br\" (UID: \"8520f022-ba09-48ff-a7e7-1d8f55225a69\") " pod="openstack/dnsmasq-dns-64c8b5dcc-w87br"
Feb 17 16:26:18 crc kubenswrapper[4672]: I0217 16:26:18.535808 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8520f022-ba09-48ff-a7e7-1d8f55225a69-dns-svc\") pod \"dnsmasq-dns-64c8b5dcc-w87br\" (UID: \"8520f022-ba09-48ff-a7e7-1d8f55225a69\") " pod="openstack/dnsmasq-dns-64c8b5dcc-w87br"
Feb 17 16:26:18 crc kubenswrapper[4672]: I0217 16:26:18.535813 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8520f022-ba09-48ff-a7e7-1d8f55225a69-dns-swift-storage-0\") pod \"dnsmasq-dns-64c8b5dcc-w87br\" (UID: \"8520f022-ba09-48ff-a7e7-1d8f55225a69\") " pod="openstack/dnsmasq-dns-64c8b5dcc-w87br"
Feb 17 16:26:18 crc kubenswrapper[4672]: I0217 16:26:18.536001 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8520f022-ba09-48ff-a7e7-1d8f55225a69-ovsdbserver-sb\") pod \"dnsmasq-dns-64c8b5dcc-w87br\" (UID: \"8520f022-ba09-48ff-a7e7-1d8f55225a69\") " pod="openstack/dnsmasq-dns-64c8b5dcc-w87br"
Feb 17 16:26:18 crc kubenswrapper[4672]: I0217 16:26:18.557944 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv9vf\" (UniqueName: \"kubernetes.io/projected/8520f022-ba09-48ff-a7e7-1d8f55225a69-kube-api-access-mv9vf\") pod \"dnsmasq-dns-64c8b5dcc-w87br\" (UID: \"8520f022-ba09-48ff-a7e7-1d8f55225a69\") " pod="openstack/dnsmasq-dns-64c8b5dcc-w87br"
Feb 17 16:26:18 crc kubenswrapper[4672]: I0217 16:26:18.599069 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64c8b5dcc-w87br"
Feb 17 16:26:19 crc kubenswrapper[4672]: I0217 16:26:19.120450 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64c8b5dcc-w87br"]
Feb 17 16:26:19 crc kubenswrapper[4672]: I0217 16:26:19.931791 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:26:19 crc kubenswrapper[4672]: I0217 16:26:19.954941 4672 generic.go:334] "Generic (PLEG): container finished" podID="8520f022-ba09-48ff-a7e7-1d8f55225a69" containerID="9d2cd1a9d544ccf31401dca8e1f81b63310c7e5351247a6afd9010822a403768" exitCode=0
Feb 17 16:26:19 crc kubenswrapper[4672]: I0217 16:26:19.958620 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 17 16:26:19 crc kubenswrapper[4672]: I0217 16:26:19.958706 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d","Type":"ContainerStarted","Data":"ce01d49db3d16ed01b65719581fc59b566f9c1cb02dc0b256250fb6f84b497cf"}
Feb 17 16:26:19 crc kubenswrapper[4672]: I0217 16:26:19.958764 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64c8b5dcc-w87br" event={"ID":"8520f022-ba09-48ff-a7e7-1d8f55225a69","Type":"ContainerDied","Data":"9d2cd1a9d544ccf31401dca8e1f81b63310c7e5351247a6afd9010822a403768"}
Feb 17 16:26:19 crc kubenswrapper[4672]: I0217 16:26:19.958820 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64c8b5dcc-w87br" event={"ID":"8520f022-ba09-48ff-a7e7-1d8f55225a69","Type":"ContainerStarted","Data":"fdd58ec5ec2fd730b7d0e96c83c01fd3f08f68b5f2c86e1d72a961028b2a4fc3"}
Feb 17 16:26:19 crc kubenswrapper[4672]: I0217 16:26:19.983494 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.8855190630000003 podStartE2EDuration="5.983476945s" podCreationTimestamp="2026-02-17 16:26:14 +0000 UTC" firstStartedPulling="2026-02-17 16:26:15.778272213 +0000 UTC m=+1384.532360945" lastFinishedPulling="2026-02-17 16:26:18.876230095 +0000 UTC m=+1387.630318827" observedRunningTime="2026-02-17 16:26:19.978427782 +0000 UTC m=+1388.732516514" watchObservedRunningTime="2026-02-17 16:26:19.983476945 +0000 UTC m=+1388.737565677"
Feb 17 16:26:20 crc kubenswrapper[4672]: I0217 16:26:20.764563 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 17 16:26:20 crc kubenswrapper[4672]: I0217 16:26:20.965745 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64c8b5dcc-w87br" event={"ID":"8520f022-ba09-48ff-a7e7-1d8f55225a69","Type":"ContainerStarted","Data":"58d7610309c352aa467eb7bd43abeaa2b6b8cfd8fd57f238ee68c2a23b7f67eb"}
Feb 17 16:26:20 crc kubenswrapper[4672]: I0217 16:26:20.965977 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="cc5ff4ff-66c9-4b35-8cab-96245e66ccb2" containerName="nova-api-log" containerID="cri-o://9b48a1f51ca630c15c2efe92c9d8deec148f33df95627852b56a22c3c0b60531" gracePeriod=30
Feb 17 16:26:20 crc kubenswrapper[4672]: I0217 16:26:20.966009 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="cc5ff4ff-66c9-4b35-8cab-96245e66ccb2" containerName="nova-api-api" containerID="cri-o://81c2e8ed828d72c1dbe0ccc858c831ab2bd5e6c5dcb9d5a1918cd609dd88bf18" gracePeriod=30
Feb 17 16:26:20 crc kubenswrapper[4672]: I0217 16:26:20.966140 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="605a2fdf-d09e-41ce-b8e5-32f5f1d3301d" containerName="sg-core" containerID="cri-o://ab4ce04716181035804507713ea7b15f74830e5d98b6df0bf1ec6460e7305be2" gracePeriod=30
Feb 17 16:26:20 crc kubenswrapper[4672]: I0217 16:26:20.966165 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="605a2fdf-d09e-41ce-b8e5-32f5f1d3301d" containerName="proxy-httpd" containerID="cri-o://ce01d49db3d16ed01b65719581fc59b566f9c1cb02dc0b256250fb6f84b497cf" gracePeriod=30
Feb 17 16:26:20 crc kubenswrapper[4672]: I0217 16:26:20.966205 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="605a2fdf-d09e-41ce-b8e5-32f5f1d3301d" containerName="ceilometer-notification-agent" containerID="cri-o://b92174dda8b6831ea822668f1e208a0b2d781b6b8baf99ad287015f3ca44a674" gracePeriod=30
Feb 17 16:26:20 crc kubenswrapper[4672]: I0217 16:26:20.966585 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="605a2fdf-d09e-41ce-b8e5-32f5f1d3301d" containerName="ceilometer-central-agent" containerID="cri-o://1ac811730a11539a00f9396feb7b0d67dea69bb41dbdf002bdf98f91613a160e" gracePeriod=30
Feb 17 16:26:21 crc kubenswrapper[4672]: I0217 16:26:21.002304 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-64c8b5dcc-w87br" podStartSLOduration=3.002282194 podStartE2EDuration="3.002282194s" podCreationTimestamp="2026-02-17 16:26:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:26:20.990212086 +0000 UTC m=+1389.744300818" watchObservedRunningTime="2026-02-17 16:26:21.002282194 +0000 UTC m=+1389.756370926"
Feb 17 16:26:21 crc kubenswrapper[4672]: I0217 16:26:21.272478 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Feb 17 16:26:21 crc kubenswrapper[4672]: W0217 16:26:21.488896 4672 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod605a2fdf_d09e_41ce_b8e5_32f5f1d3301d.slice/crio-ce01d49db3d16ed01b65719581fc59b566f9c1cb02dc0b256250fb6f84b497cf.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod605a2fdf_d09e_41ce_b8e5_32f5f1d3301d.slice/crio-ce01d49db3d16ed01b65719581fc59b566f9c1cb02dc0b256250fb6f84b497cf.scope: no such file or directory
Feb 17 16:26:21 crc kubenswrapper[4672]: W0217 16:26:21.489151 4672 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8520f022_ba09_48ff_a7e7_1d8f55225a69.slice/crio-conmon-9d2cd1a9d544ccf31401dca8e1f81b63310c7e5351247a6afd9010822a403768.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8520f022_ba09_48ff_a7e7_1d8f55225a69.slice/crio-conmon-9d2cd1a9d544ccf31401dca8e1f81b63310c7e5351247a6afd9010822a403768.scope: no such file or directory
Feb 17 16:26:21 crc kubenswrapper[4672]: W0217 16:26:21.489245 4672 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8520f022_ba09_48ff_a7e7_1d8f55225a69.slice/crio-9d2cd1a9d544ccf31401dca8e1f81b63310c7e5351247a6afd9010822a403768.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8520f022_ba09_48ff_a7e7_1d8f55225a69.slice/crio-9d2cd1a9d544ccf31401dca8e1f81b63310c7e5351247a6afd9010822a403768.scope: no such file or directory
Feb 17 16:26:21 crc kubenswrapper[4672]: I0217 16:26:21.828417 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 17 16:26:21 crc kubenswrapper[4672]: I0217 16:26:21.908179 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81df8d25-8781-4cf7-ad12-ce90fb01aa1e-combined-ca-bundle\") pod \"81df8d25-8781-4cf7-ad12-ce90fb01aa1e\" (UID: \"81df8d25-8781-4cf7-ad12-ce90fb01aa1e\") "
Feb 17 16:26:21 crc kubenswrapper[4672]: I0217 16:26:21.908487 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81df8d25-8781-4cf7-ad12-ce90fb01aa1e-config-data\") pod \"81df8d25-8781-4cf7-ad12-ce90fb01aa1e\" (UID: \"81df8d25-8781-4cf7-ad12-ce90fb01aa1e\") "
Feb 17 16:26:21 crc kubenswrapper[4672]: I0217 16:26:21.908534 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgvcm\" (UniqueName: \"kubernetes.io/projected/81df8d25-8781-4cf7-ad12-ce90fb01aa1e-kube-api-access-jgvcm\") pod \"81df8d25-8781-4cf7-ad12-ce90fb01aa1e\" (UID: \"81df8d25-8781-4cf7-ad12-ce90fb01aa1e\") "
Feb 17 16:26:21 crc kubenswrapper[4672]: I0217 16:26:21.914862 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81df8d25-8781-4cf7-ad12-ce90fb01aa1e-kube-api-access-jgvcm" (OuterVolumeSpecName: "kube-api-access-jgvcm") pod "81df8d25-8781-4cf7-ad12-ce90fb01aa1e" (UID: "81df8d25-8781-4cf7-ad12-ce90fb01aa1e"). InnerVolumeSpecName "kube-api-access-jgvcm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:26:21 crc kubenswrapper[4672]: I0217 16:26:21.940700 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81df8d25-8781-4cf7-ad12-ce90fb01aa1e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81df8d25-8781-4cf7-ad12-ce90fb01aa1e" (UID: "81df8d25-8781-4cf7-ad12-ce90fb01aa1e"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:26:21 crc kubenswrapper[4672]: I0217 16:26:21.942351 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81df8d25-8781-4cf7-ad12-ce90fb01aa1e-config-data" (OuterVolumeSpecName: "config-data") pod "81df8d25-8781-4cf7-ad12-ce90fb01aa1e" (UID: "81df8d25-8781-4cf7-ad12-ce90fb01aa1e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:26:21 crc kubenswrapper[4672]: I0217 16:26:21.976025 4672 generic.go:334] "Generic (PLEG): container finished" podID="cc5ff4ff-66c9-4b35-8cab-96245e66ccb2" containerID="9b48a1f51ca630c15c2efe92c9d8deec148f33df95627852b56a22c3c0b60531" exitCode=143 Feb 17 16:26:21 crc kubenswrapper[4672]: I0217 16:26:21.976095 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cc5ff4ff-66c9-4b35-8cab-96245e66ccb2","Type":"ContainerDied","Data":"9b48a1f51ca630c15c2efe92c9d8deec148f33df95627852b56a22c3c0b60531"} Feb 17 16:26:21 crc kubenswrapper[4672]: I0217 16:26:21.978062 4672 generic.go:334] "Generic (PLEG): container finished" podID="81df8d25-8781-4cf7-ad12-ce90fb01aa1e" containerID="4bf47ccec39e63a8aa7b9b0365cf3fbacc9ec8137987972ea2594327c683378c" exitCode=137 Feb 17 16:26:21 crc kubenswrapper[4672]: I0217 16:26:21.978121 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"81df8d25-8781-4cf7-ad12-ce90fb01aa1e","Type":"ContainerDied","Data":"4bf47ccec39e63a8aa7b9b0365cf3fbacc9ec8137987972ea2594327c683378c"} Feb 17 16:26:21 crc kubenswrapper[4672]: I0217 16:26:21.978194 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"81df8d25-8781-4cf7-ad12-ce90fb01aa1e","Type":"ContainerDied","Data":"2ec260169a2347ad090462f514d67443e561b88c39e6abed482ccf2f1c7576e7"} Feb 17 16:26:21 crc kubenswrapper[4672]: I0217 16:26:21.978218 4672 scope.go:117] 
"RemoveContainer" containerID="4bf47ccec39e63a8aa7b9b0365cf3fbacc9ec8137987972ea2594327c683378c" Feb 17 16:26:21 crc kubenswrapper[4672]: I0217 16:26:21.978136 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:26:21 crc kubenswrapper[4672]: I0217 16:26:21.981132 4672 generic.go:334] "Generic (PLEG): container finished" podID="605a2fdf-d09e-41ce-b8e5-32f5f1d3301d" containerID="ce01d49db3d16ed01b65719581fc59b566f9c1cb02dc0b256250fb6f84b497cf" exitCode=0 Feb 17 16:26:21 crc kubenswrapper[4672]: I0217 16:26:21.981175 4672 generic.go:334] "Generic (PLEG): container finished" podID="605a2fdf-d09e-41ce-b8e5-32f5f1d3301d" containerID="ab4ce04716181035804507713ea7b15f74830e5d98b6df0bf1ec6460e7305be2" exitCode=2 Feb 17 16:26:21 crc kubenswrapper[4672]: I0217 16:26:21.981183 4672 generic.go:334] "Generic (PLEG): container finished" podID="605a2fdf-d09e-41ce-b8e5-32f5f1d3301d" containerID="b92174dda8b6831ea822668f1e208a0b2d781b6b8baf99ad287015f3ca44a674" exitCode=0 Feb 17 16:26:21 crc kubenswrapper[4672]: I0217 16:26:21.981202 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d","Type":"ContainerDied","Data":"ce01d49db3d16ed01b65719581fc59b566f9c1cb02dc0b256250fb6f84b497cf"} Feb 17 16:26:21 crc kubenswrapper[4672]: I0217 16:26:21.981249 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d","Type":"ContainerDied","Data":"ab4ce04716181035804507713ea7b15f74830e5d98b6df0bf1ec6460e7305be2"} Feb 17 16:26:21 crc kubenswrapper[4672]: I0217 16:26:21.981266 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d","Type":"ContainerDied","Data":"b92174dda8b6831ea822668f1e208a0b2d781b6b8baf99ad287015f3ca44a674"} Feb 17 16:26:21 crc kubenswrapper[4672]: I0217 
16:26:21.981569 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-64c8b5dcc-w87br" Feb 17 16:26:22 crc kubenswrapper[4672]: I0217 16:26:22.004042 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:26:22 crc kubenswrapper[4672]: I0217 16:26:22.008389 4672 scope.go:117] "RemoveContainer" containerID="4bf47ccec39e63a8aa7b9b0365cf3fbacc9ec8137987972ea2594327c683378c" Feb 17 16:26:22 crc kubenswrapper[4672]: E0217 16:26:22.008932 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bf47ccec39e63a8aa7b9b0365cf3fbacc9ec8137987972ea2594327c683378c\": container with ID starting with 4bf47ccec39e63a8aa7b9b0365cf3fbacc9ec8137987972ea2594327c683378c not found: ID does not exist" containerID="4bf47ccec39e63a8aa7b9b0365cf3fbacc9ec8137987972ea2594327c683378c" Feb 17 16:26:22 crc kubenswrapper[4672]: I0217 16:26:22.008965 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bf47ccec39e63a8aa7b9b0365cf3fbacc9ec8137987972ea2594327c683378c"} err="failed to get container status \"4bf47ccec39e63a8aa7b9b0365cf3fbacc9ec8137987972ea2594327c683378c\": rpc error: code = NotFound desc = could not find container \"4bf47ccec39e63a8aa7b9b0365cf3fbacc9ec8137987972ea2594327c683378c\": container with ID starting with 4bf47ccec39e63a8aa7b9b0365cf3fbacc9ec8137987972ea2594327c683378c not found: ID does not exist" Feb 17 16:26:22 crc kubenswrapper[4672]: I0217 16:26:22.010334 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81df8d25-8781-4cf7-ad12-ce90fb01aa1e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:22 crc kubenswrapper[4672]: I0217 16:26:22.010353 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/81df8d25-8781-4cf7-ad12-ce90fb01aa1e-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:22 crc kubenswrapper[4672]: I0217 16:26:22.010363 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgvcm\" (UniqueName: \"kubernetes.io/projected/81df8d25-8781-4cf7-ad12-ce90fb01aa1e-kube-api-access-jgvcm\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:22 crc kubenswrapper[4672]: I0217 16:26:22.021693 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:26:22 crc kubenswrapper[4672]: I0217 16:26:22.030726 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:26:22 crc kubenswrapper[4672]: E0217 16:26:22.031260 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81df8d25-8781-4cf7-ad12-ce90fb01aa1e" containerName="nova-cell1-novncproxy-novncproxy" Feb 17 16:26:22 crc kubenswrapper[4672]: I0217 16:26:22.031280 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="81df8d25-8781-4cf7-ad12-ce90fb01aa1e" containerName="nova-cell1-novncproxy-novncproxy" Feb 17 16:26:22 crc kubenswrapper[4672]: I0217 16:26:22.031479 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="81df8d25-8781-4cf7-ad12-ce90fb01aa1e" containerName="nova-cell1-novncproxy-novncproxy" Feb 17 16:26:22 crc kubenswrapper[4672]: I0217 16:26:22.032260 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:26:22 crc kubenswrapper[4672]: I0217 16:26:22.038162 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 17 16:26:22 crc kubenswrapper[4672]: I0217 16:26:22.038470 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 17 16:26:22 crc kubenswrapper[4672]: I0217 16:26:22.038708 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 17 16:26:22 crc kubenswrapper[4672]: I0217 16:26:22.040912 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:26:22 crc kubenswrapper[4672]: I0217 16:26:22.111935 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2ee7683-5a7e-45f0-b14b-0b5ddb382eaa-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e2ee7683-5a7e-45f0-b14b-0b5ddb382eaa\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:26:22 crc kubenswrapper[4672]: I0217 16:26:22.111982 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2ee7683-5a7e-45f0-b14b-0b5ddb382eaa-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e2ee7683-5a7e-45f0-b14b-0b5ddb382eaa\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:26:22 crc kubenswrapper[4672]: I0217 16:26:22.112038 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cvp9\" (UniqueName: \"kubernetes.io/projected/e2ee7683-5a7e-45f0-b14b-0b5ddb382eaa-kube-api-access-6cvp9\") pod \"nova-cell1-novncproxy-0\" (UID: \"e2ee7683-5a7e-45f0-b14b-0b5ddb382eaa\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:26:22 crc 
kubenswrapper[4672]: I0217 16:26:22.112107 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2ee7683-5a7e-45f0-b14b-0b5ddb382eaa-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e2ee7683-5a7e-45f0-b14b-0b5ddb382eaa\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:26:22 crc kubenswrapper[4672]: I0217 16:26:22.112166 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2ee7683-5a7e-45f0-b14b-0b5ddb382eaa-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e2ee7683-5a7e-45f0-b14b-0b5ddb382eaa\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:26:22 crc kubenswrapper[4672]: I0217 16:26:22.213797 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2ee7683-5a7e-45f0-b14b-0b5ddb382eaa-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e2ee7683-5a7e-45f0-b14b-0b5ddb382eaa\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:26:22 crc kubenswrapper[4672]: I0217 16:26:22.213874 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2ee7683-5a7e-45f0-b14b-0b5ddb382eaa-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e2ee7683-5a7e-45f0-b14b-0b5ddb382eaa\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:26:22 crc kubenswrapper[4672]: I0217 16:26:22.213989 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2ee7683-5a7e-45f0-b14b-0b5ddb382eaa-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e2ee7683-5a7e-45f0-b14b-0b5ddb382eaa\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:26:22 crc kubenswrapper[4672]: I0217 
16:26:22.214017 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2ee7683-5a7e-45f0-b14b-0b5ddb382eaa-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e2ee7683-5a7e-45f0-b14b-0b5ddb382eaa\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:26:22 crc kubenswrapper[4672]: I0217 16:26:22.214048 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cvp9\" (UniqueName: \"kubernetes.io/projected/e2ee7683-5a7e-45f0-b14b-0b5ddb382eaa-kube-api-access-6cvp9\") pod \"nova-cell1-novncproxy-0\" (UID: \"e2ee7683-5a7e-45f0-b14b-0b5ddb382eaa\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:26:22 crc kubenswrapper[4672]: I0217 16:26:22.218224 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2ee7683-5a7e-45f0-b14b-0b5ddb382eaa-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e2ee7683-5a7e-45f0-b14b-0b5ddb382eaa\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:26:22 crc kubenswrapper[4672]: I0217 16:26:22.218379 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2ee7683-5a7e-45f0-b14b-0b5ddb382eaa-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e2ee7683-5a7e-45f0-b14b-0b5ddb382eaa\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:26:22 crc kubenswrapper[4672]: I0217 16:26:22.218727 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2ee7683-5a7e-45f0-b14b-0b5ddb382eaa-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e2ee7683-5a7e-45f0-b14b-0b5ddb382eaa\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:26:22 crc kubenswrapper[4672]: I0217 16:26:22.241130 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2ee7683-5a7e-45f0-b14b-0b5ddb382eaa-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e2ee7683-5a7e-45f0-b14b-0b5ddb382eaa\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:26:22 crc kubenswrapper[4672]: I0217 16:26:22.243777 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cvp9\" (UniqueName: \"kubernetes.io/projected/e2ee7683-5a7e-45f0-b14b-0b5ddb382eaa-kube-api-access-6cvp9\") pod \"nova-cell1-novncproxy-0\" (UID: \"e2ee7683-5a7e-45f0-b14b-0b5ddb382eaa\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:26:22 crc kubenswrapper[4672]: I0217 16:26:22.351128 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:26:22 crc kubenswrapper[4672]: I0217 16:26:22.875186 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:26:23 crc kubenswrapper[4672]: I0217 16:26:23.006247 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e2ee7683-5a7e-45f0-b14b-0b5ddb382eaa","Type":"ContainerStarted","Data":"0b5b427876961799e777f38cf55f9050db37fe84a94040491ee55927dc4d9db7"} Feb 17 16:26:23 crc kubenswrapper[4672]: I0217 16:26:23.165280 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2sfpl"] Feb 17 16:26:23 crc kubenswrapper[4672]: I0217 16:26:23.169056 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2sfpl" Feb 17 16:26:23 crc kubenswrapper[4672]: I0217 16:26:23.179191 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2sfpl"] Feb 17 16:26:23 crc kubenswrapper[4672]: I0217 16:26:23.235087 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bprh\" (UniqueName: \"kubernetes.io/projected/95aa771c-7c3f-40bc-a845-a5b27b7581bd-kube-api-access-4bprh\") pod \"redhat-marketplace-2sfpl\" (UID: \"95aa771c-7c3f-40bc-a845-a5b27b7581bd\") " pod="openshift-marketplace/redhat-marketplace-2sfpl" Feb 17 16:26:23 crc kubenswrapper[4672]: I0217 16:26:23.235154 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95aa771c-7c3f-40bc-a845-a5b27b7581bd-utilities\") pod \"redhat-marketplace-2sfpl\" (UID: \"95aa771c-7c3f-40bc-a845-a5b27b7581bd\") " pod="openshift-marketplace/redhat-marketplace-2sfpl" Feb 17 16:26:23 crc kubenswrapper[4672]: I0217 16:26:23.235317 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95aa771c-7c3f-40bc-a845-a5b27b7581bd-catalog-content\") pod \"redhat-marketplace-2sfpl\" (UID: \"95aa771c-7c3f-40bc-a845-a5b27b7581bd\") " pod="openshift-marketplace/redhat-marketplace-2sfpl" Feb 17 16:26:23 crc kubenswrapper[4672]: I0217 16:26:23.338636 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95aa771c-7c3f-40bc-a845-a5b27b7581bd-catalog-content\") pod \"redhat-marketplace-2sfpl\" (UID: \"95aa771c-7c3f-40bc-a845-a5b27b7581bd\") " pod="openshift-marketplace/redhat-marketplace-2sfpl" Feb 17 16:26:23 crc kubenswrapper[4672]: I0217 16:26:23.338794 4672 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-4bprh\" (UniqueName: \"kubernetes.io/projected/95aa771c-7c3f-40bc-a845-a5b27b7581bd-kube-api-access-4bprh\") pod \"redhat-marketplace-2sfpl\" (UID: \"95aa771c-7c3f-40bc-a845-a5b27b7581bd\") " pod="openshift-marketplace/redhat-marketplace-2sfpl" Feb 17 16:26:23 crc kubenswrapper[4672]: I0217 16:26:23.338812 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95aa771c-7c3f-40bc-a845-a5b27b7581bd-utilities\") pod \"redhat-marketplace-2sfpl\" (UID: \"95aa771c-7c3f-40bc-a845-a5b27b7581bd\") " pod="openshift-marketplace/redhat-marketplace-2sfpl" Feb 17 16:26:23 crc kubenswrapper[4672]: I0217 16:26:23.339545 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95aa771c-7c3f-40bc-a845-a5b27b7581bd-catalog-content\") pod \"redhat-marketplace-2sfpl\" (UID: \"95aa771c-7c3f-40bc-a845-a5b27b7581bd\") " pod="openshift-marketplace/redhat-marketplace-2sfpl" Feb 17 16:26:23 crc kubenswrapper[4672]: I0217 16:26:23.339624 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95aa771c-7c3f-40bc-a845-a5b27b7581bd-utilities\") pod \"redhat-marketplace-2sfpl\" (UID: \"95aa771c-7c3f-40bc-a845-a5b27b7581bd\") " pod="openshift-marketplace/redhat-marketplace-2sfpl" Feb 17 16:26:23 crc kubenswrapper[4672]: I0217 16:26:23.367686 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bprh\" (UniqueName: \"kubernetes.io/projected/95aa771c-7c3f-40bc-a845-a5b27b7581bd-kube-api-access-4bprh\") pod \"redhat-marketplace-2sfpl\" (UID: \"95aa771c-7c3f-40bc-a845-a5b27b7581bd\") " pod="openshift-marketplace/redhat-marketplace-2sfpl" Feb 17 16:26:23 crc kubenswrapper[4672]: I0217 16:26:23.494004 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2sfpl" Feb 17 16:26:23 crc kubenswrapper[4672]: E0217 16:26:23.757799 4672 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod605a2fdf_d09e_41ce_b8e5_32f5f1d3301d.slice/crio-1ac811730a11539a00f9396feb7b0d67dea69bb41dbdf002bdf98f91613a160e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81df8d25_8781_4cf7_ad12_ce90fb01aa1e.slice/crio-4bf47ccec39e63a8aa7b9b0365cf3fbacc9ec8137987972ea2594327c683378c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc5ff4ff_66c9_4b35_8cab_96245e66ccb2.slice/crio-conmon-9b48a1f51ca630c15c2efe92c9d8deec148f33df95627852b56a22c3c0b60531.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod605a2fdf_d09e_41ce_b8e5_32f5f1d3301d.slice/crio-conmon-1ac811730a11539a00f9396feb7b0d67dea69bb41dbdf002bdf98f91613a160e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod605a2fdf_d09e_41ce_b8e5_32f5f1d3301d.slice/crio-b92174dda8b6831ea822668f1e208a0b2d781b6b8baf99ad287015f3ca44a674.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81df8d25_8781_4cf7_ad12_ce90fb01aa1e.slice/crio-conmon-4bf47ccec39e63a8aa7b9b0365cf3fbacc9ec8137987972ea2594327c683378c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod605a2fdf_d09e_41ce_b8e5_32f5f1d3301d.slice/crio-conmon-ce01d49db3d16ed01b65719581fc59b566f9c1cb02dc0b256250fb6f84b497cf.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod605a2fdf_d09e_41ce_b8e5_32f5f1d3301d.slice/crio-ab4ce04716181035804507713ea7b15f74830e5d98b6df0bf1ec6460e7305be2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod605a2fdf_d09e_41ce_b8e5_32f5f1d3301d.slice/crio-conmon-b92174dda8b6831ea822668f1e208a0b2d781b6b8baf99ad287015f3ca44a674.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81df8d25_8781_4cf7_ad12_ce90fb01aa1e.slice/crio-2ec260169a2347ad090462f514d67443e561b88c39e6abed482ccf2f1c7576e7\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81df8d25_8781_4cf7_ad12_ce90fb01aa1e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc5ff4ff_66c9_4b35_8cab_96245e66ccb2.slice/crio-9b48a1f51ca630c15c2efe92c9d8deec148f33df95627852b56a22c3c0b60531.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod605a2fdf_d09e_41ce_b8e5_32f5f1d3301d.slice/crio-conmon-ab4ce04716181035804507713ea7b15f74830e5d98b6df0bf1ec6460e7305be2.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:26:23 crc kubenswrapper[4672]: I0217 16:26:23.951400 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:26:23 crc kubenswrapper[4672]: I0217 16:26:23.960438 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81df8d25-8781-4cf7-ad12-ce90fb01aa1e" path="/var/lib/kubelet/pods/81df8d25-8781-4cf7-ad12-ce90fb01aa1e/volumes" Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.023532 4672 generic.go:334] "Generic (PLEG): container finished" podID="605a2fdf-d09e-41ce-b8e5-32f5f1d3301d" containerID="1ac811730a11539a00f9396feb7b0d67dea69bb41dbdf002bdf98f91613a160e" exitCode=0 Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.023629 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d","Type":"ContainerDied","Data":"1ac811730a11539a00f9396feb7b0d67dea69bb41dbdf002bdf98f91613a160e"} Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.023664 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d","Type":"ContainerDied","Data":"52ae5ffd056e91ce23e680ea8b755aa3e62dc474fbd0f21ce1dbf29a0682243a"} Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.023687 4672 scope.go:117] "RemoveContainer" containerID="ce01d49db3d16ed01b65719581fc59b566f9c1cb02dc0b256250fb6f84b497cf" Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.023858 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.027285 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e2ee7683-5a7e-45f0-b14b-0b5ddb382eaa","Type":"ContainerStarted","Data":"43a4e4c805805184b09b7ce6e49ec591bb0b25ebf5efc3b3fc150ef10dba6172"} Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.048658 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.048641505 podStartE2EDuration="2.048641505s" podCreationTimestamp="2026-02-17 16:26:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:26:24.04690679 +0000 UTC m=+1392.800995522" watchObservedRunningTime="2026-02-17 16:26:24.048641505 +0000 UTC m=+1392.802730237" Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.057167 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-ceilometer-tls-certs\") pod \"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d\" (UID: \"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d\") " Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.057264 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-log-httpd\") pod \"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d\" (UID: \"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d\") " Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.057309 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-scripts\") pod \"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d\" (UID: \"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d\") " Feb 17 16:26:24 crc 
kubenswrapper[4672]: I0217 16:26:24.057351 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrfxn\" (UniqueName: \"kubernetes.io/projected/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-kube-api-access-nrfxn\") pod \"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d\" (UID: \"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d\") "
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.057428 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-combined-ca-bundle\") pod \"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d\" (UID: \"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d\") "
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.057496 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-run-httpd\") pod \"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d\" (UID: \"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d\") "
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.057684 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-config-data\") pod \"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d\" (UID: \"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d\") "
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.057715 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-sg-core-conf-yaml\") pod \"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d\" (UID: \"605a2fdf-d09e-41ce-b8e5-32f5f1d3301d\") "
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.058828 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "605a2fdf-d09e-41ce-b8e5-32f5f1d3301d" (UID: "605a2fdf-d09e-41ce-b8e5-32f5f1d3301d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.060072 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "605a2fdf-d09e-41ce-b8e5-32f5f1d3301d" (UID: "605a2fdf-d09e-41ce-b8e5-32f5f1d3301d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.066295 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-kube-api-access-nrfxn" (OuterVolumeSpecName: "kube-api-access-nrfxn") pod "605a2fdf-d09e-41ce-b8e5-32f5f1d3301d" (UID: "605a2fdf-d09e-41ce-b8e5-32f5f1d3301d"). InnerVolumeSpecName "kube-api-access-nrfxn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.069254 4672 scope.go:117] "RemoveContainer" containerID="ab4ce04716181035804507713ea7b15f74830e5d98b6df0bf1ec6460e7305be2"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.080618 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-scripts" (OuterVolumeSpecName: "scripts") pod "605a2fdf-d09e-41ce-b8e5-32f5f1d3301d" (UID: "605a2fdf-d09e-41ce-b8e5-32f5f1d3301d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.129721 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "605a2fdf-d09e-41ce-b8e5-32f5f1d3301d" (UID: "605a2fdf-d09e-41ce-b8e5-32f5f1d3301d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.142453 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2sfpl"]
Feb 17 16:26:24 crc kubenswrapper[4672]: W0217 16:26:24.142925 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95aa771c_7c3f_40bc_a845_a5b27b7581bd.slice/crio-3ec1687df979e4aec586a6d98b617664f7b772e2ae09f1621665e8bb26677fa3 WatchSource:0}: Error finding container 3ec1687df979e4aec586a6d98b617664f7b772e2ae09f1621665e8bb26677fa3: Status 404 returned error can't find the container with id 3ec1687df979e4aec586a6d98b617664f7b772e2ae09f1621665e8bb26677fa3
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.153595 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "605a2fdf-d09e-41ce-b8e5-32f5f1d3301d" (UID: "605a2fdf-d09e-41ce-b8e5-32f5f1d3301d"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.160922 4672 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.160958 4672 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.160967 4672 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.160976 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrfxn\" (UniqueName: \"kubernetes.io/projected/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-kube-api-access-nrfxn\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.160984 4672 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.160992 4672 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.197372 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "605a2fdf-d09e-41ce-b8e5-32f5f1d3301d" (UID: "605a2fdf-d09e-41ce-b8e5-32f5f1d3301d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.218696 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-config-data" (OuterVolumeSpecName: "config-data") pod "605a2fdf-d09e-41ce-b8e5-32f5f1d3301d" (UID: "605a2fdf-d09e-41ce-b8e5-32f5f1d3301d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.262635 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.262670 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.377130 4672 scope.go:117] "RemoveContainer" containerID="b92174dda8b6831ea822668f1e208a0b2d781b6b8baf99ad287015f3ca44a674"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.427661 4672 scope.go:117] "RemoveContainer" containerID="1ac811730a11539a00f9396feb7b0d67dea69bb41dbdf002bdf98f91613a160e"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.430815 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.460131 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.466422 4672 scope.go:117] "RemoveContainer" containerID="ce01d49db3d16ed01b65719581fc59b566f9c1cb02dc0b256250fb6f84b497cf"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.467610 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:26:24 crc kubenswrapper[4672]: E0217 16:26:24.468049 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="605a2fdf-d09e-41ce-b8e5-32f5f1d3301d" containerName="ceilometer-notification-agent"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.468073 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="605a2fdf-d09e-41ce-b8e5-32f5f1d3301d" containerName="ceilometer-notification-agent"
Feb 17 16:26:24 crc kubenswrapper[4672]: E0217 16:26:24.468090 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="605a2fdf-d09e-41ce-b8e5-32f5f1d3301d" containerName="proxy-httpd"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.468096 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="605a2fdf-d09e-41ce-b8e5-32f5f1d3301d" containerName="proxy-httpd"
Feb 17 16:26:24 crc kubenswrapper[4672]: E0217 16:26:24.468121 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="605a2fdf-d09e-41ce-b8e5-32f5f1d3301d" containerName="ceilometer-central-agent"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.468127 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="605a2fdf-d09e-41ce-b8e5-32f5f1d3301d" containerName="ceilometer-central-agent"
Feb 17 16:26:24 crc kubenswrapper[4672]: E0217 16:26:24.468152 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="605a2fdf-d09e-41ce-b8e5-32f5f1d3301d" containerName="sg-core"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.468159 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="605a2fdf-d09e-41ce-b8e5-32f5f1d3301d" containerName="sg-core"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.468369 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="605a2fdf-d09e-41ce-b8e5-32f5f1d3301d" containerName="ceilometer-notification-agent"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.468384 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="605a2fdf-d09e-41ce-b8e5-32f5f1d3301d" containerName="proxy-httpd"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.468409 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="605a2fdf-d09e-41ce-b8e5-32f5f1d3301d" containerName="ceilometer-central-agent"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.468426 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="605a2fdf-d09e-41ce-b8e5-32f5f1d3301d" containerName="sg-core"
Feb 17 16:26:24 crc kubenswrapper[4672]: E0217 16:26:24.468921 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce01d49db3d16ed01b65719581fc59b566f9c1cb02dc0b256250fb6f84b497cf\": container with ID starting with ce01d49db3d16ed01b65719581fc59b566f9c1cb02dc0b256250fb6f84b497cf not found: ID does not exist" containerID="ce01d49db3d16ed01b65719581fc59b566f9c1cb02dc0b256250fb6f84b497cf"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.468959 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce01d49db3d16ed01b65719581fc59b566f9c1cb02dc0b256250fb6f84b497cf"} err="failed to get container status \"ce01d49db3d16ed01b65719581fc59b566f9c1cb02dc0b256250fb6f84b497cf\": rpc error: code = NotFound desc = could not find container \"ce01d49db3d16ed01b65719581fc59b566f9c1cb02dc0b256250fb6f84b497cf\": container with ID starting with ce01d49db3d16ed01b65719581fc59b566f9c1cb02dc0b256250fb6f84b497cf not found: ID does not exist"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.468991 4672 scope.go:117] "RemoveContainer" containerID="ab4ce04716181035804507713ea7b15f74830e5d98b6df0bf1ec6460e7305be2"
Feb 17 16:26:24 crc kubenswrapper[4672]: E0217 16:26:24.469335 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab4ce04716181035804507713ea7b15f74830e5d98b6df0bf1ec6460e7305be2\": container with ID starting with ab4ce04716181035804507713ea7b15f74830e5d98b6df0bf1ec6460e7305be2 not found: ID does not exist" containerID="ab4ce04716181035804507713ea7b15f74830e5d98b6df0bf1ec6460e7305be2"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.469361 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab4ce04716181035804507713ea7b15f74830e5d98b6df0bf1ec6460e7305be2"} err="failed to get container status \"ab4ce04716181035804507713ea7b15f74830e5d98b6df0bf1ec6460e7305be2\": rpc error: code = NotFound desc = could not find container \"ab4ce04716181035804507713ea7b15f74830e5d98b6df0bf1ec6460e7305be2\": container with ID starting with ab4ce04716181035804507713ea7b15f74830e5d98b6df0bf1ec6460e7305be2 not found: ID does not exist"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.469374 4672 scope.go:117] "RemoveContainer" containerID="b92174dda8b6831ea822668f1e208a0b2d781b6b8baf99ad287015f3ca44a674"
Feb 17 16:26:24 crc kubenswrapper[4672]: E0217 16:26:24.469559 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b92174dda8b6831ea822668f1e208a0b2d781b6b8baf99ad287015f3ca44a674\": container with ID starting with b92174dda8b6831ea822668f1e208a0b2d781b6b8baf99ad287015f3ca44a674 not found: ID does not exist" containerID="b92174dda8b6831ea822668f1e208a0b2d781b6b8baf99ad287015f3ca44a674"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.469585 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b92174dda8b6831ea822668f1e208a0b2d781b6b8baf99ad287015f3ca44a674"} err="failed to get container status \"b92174dda8b6831ea822668f1e208a0b2d781b6b8baf99ad287015f3ca44a674\": rpc error: code = NotFound desc = could not find container \"b92174dda8b6831ea822668f1e208a0b2d781b6b8baf99ad287015f3ca44a674\": container with ID starting with b92174dda8b6831ea822668f1e208a0b2d781b6b8baf99ad287015f3ca44a674 not found: ID does not exist"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.469602 4672 scope.go:117] "RemoveContainer" containerID="1ac811730a11539a00f9396feb7b0d67dea69bb41dbdf002bdf98f91613a160e"
Feb 17 16:26:24 crc kubenswrapper[4672]: E0217 16:26:24.469806 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ac811730a11539a00f9396feb7b0d67dea69bb41dbdf002bdf98f91613a160e\": container with ID starting with 1ac811730a11539a00f9396feb7b0d67dea69bb41dbdf002bdf98f91613a160e not found: ID does not exist" containerID="1ac811730a11539a00f9396feb7b0d67dea69bb41dbdf002bdf98f91613a160e"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.469827 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ac811730a11539a00f9396feb7b0d67dea69bb41dbdf002bdf98f91613a160e"} err="failed to get container status \"1ac811730a11539a00f9396feb7b0d67dea69bb41dbdf002bdf98f91613a160e\": rpc error: code = NotFound desc = could not find container \"1ac811730a11539a00f9396feb7b0d67dea69bb41dbdf002bdf98f91613a160e\": container with ID starting with 1ac811730a11539a00f9396feb7b0d67dea69bb41dbdf002bdf98f91613a160e not found: ID does not exist"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.470187 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.475043 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.475137 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.475406 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.478038 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.569724 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-config-data\") pod \"ceilometer-0\" (UID: \"89a3e9fb-fe7e-40db-9a8d-d0654e17d835\") " pod="openstack/ceilometer-0"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.569812 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"89a3e9fb-fe7e-40db-9a8d-d0654e17d835\") " pod="openstack/ceilometer-0"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.569841 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9bz4\" (UniqueName: \"kubernetes.io/projected/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-kube-api-access-b9bz4\") pod \"ceilometer-0\" (UID: \"89a3e9fb-fe7e-40db-9a8d-d0654e17d835\") " pod="openstack/ceilometer-0"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.569865 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-scripts\") pod \"ceilometer-0\" (UID: \"89a3e9fb-fe7e-40db-9a8d-d0654e17d835\") " pod="openstack/ceilometer-0"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.569959 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"89a3e9fb-fe7e-40db-9a8d-d0654e17d835\") " pod="openstack/ceilometer-0"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.570009 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-log-httpd\") pod \"ceilometer-0\" (UID: \"89a3e9fb-fe7e-40db-9a8d-d0654e17d835\") " pod="openstack/ceilometer-0"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.570051 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"89a3e9fb-fe7e-40db-9a8d-d0654e17d835\") " pod="openstack/ceilometer-0"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.570192 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-run-httpd\") pod \"ceilometer-0\" (UID: \"89a3e9fb-fe7e-40db-9a8d-d0654e17d835\") " pod="openstack/ceilometer-0"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.642422 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.672031 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"89a3e9fb-fe7e-40db-9a8d-d0654e17d835\") " pod="openstack/ceilometer-0"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.672214 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-log-httpd\") pod \"ceilometer-0\" (UID: \"89a3e9fb-fe7e-40db-9a8d-d0654e17d835\") " pod="openstack/ceilometer-0"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.672351 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"89a3e9fb-fe7e-40db-9a8d-d0654e17d835\") " pod="openstack/ceilometer-0"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.672397 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-run-httpd\") pod \"ceilometer-0\" (UID: \"89a3e9fb-fe7e-40db-9a8d-d0654e17d835\") " pod="openstack/ceilometer-0"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.672461 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-config-data\") pod \"ceilometer-0\" (UID: \"89a3e9fb-fe7e-40db-9a8d-d0654e17d835\") " pod="openstack/ceilometer-0"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.672768 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"89a3e9fb-fe7e-40db-9a8d-d0654e17d835\") " pod="openstack/ceilometer-0"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.672852 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9bz4\" (UniqueName: \"kubernetes.io/projected/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-kube-api-access-b9bz4\") pod \"ceilometer-0\" (UID: \"89a3e9fb-fe7e-40db-9a8d-d0654e17d835\") " pod="openstack/ceilometer-0"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.672908 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-scripts\") pod \"ceilometer-0\" (UID: \"89a3e9fb-fe7e-40db-9a8d-d0654e17d835\") " pod="openstack/ceilometer-0"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.673107 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-log-httpd\") pod \"ceilometer-0\" (UID: \"89a3e9fb-fe7e-40db-9a8d-d0654e17d835\") " pod="openstack/ceilometer-0"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.677429 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-run-httpd\") pod \"ceilometer-0\" (UID: \"89a3e9fb-fe7e-40db-9a8d-d0654e17d835\") " pod="openstack/ceilometer-0"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.681816 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"89a3e9fb-fe7e-40db-9a8d-d0654e17d835\") " pod="openstack/ceilometer-0"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.682658 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"89a3e9fb-fe7e-40db-9a8d-d0654e17d835\") " pod="openstack/ceilometer-0"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.689336 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"89a3e9fb-fe7e-40db-9a8d-d0654e17d835\") " pod="openstack/ceilometer-0"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.695641 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-scripts\") pod \"ceilometer-0\" (UID: \"89a3e9fb-fe7e-40db-9a8d-d0654e17d835\") " pod="openstack/ceilometer-0"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.708423 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9bz4\" (UniqueName: \"kubernetes.io/projected/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-kube-api-access-b9bz4\") pod \"ceilometer-0\" (UID: \"89a3e9fb-fe7e-40db-9a8d-d0654e17d835\") " pod="openstack/ceilometer-0"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.716526 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-config-data\") pod \"ceilometer-0\" (UID: \"89a3e9fb-fe7e-40db-9a8d-d0654e17d835\") " pod="openstack/ceilometer-0"
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.774086 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc5ff4ff-66c9-4b35-8cab-96245e66ccb2-logs\") pod \"cc5ff4ff-66c9-4b35-8cab-96245e66ccb2\" (UID: \"cc5ff4ff-66c9-4b35-8cab-96245e66ccb2\") "
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.774188 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc5ff4ff-66c9-4b35-8cab-96245e66ccb2-combined-ca-bundle\") pod \"cc5ff4ff-66c9-4b35-8cab-96245e66ccb2\" (UID: \"cc5ff4ff-66c9-4b35-8cab-96245e66ccb2\") "
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.774306 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc5ff4ff-66c9-4b35-8cab-96245e66ccb2-config-data\") pod \"cc5ff4ff-66c9-4b35-8cab-96245e66ccb2\" (UID: \"cc5ff4ff-66c9-4b35-8cab-96245e66ccb2\") "
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.774524 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57hzq\" (UniqueName: \"kubernetes.io/projected/cc5ff4ff-66c9-4b35-8cab-96245e66ccb2-kube-api-access-57hzq\") pod \"cc5ff4ff-66c9-4b35-8cab-96245e66ccb2\" (UID: \"cc5ff4ff-66c9-4b35-8cab-96245e66ccb2\") "
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.774664 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc5ff4ff-66c9-4b35-8cab-96245e66ccb2-logs" (OuterVolumeSpecName: "logs") pod "cc5ff4ff-66c9-4b35-8cab-96245e66ccb2" (UID: "cc5ff4ff-66c9-4b35-8cab-96245e66ccb2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.775132 4672 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc5ff4ff-66c9-4b35-8cab-96245e66ccb2-logs\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.778240 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc5ff4ff-66c9-4b35-8cab-96245e66ccb2-kube-api-access-57hzq" (OuterVolumeSpecName: "kube-api-access-57hzq") pod "cc5ff4ff-66c9-4b35-8cab-96245e66ccb2" (UID: "cc5ff4ff-66c9-4b35-8cab-96245e66ccb2"). InnerVolumeSpecName "kube-api-access-57hzq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.799399 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 16:26:24 crc kubenswrapper[4672]: E0217 16:26:24.799879 4672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc5ff4ff-66c9-4b35-8cab-96245e66ccb2-combined-ca-bundle podName:cc5ff4ff-66c9-4b35-8cab-96245e66ccb2 nodeName:}" failed. No retries permitted until 2026-02-17 16:26:25.29985067 +0000 UTC m=+1394.053939402 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/cc5ff4ff-66c9-4b35-8cab-96245e66ccb2-combined-ca-bundle") pod "cc5ff4ff-66c9-4b35-8cab-96245e66ccb2" (UID: "cc5ff4ff-66c9-4b35-8cab-96245e66ccb2") : error deleting /var/lib/kubelet/pods/cc5ff4ff-66c9-4b35-8cab-96245e66ccb2/volume-subpaths: remove /var/lib/kubelet/pods/cc5ff4ff-66c9-4b35-8cab-96245e66ccb2/volume-subpaths: no such file or directory
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.802912 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc5ff4ff-66c9-4b35-8cab-96245e66ccb2-config-data" (OuterVolumeSpecName: "config-data") pod "cc5ff4ff-66c9-4b35-8cab-96245e66ccb2" (UID: "cc5ff4ff-66c9-4b35-8cab-96245e66ccb2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.877637 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57hzq\" (UniqueName: \"kubernetes.io/projected/cc5ff4ff-66c9-4b35-8cab-96245e66ccb2-kube-api-access-57hzq\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:24 crc kubenswrapper[4672]: I0217 16:26:24.877683 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc5ff4ff-66c9-4b35-8cab-96245e66ccb2-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:25 crc kubenswrapper[4672]: I0217 16:26:25.040242 4672 generic.go:334] "Generic (PLEG): container finished" podID="95aa771c-7c3f-40bc-a845-a5b27b7581bd" containerID="e2738aaf5c272aab01bca9dec941a1d69859d9cf18197ac79108e05098f3db1b" exitCode=0
Feb 17 16:26:25 crc kubenswrapper[4672]: I0217 16:26:25.040352 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2sfpl" event={"ID":"95aa771c-7c3f-40bc-a845-a5b27b7581bd","Type":"ContainerDied","Data":"e2738aaf5c272aab01bca9dec941a1d69859d9cf18197ac79108e05098f3db1b"}
Feb 17 16:26:25 crc kubenswrapper[4672]: I0217 16:26:25.040525 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2sfpl" event={"ID":"95aa771c-7c3f-40bc-a845-a5b27b7581bd","Type":"ContainerStarted","Data":"3ec1687df979e4aec586a6d98b617664f7b772e2ae09f1621665e8bb26677fa3"}
Feb 17 16:26:25 crc kubenswrapper[4672]: I0217 16:26:25.043943 4672 generic.go:334] "Generic (PLEG): container finished" podID="cc5ff4ff-66c9-4b35-8cab-96245e66ccb2" containerID="81c2e8ed828d72c1dbe0ccc858c831ab2bd5e6c5dcb9d5a1918cd609dd88bf18" exitCode=0
Feb 17 16:26:25 crc kubenswrapper[4672]: I0217 16:26:25.044067 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cc5ff4ff-66c9-4b35-8cab-96245e66ccb2","Type":"ContainerDied","Data":"81c2e8ed828d72c1dbe0ccc858c831ab2bd5e6c5dcb9d5a1918cd609dd88bf18"}
Feb 17 16:26:25 crc kubenswrapper[4672]: I0217 16:26:25.044108 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cc5ff4ff-66c9-4b35-8cab-96245e66ccb2","Type":"ContainerDied","Data":"036b8ecab1503d84d6836f27f2f9297a93e2073a281b82789eab137778c1f7e6"}
Feb 17 16:26:25 crc kubenswrapper[4672]: I0217 16:26:25.044124 4672 scope.go:117] "RemoveContainer" containerID="81c2e8ed828d72c1dbe0ccc858c831ab2bd5e6c5dcb9d5a1918cd609dd88bf18"
Feb 17 16:26:25 crc kubenswrapper[4672]: I0217 16:26:25.044237 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 17 16:26:25 crc kubenswrapper[4672]: I0217 16:26:25.071211 4672 scope.go:117] "RemoveContainer" containerID="9b48a1f51ca630c15c2efe92c9d8deec148f33df95627852b56a22c3c0b60531"
Feb 17 16:26:25 crc kubenswrapper[4672]: I0217 16:26:25.148201 4672 scope.go:117] "RemoveContainer" containerID="81c2e8ed828d72c1dbe0ccc858c831ab2bd5e6c5dcb9d5a1918cd609dd88bf18"
Feb 17 16:26:25 crc kubenswrapper[4672]: E0217 16:26:25.161881 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81c2e8ed828d72c1dbe0ccc858c831ab2bd5e6c5dcb9d5a1918cd609dd88bf18\": container with ID starting with 81c2e8ed828d72c1dbe0ccc858c831ab2bd5e6c5dcb9d5a1918cd609dd88bf18 not found: ID does not exist" containerID="81c2e8ed828d72c1dbe0ccc858c831ab2bd5e6c5dcb9d5a1918cd609dd88bf18"
Feb 17 16:26:25 crc kubenswrapper[4672]: I0217 16:26:25.161942 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81c2e8ed828d72c1dbe0ccc858c831ab2bd5e6c5dcb9d5a1918cd609dd88bf18"} err="failed to get container status \"81c2e8ed828d72c1dbe0ccc858c831ab2bd5e6c5dcb9d5a1918cd609dd88bf18\": rpc error: code = NotFound desc = could not find container \"81c2e8ed828d72c1dbe0ccc858c831ab2bd5e6c5dcb9d5a1918cd609dd88bf18\": container with ID starting with 81c2e8ed828d72c1dbe0ccc858c831ab2bd5e6c5dcb9d5a1918cd609dd88bf18 not found: ID does not exist"
Feb 17 16:26:25 crc kubenswrapper[4672]: I0217 16:26:25.161974 4672 scope.go:117] "RemoveContainer" containerID="9b48a1f51ca630c15c2efe92c9d8deec148f33df95627852b56a22c3c0b60531"
Feb 17 16:26:25 crc kubenswrapper[4672]: E0217 16:26:25.162495 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b48a1f51ca630c15c2efe92c9d8deec148f33df95627852b56a22c3c0b60531\": container with ID starting with 9b48a1f51ca630c15c2efe92c9d8deec148f33df95627852b56a22c3c0b60531 not found: ID does not exist" containerID="9b48a1f51ca630c15c2efe92c9d8deec148f33df95627852b56a22c3c0b60531"
Feb 17 16:26:25 crc kubenswrapper[4672]: I0217 16:26:25.162534 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b48a1f51ca630c15c2efe92c9d8deec148f33df95627852b56a22c3c0b60531"} err="failed to get container status \"9b48a1f51ca630c15c2efe92c9d8deec148f33df95627852b56a22c3c0b60531\": rpc error: code = NotFound desc = could not find container \"9b48a1f51ca630c15c2efe92c9d8deec148f33df95627852b56a22c3c0b60531\": container with ID starting with 9b48a1f51ca630c15c2efe92c9d8deec148f33df95627852b56a22c3c0b60531 not found: ID does not exist"
Feb 17 16:26:25 crc kubenswrapper[4672]: I0217 16:26:25.285971 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:26:25 crc kubenswrapper[4672]: I0217 16:26:25.390575 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc5ff4ff-66c9-4b35-8cab-96245e66ccb2-combined-ca-bundle\") pod \"cc5ff4ff-66c9-4b35-8cab-96245e66ccb2\" (UID: \"cc5ff4ff-66c9-4b35-8cab-96245e66ccb2\") "
Feb 17 16:26:25 crc kubenswrapper[4672]: I0217 16:26:25.395579 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc5ff4ff-66c9-4b35-8cab-96245e66ccb2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc5ff4ff-66c9-4b35-8cab-96245e66ccb2" (UID: "cc5ff4ff-66c9-4b35-8cab-96245e66ccb2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:26:25 crc kubenswrapper[4672]: I0217 16:26:25.492755 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc5ff4ff-66c9-4b35-8cab-96245e66ccb2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:25 crc kubenswrapper[4672]: I0217 16:26:25.813771 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 17 16:26:25 crc kubenswrapper[4672]: I0217 16:26:25.826739 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 17 16:26:25 crc kubenswrapper[4672]: I0217 16:26:25.839691 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 17 16:26:25 crc kubenswrapper[4672]: E0217 16:26:25.841081 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc5ff4ff-66c9-4b35-8cab-96245e66ccb2" containerName="nova-api-log"
Feb 17 16:26:25 crc kubenswrapper[4672]: I0217 16:26:25.841101 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc5ff4ff-66c9-4b35-8cab-96245e66ccb2" containerName="nova-api-log"
Feb 17 16:26:25 crc kubenswrapper[4672]: E0217 16:26:25.841145 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc5ff4ff-66c9-4b35-8cab-96245e66ccb2" containerName="nova-api-api"
Feb 17 16:26:25 crc kubenswrapper[4672]: I0217 16:26:25.841152 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc5ff4ff-66c9-4b35-8cab-96245e66ccb2" containerName="nova-api-api"
Feb 17 16:26:25 crc kubenswrapper[4672]: I0217 16:26:25.841335 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc5ff4ff-66c9-4b35-8cab-96245e66ccb2" containerName="nova-api-log"
Feb 17 16:26:25 crc kubenswrapper[4672]: I0217 16:26:25.841361 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc5ff4ff-66c9-4b35-8cab-96245e66ccb2" containerName="nova-api-api"
Feb 17 16:26:25 crc kubenswrapper[4672]: I0217 16:26:25.842439 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 17 16:26:25 crc kubenswrapper[4672]: I0217 16:26:25.845641 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 17 16:26:25 crc kubenswrapper[4672]: I0217 16:26:25.845835 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 17 16:26:25 crc kubenswrapper[4672]: I0217 16:26:25.845946 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 17 16:26:25 crc kubenswrapper[4672]: I0217 16:26:25.866137 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 17 16:26:25 crc kubenswrapper[4672]: I0217 16:26:25.934186 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec43dce1-ee53-491f-91d7-8aa70d776a67-config-data\") pod \"nova-api-0\" (UID: \"ec43dce1-ee53-491f-91d7-8aa70d776a67\") " pod="openstack/nova-api-0"
Feb 17 16:26:25 crc kubenswrapper[4672]: I0217 16:26:25.934289 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec43dce1-ee53-491f-91d7-8aa70d776a67-logs\") pod \"nova-api-0\" (UID: \"ec43dce1-ee53-491f-91d7-8aa70d776a67\") " pod="openstack/nova-api-0"
Feb 17 16:26:25 crc kubenswrapper[4672]: I0217 16:26:25.934323 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5gp8\" (UniqueName: \"kubernetes.io/projected/ec43dce1-ee53-491f-91d7-8aa70d776a67-kube-api-access-r5gp8\") pod \"nova-api-0\" (UID: \"ec43dce1-ee53-491f-91d7-8aa70d776a67\") " pod="openstack/nova-api-0"
Feb 17 16:26:25 crc kubenswrapper[4672]: I0217 16:26:25.934406 4672 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec43dce1-ee53-491f-91d7-8aa70d776a67-public-tls-certs\") pod \"nova-api-0\" (UID: \"ec43dce1-ee53-491f-91d7-8aa70d776a67\") " pod="openstack/nova-api-0" Feb 17 16:26:25 crc kubenswrapper[4672]: I0217 16:26:25.934450 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec43dce1-ee53-491f-91d7-8aa70d776a67-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ec43dce1-ee53-491f-91d7-8aa70d776a67\") " pod="openstack/nova-api-0" Feb 17 16:26:25 crc kubenswrapper[4672]: I0217 16:26:25.934592 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec43dce1-ee53-491f-91d7-8aa70d776a67-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ec43dce1-ee53-491f-91d7-8aa70d776a67\") " pod="openstack/nova-api-0" Feb 17 16:26:25 crc kubenswrapper[4672]: I0217 16:26:25.963220 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="605a2fdf-d09e-41ce-b8e5-32f5f1d3301d" path="/var/lib/kubelet/pods/605a2fdf-d09e-41ce-b8e5-32f5f1d3301d/volumes" Feb 17 16:26:25 crc kubenswrapper[4672]: I0217 16:26:25.964351 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc5ff4ff-66c9-4b35-8cab-96245e66ccb2" path="/var/lib/kubelet/pods/cc5ff4ff-66c9-4b35-8cab-96245e66ccb2/volumes" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.023437 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.036864 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec43dce1-ee53-491f-91d7-8aa70d776a67-config-data\") pod \"nova-api-0\" (UID: \"ec43dce1-ee53-491f-91d7-8aa70d776a67\") " pod="openstack/nova-api-0" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.036903 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec43dce1-ee53-491f-91d7-8aa70d776a67-logs\") pod \"nova-api-0\" (UID: \"ec43dce1-ee53-491f-91d7-8aa70d776a67\") " pod="openstack/nova-api-0" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.036926 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5gp8\" (UniqueName: \"kubernetes.io/projected/ec43dce1-ee53-491f-91d7-8aa70d776a67-kube-api-access-r5gp8\") pod \"nova-api-0\" (UID: \"ec43dce1-ee53-491f-91d7-8aa70d776a67\") " pod="openstack/nova-api-0" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.036978 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec43dce1-ee53-491f-91d7-8aa70d776a67-public-tls-certs\") pod \"nova-api-0\" (UID: \"ec43dce1-ee53-491f-91d7-8aa70d776a67\") " pod="openstack/nova-api-0" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.037008 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec43dce1-ee53-491f-91d7-8aa70d776a67-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ec43dce1-ee53-491f-91d7-8aa70d776a67\") " pod="openstack/nova-api-0" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.037083 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ec43dce1-ee53-491f-91d7-8aa70d776a67-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ec43dce1-ee53-491f-91d7-8aa70d776a67\") " pod="openstack/nova-api-0" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.037305 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec43dce1-ee53-491f-91d7-8aa70d776a67-logs\") pod \"nova-api-0\" (UID: \"ec43dce1-ee53-491f-91d7-8aa70d776a67\") " pod="openstack/nova-api-0" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.047404 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec43dce1-ee53-491f-91d7-8aa70d776a67-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ec43dce1-ee53-491f-91d7-8aa70d776a67\") " pod="openstack/nova-api-0" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.051037 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec43dce1-ee53-491f-91d7-8aa70d776a67-public-tls-certs\") pod \"nova-api-0\" (UID: \"ec43dce1-ee53-491f-91d7-8aa70d776a67\") " pod="openstack/nova-api-0" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.051669 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec43dce1-ee53-491f-91d7-8aa70d776a67-config-data\") pod \"nova-api-0\" (UID: \"ec43dce1-ee53-491f-91d7-8aa70d776a67\") " pod="openstack/nova-api-0" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.053102 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec43dce1-ee53-491f-91d7-8aa70d776a67-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ec43dce1-ee53-491f-91d7-8aa70d776a67\") " pod="openstack/nova-api-0" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.061940 4672 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-r5gp8\" (UniqueName: \"kubernetes.io/projected/ec43dce1-ee53-491f-91d7-8aa70d776a67-kube-api-access-r5gp8\") pod \"nova-api-0\" (UID: \"ec43dce1-ee53-491f-91d7-8aa70d776a67\") " pod="openstack/nova-api-0" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.070763 4672 generic.go:334] "Generic (PLEG): container finished" podID="845c0587-4941-4756-b924-2d6078264b2c" containerID="60dc8e697466a61baf65f124c8713bd32875abe95bc1256b86d56080912285ad" exitCode=137 Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.071092 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.071089 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"845c0587-4941-4756-b924-2d6078264b2c","Type":"ContainerDied","Data":"60dc8e697466a61baf65f124c8713bd32875abe95bc1256b86d56080912285ad"} Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.071690 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"845c0587-4941-4756-b924-2d6078264b2c","Type":"ContainerDied","Data":"b1c86a4ce16877115c2fe559db64ceb26f88a7556a025d21a2537948ef96af02"} Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.071714 4672 scope.go:117] "RemoveContainer" containerID="60dc8e697466a61baf65f124c8713bd32875abe95bc1256b86d56080912285ad" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.077390 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89a3e9fb-fe7e-40db-9a8d-d0654e17d835","Type":"ContainerStarted","Data":"c1d29ee5922e7df14b712649cec64d858d8e7f7322d72afdd3eb3afaa56fbeee"} Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.077441 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"89a3e9fb-fe7e-40db-9a8d-d0654e17d835","Type":"ContainerStarted","Data":"64ef1a3f627108a8ead8cee18ac936b54e9c31a0811e43805c2f77a73b6caf7b"} Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.080287 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2sfpl" event={"ID":"95aa771c-7c3f-40bc-a845-a5b27b7581bd","Type":"ContainerStarted","Data":"ff8a06fac4c5e081c971bd108e90499d82ce17af0935f568371e3280ae2a710a"} Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.138045 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2mxc\" (UniqueName: \"kubernetes.io/projected/845c0587-4941-4756-b924-2d6078264b2c-kube-api-access-s2mxc\") pod \"845c0587-4941-4756-b924-2d6078264b2c\" (UID: \"845c0587-4941-4756-b924-2d6078264b2c\") " Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.138154 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/845c0587-4941-4756-b924-2d6078264b2c-combined-ca-bundle\") pod \"845c0587-4941-4756-b924-2d6078264b2c\" (UID: \"845c0587-4941-4756-b924-2d6078264b2c\") " Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.138252 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/845c0587-4941-4756-b924-2d6078264b2c-logs\") pod \"845c0587-4941-4756-b924-2d6078264b2c\" (UID: \"845c0587-4941-4756-b924-2d6078264b2c\") " Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.138271 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/845c0587-4941-4756-b924-2d6078264b2c-nova-metadata-tls-certs\") pod \"845c0587-4941-4756-b924-2d6078264b2c\" (UID: \"845c0587-4941-4756-b924-2d6078264b2c\") " Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.138302 4672 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/845c0587-4941-4756-b924-2d6078264b2c-config-data\") pod \"845c0587-4941-4756-b924-2d6078264b2c\" (UID: \"845c0587-4941-4756-b924-2d6078264b2c\") " Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.140077 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/845c0587-4941-4756-b924-2d6078264b2c-logs" (OuterVolumeSpecName: "logs") pod "845c0587-4941-4756-b924-2d6078264b2c" (UID: "845c0587-4941-4756-b924-2d6078264b2c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.145336 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/845c0587-4941-4756-b924-2d6078264b2c-kube-api-access-s2mxc" (OuterVolumeSpecName: "kube-api-access-s2mxc") pod "845c0587-4941-4756-b924-2d6078264b2c" (UID: "845c0587-4941-4756-b924-2d6078264b2c"). InnerVolumeSpecName "kube-api-access-s2mxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.169049 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/845c0587-4941-4756-b924-2d6078264b2c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "845c0587-4941-4756-b924-2d6078264b2c" (UID: "845c0587-4941-4756-b924-2d6078264b2c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.175604 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.186739 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/845c0587-4941-4756-b924-2d6078264b2c-config-data" (OuterVolumeSpecName: "config-data") pod "845c0587-4941-4756-b924-2d6078264b2c" (UID: "845c0587-4941-4756-b924-2d6078264b2c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.199444 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/845c0587-4941-4756-b924-2d6078264b2c-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "845c0587-4941-4756-b924-2d6078264b2c" (UID: "845c0587-4941-4756-b924-2d6078264b2c"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.240699 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/845c0587-4941-4756-b924-2d6078264b2c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.240859 4672 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/845c0587-4941-4756-b924-2d6078264b2c-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.241154 4672 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/845c0587-4941-4756-b924-2d6078264b2c-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.241253 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/845c0587-4941-4756-b924-2d6078264b2c-config-data\") on node \"crc\" DevicePath 
\"\"" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.241311 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2mxc\" (UniqueName: \"kubernetes.io/projected/845c0587-4941-4756-b924-2d6078264b2c-kube-api-access-s2mxc\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.302381 4672 scope.go:117] "RemoveContainer" containerID="40744341cf5eab09c169fde62f25f1fb9d702a47737ca68d9e17ceaf32818e79" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.321705 4672 scope.go:117] "RemoveContainer" containerID="60dc8e697466a61baf65f124c8713bd32875abe95bc1256b86d56080912285ad" Feb 17 16:26:26 crc kubenswrapper[4672]: E0217 16:26:26.324478 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60dc8e697466a61baf65f124c8713bd32875abe95bc1256b86d56080912285ad\": container with ID starting with 60dc8e697466a61baf65f124c8713bd32875abe95bc1256b86d56080912285ad not found: ID does not exist" containerID="60dc8e697466a61baf65f124c8713bd32875abe95bc1256b86d56080912285ad" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.324547 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60dc8e697466a61baf65f124c8713bd32875abe95bc1256b86d56080912285ad"} err="failed to get container status \"60dc8e697466a61baf65f124c8713bd32875abe95bc1256b86d56080912285ad\": rpc error: code = NotFound desc = could not find container \"60dc8e697466a61baf65f124c8713bd32875abe95bc1256b86d56080912285ad\": container with ID starting with 60dc8e697466a61baf65f124c8713bd32875abe95bc1256b86d56080912285ad not found: ID does not exist" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.324576 4672 scope.go:117] "RemoveContainer" containerID="40744341cf5eab09c169fde62f25f1fb9d702a47737ca68d9e17ceaf32818e79" Feb 17 16:26:26 crc kubenswrapper[4672]: E0217 16:26:26.327575 4672 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"40744341cf5eab09c169fde62f25f1fb9d702a47737ca68d9e17ceaf32818e79\": container with ID starting with 40744341cf5eab09c169fde62f25f1fb9d702a47737ca68d9e17ceaf32818e79 not found: ID does not exist" containerID="40744341cf5eab09c169fde62f25f1fb9d702a47737ca68d9e17ceaf32818e79" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.327614 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40744341cf5eab09c169fde62f25f1fb9d702a47737ca68d9e17ceaf32818e79"} err="failed to get container status \"40744341cf5eab09c169fde62f25f1fb9d702a47737ca68d9e17ceaf32818e79\": rpc error: code = NotFound desc = could not find container \"40744341cf5eab09c169fde62f25f1fb9d702a47737ca68d9e17ceaf32818e79\": container with ID starting with 40744341cf5eab09c169fde62f25f1fb9d702a47737ca68d9e17ceaf32818e79 not found: ID does not exist" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.412428 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.443645 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.453845 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:26:26 crc kubenswrapper[4672]: E0217 16:26:26.454306 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="845c0587-4941-4756-b924-2d6078264b2c" containerName="nova-metadata-log" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.454318 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="845c0587-4941-4756-b924-2d6078264b2c" containerName="nova-metadata-log" Feb 17 16:26:26 crc kubenswrapper[4672]: E0217 16:26:26.454336 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="845c0587-4941-4756-b924-2d6078264b2c" 
containerName="nova-metadata-metadata" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.454342 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="845c0587-4941-4756-b924-2d6078264b2c" containerName="nova-metadata-metadata" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.454558 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="845c0587-4941-4756-b924-2d6078264b2c" containerName="nova-metadata-metadata" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.454582 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="845c0587-4941-4756-b924-2d6078264b2c" containerName="nova-metadata-log" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.456081 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.461636 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.461853 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.464935 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.552969 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d54e3c9d-9a10-46ee-96e1-5c270ef2197d-logs\") pod \"nova-metadata-0\" (UID: \"d54e3c9d-9a10-46ee-96e1-5c270ef2197d\") " pod="openstack/nova-metadata-0" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.553022 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d8m4\" (UniqueName: \"kubernetes.io/projected/d54e3c9d-9a10-46ee-96e1-5c270ef2197d-kube-api-access-2d8m4\") pod \"nova-metadata-0\" 
(UID: \"d54e3c9d-9a10-46ee-96e1-5c270ef2197d\") " pod="openstack/nova-metadata-0" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.553048 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d54e3c9d-9a10-46ee-96e1-5c270ef2197d-config-data\") pod \"nova-metadata-0\" (UID: \"d54e3c9d-9a10-46ee-96e1-5c270ef2197d\") " pod="openstack/nova-metadata-0" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.553077 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d54e3c9d-9a10-46ee-96e1-5c270ef2197d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d54e3c9d-9a10-46ee-96e1-5c270ef2197d\") " pod="openstack/nova-metadata-0" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.553134 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d54e3c9d-9a10-46ee-96e1-5c270ef2197d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d54e3c9d-9a10-46ee-96e1-5c270ef2197d\") " pod="openstack/nova-metadata-0" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.559081 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rppp4"] Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.562457 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rppp4" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.575597 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rppp4"] Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.655008 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cd3485d-3d10-4f38-bf4c-05368c7cd881-utilities\") pod \"redhat-operators-rppp4\" (UID: \"2cd3485d-3d10-4f38-bf4c-05368c7cd881\") " pod="openshift-marketplace/redhat-operators-rppp4" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.655120 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqfwr\" (UniqueName: \"kubernetes.io/projected/2cd3485d-3d10-4f38-bf4c-05368c7cd881-kube-api-access-fqfwr\") pod \"redhat-operators-rppp4\" (UID: \"2cd3485d-3d10-4f38-bf4c-05368c7cd881\") " pod="openshift-marketplace/redhat-operators-rppp4" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.655198 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cd3485d-3d10-4f38-bf4c-05368c7cd881-catalog-content\") pod \"redhat-operators-rppp4\" (UID: \"2cd3485d-3d10-4f38-bf4c-05368c7cd881\") " pod="openshift-marketplace/redhat-operators-rppp4" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.655291 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d54e3c9d-9a10-46ee-96e1-5c270ef2197d-logs\") pod \"nova-metadata-0\" (UID: \"d54e3c9d-9a10-46ee-96e1-5c270ef2197d\") " pod="openstack/nova-metadata-0" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.655373 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d8m4\" 
(UniqueName: \"kubernetes.io/projected/d54e3c9d-9a10-46ee-96e1-5c270ef2197d-kube-api-access-2d8m4\") pod \"nova-metadata-0\" (UID: \"d54e3c9d-9a10-46ee-96e1-5c270ef2197d\") " pod="openstack/nova-metadata-0" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.655405 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d54e3c9d-9a10-46ee-96e1-5c270ef2197d-config-data\") pod \"nova-metadata-0\" (UID: \"d54e3c9d-9a10-46ee-96e1-5c270ef2197d\") " pod="openstack/nova-metadata-0" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.655443 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d54e3c9d-9a10-46ee-96e1-5c270ef2197d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d54e3c9d-9a10-46ee-96e1-5c270ef2197d\") " pod="openstack/nova-metadata-0" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.655580 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d54e3c9d-9a10-46ee-96e1-5c270ef2197d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d54e3c9d-9a10-46ee-96e1-5c270ef2197d\") " pod="openstack/nova-metadata-0" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.656792 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d54e3c9d-9a10-46ee-96e1-5c270ef2197d-logs\") pod \"nova-metadata-0\" (UID: \"d54e3c9d-9a10-46ee-96e1-5c270ef2197d\") " pod="openstack/nova-metadata-0" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.661071 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d54e3c9d-9a10-46ee-96e1-5c270ef2197d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d54e3c9d-9a10-46ee-96e1-5c270ef2197d\") " 
pod="openstack/nova-metadata-0" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.663621 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d54e3c9d-9a10-46ee-96e1-5c270ef2197d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d54e3c9d-9a10-46ee-96e1-5c270ef2197d\") " pod="openstack/nova-metadata-0" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.663836 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d54e3c9d-9a10-46ee-96e1-5c270ef2197d-config-data\") pod \"nova-metadata-0\" (UID: \"d54e3c9d-9a10-46ee-96e1-5c270ef2197d\") " pod="openstack/nova-metadata-0" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.675196 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d8m4\" (UniqueName: \"kubernetes.io/projected/d54e3c9d-9a10-46ee-96e1-5c270ef2197d-kube-api-access-2d8m4\") pod \"nova-metadata-0\" (UID: \"d54e3c9d-9a10-46ee-96e1-5c270ef2197d\") " pod="openstack/nova-metadata-0" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.758763 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cd3485d-3d10-4f38-bf4c-05368c7cd881-utilities\") pod \"redhat-operators-rppp4\" (UID: \"2cd3485d-3d10-4f38-bf4c-05368c7cd881\") " pod="openshift-marketplace/redhat-operators-rppp4" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.758825 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqfwr\" (UniqueName: \"kubernetes.io/projected/2cd3485d-3d10-4f38-bf4c-05368c7cd881-kube-api-access-fqfwr\") pod \"redhat-operators-rppp4\" (UID: \"2cd3485d-3d10-4f38-bf4c-05368c7cd881\") " pod="openshift-marketplace/redhat-operators-rppp4" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.758862 4672 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cd3485d-3d10-4f38-bf4c-05368c7cd881-catalog-content\") pod \"redhat-operators-rppp4\" (UID: \"2cd3485d-3d10-4f38-bf4c-05368c7cd881\") " pod="openshift-marketplace/redhat-operators-rppp4" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.759431 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cd3485d-3d10-4f38-bf4c-05368c7cd881-catalog-content\") pod \"redhat-operators-rppp4\" (UID: \"2cd3485d-3d10-4f38-bf4c-05368c7cd881\") " pod="openshift-marketplace/redhat-operators-rppp4" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.759690 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cd3485d-3d10-4f38-bf4c-05368c7cd881-utilities\") pod \"redhat-operators-rppp4\" (UID: \"2cd3485d-3d10-4f38-bf4c-05368c7cd881\") " pod="openshift-marketplace/redhat-operators-rppp4" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.768879 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.784408 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqfwr\" (UniqueName: \"kubernetes.io/projected/2cd3485d-3d10-4f38-bf4c-05368c7cd881-kube-api-access-fqfwr\") pod \"redhat-operators-rppp4\" (UID: \"2cd3485d-3d10-4f38-bf4c-05368c7cd881\") " pod="openshift-marketplace/redhat-operators-rppp4" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.832028 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:26:26 crc kubenswrapper[4672]: I0217 16:26:26.884848 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rppp4" Feb 17 16:26:27 crc kubenswrapper[4672]: I0217 16:26:27.099157 4672 generic.go:334] "Generic (PLEG): container finished" podID="95aa771c-7c3f-40bc-a845-a5b27b7581bd" containerID="ff8a06fac4c5e081c971bd108e90499d82ce17af0935f568371e3280ae2a710a" exitCode=0 Feb 17 16:26:27 crc kubenswrapper[4672]: I0217 16:26:27.099532 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2sfpl" event={"ID":"95aa771c-7c3f-40bc-a845-a5b27b7581bd","Type":"ContainerDied","Data":"ff8a06fac4c5e081c971bd108e90499d82ce17af0935f568371e3280ae2a710a"} Feb 17 16:26:27 crc kubenswrapper[4672]: I0217 16:26:27.107198 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec43dce1-ee53-491f-91d7-8aa70d776a67","Type":"ContainerStarted","Data":"6a086bfbec5f81d7ebbc40f78ab46a5db37eb7e2083269b79b946a93a0615761"} Feb 17 16:26:27 crc kubenswrapper[4672]: I0217 16:26:27.107251 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec43dce1-ee53-491f-91d7-8aa70d776a67","Type":"ContainerStarted","Data":"977d54c10a2be3d3b3c3ba571f3ab04bfa40fa1f81eed22826f64d4a8ad98eed"} Feb 17 16:26:27 crc kubenswrapper[4672]: I0217 16:26:27.132623 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89a3e9fb-fe7e-40db-9a8d-d0654e17d835","Type":"ContainerStarted","Data":"44ccec02fda8ea0a6bd0c733c15471c90c4dd3aee30d603d52ddf8c42a5baf00"} Feb 17 16:26:27 crc kubenswrapper[4672]: I0217 16:26:27.351487 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:26:27 crc kubenswrapper[4672]: I0217 16:26:27.366269 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:26:27 crc kubenswrapper[4672]: I0217 16:26:27.512158 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-operators-rppp4"] Feb 17 16:26:27 crc kubenswrapper[4672]: I0217 16:26:27.958295 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="845c0587-4941-4756-b924-2d6078264b2c" path="/var/lib/kubelet/pods/845c0587-4941-4756-b924-2d6078264b2c/volumes" Feb 17 16:26:28 crc kubenswrapper[4672]: I0217 16:26:28.142890 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec43dce1-ee53-491f-91d7-8aa70d776a67","Type":"ContainerStarted","Data":"c4c757eee0900ecf11c4a585983c97da3312d12cc18530d0170712e9d216ce75"} Feb 17 16:26:28 crc kubenswrapper[4672]: I0217 16:26:28.146073 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d54e3c9d-9a10-46ee-96e1-5c270ef2197d","Type":"ContainerStarted","Data":"42f9885d0bfaae7ce2d31f0c0048c76fe2383872d4545e9a2ea7d9d2e95c7898"} Feb 17 16:26:28 crc kubenswrapper[4672]: I0217 16:26:28.146107 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d54e3c9d-9a10-46ee-96e1-5c270ef2197d","Type":"ContainerStarted","Data":"5747cb31e96a22d6a6a148a25c99630bf5bc0dd9fbaf3eb1b5770ca051870b89"} Feb 17 16:26:28 crc kubenswrapper[4672]: I0217 16:26:28.146118 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d54e3c9d-9a10-46ee-96e1-5c270ef2197d","Type":"ContainerStarted","Data":"59cbb9f4abdaa1e2688e5581f4a5777b18691748e61efea866d7634347e45a36"} Feb 17 16:26:28 crc kubenswrapper[4672]: I0217 16:26:28.149243 4672 generic.go:334] "Generic (PLEG): container finished" podID="2cd3485d-3d10-4f38-bf4c-05368c7cd881" containerID="d9b1fe55ea1eb2ab5ab32963d8cbcd84460a6afab66ef976a3cad5797ff532f1" exitCode=0 Feb 17 16:26:28 crc kubenswrapper[4672]: I0217 16:26:28.149290 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rppp4" 
event={"ID":"2cd3485d-3d10-4f38-bf4c-05368c7cd881","Type":"ContainerDied","Data":"d9b1fe55ea1eb2ab5ab32963d8cbcd84460a6afab66ef976a3cad5797ff532f1"} Feb 17 16:26:28 crc kubenswrapper[4672]: I0217 16:26:28.149359 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rppp4" event={"ID":"2cd3485d-3d10-4f38-bf4c-05368c7cd881","Type":"ContainerStarted","Data":"fa7015936ebcde60a444cc907980509cc6f453022923a8c3a1cefa0e2b4b6ac2"} Feb 17 16:26:28 crc kubenswrapper[4672]: I0217 16:26:28.151750 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89a3e9fb-fe7e-40db-9a8d-d0654e17d835","Type":"ContainerStarted","Data":"72ba7ba089b17af40885aea63b4ae50dd243b4633417c2796d5224769c5fde24"} Feb 17 16:26:28 crc kubenswrapper[4672]: I0217 16:26:28.156072 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2sfpl" event={"ID":"95aa771c-7c3f-40bc-a845-a5b27b7581bd","Type":"ContainerStarted","Data":"e1314d4e80a619e5c125f4448aabd5b96137a74dcb13413ce91de594ab81b788"} Feb 17 16:26:28 crc kubenswrapper[4672]: I0217 16:26:28.174242 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.1742269690000002 podStartE2EDuration="3.174226969s" podCreationTimestamp="2026-02-17 16:26:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:26:28.166464224 +0000 UTC m=+1396.920552956" watchObservedRunningTime="2026-02-17 16:26:28.174226969 +0000 UTC m=+1396.928315701" Feb 17 16:26:28 crc kubenswrapper[4672]: I0217 16:26:28.230176 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2sfpl" podStartSLOduration=2.650617679 podStartE2EDuration="5.230156073s" podCreationTimestamp="2026-02-17 16:26:23 +0000 UTC" firstStartedPulling="2026-02-17 
16:26:25.041690235 +0000 UTC m=+1393.795778957" lastFinishedPulling="2026-02-17 16:26:27.621228619 +0000 UTC m=+1396.375317351" observedRunningTime="2026-02-17 16:26:28.211247715 +0000 UTC m=+1396.965336447" watchObservedRunningTime="2026-02-17 16:26:28.230156073 +0000 UTC m=+1396.984244805" Feb 17 16:26:28 crc kubenswrapper[4672]: I0217 16:26:28.247845 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.247825339 podStartE2EDuration="2.247825339s" podCreationTimestamp="2026-02-17 16:26:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:26:28.244738498 +0000 UTC m=+1396.998827230" watchObservedRunningTime="2026-02-17 16:26:28.247825339 +0000 UTC m=+1397.001914071" Feb 17 16:26:28 crc kubenswrapper[4672]: I0217 16:26:28.600734 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-64c8b5dcc-w87br" Feb 17 16:26:28 crc kubenswrapper[4672]: I0217 16:26:28.669943 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d578b86f9-6qmnb"] Feb 17 16:26:28 crc kubenswrapper[4672]: I0217 16:26:28.670164 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d578b86f9-6qmnb" podUID="faa984a6-743c-4fec-a4d5-0555ad87604d" containerName="dnsmasq-dns" containerID="cri-o://17ae1503b23ad7c97e271069d0bd671a2a3d8afcd910c089bd61074dd120f9bc" gracePeriod=10 Feb 17 16:26:29 crc kubenswrapper[4672]: I0217 16:26:29.185469 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rppp4" event={"ID":"2cd3485d-3d10-4f38-bf4c-05368c7cd881","Type":"ContainerStarted","Data":"e6fb0b20521ca0a8f284fad9f9c5a2f35aea9c13cc2c9c86bdcc5646b1543026"} Feb 17 16:26:29 crc kubenswrapper[4672]: I0217 16:26:29.191795 4672 generic.go:334] "Generic (PLEG): container finished" 
podID="faa984a6-743c-4fec-a4d5-0555ad87604d" containerID="17ae1503b23ad7c97e271069d0bd671a2a3d8afcd910c089bd61074dd120f9bc" exitCode=0 Feb 17 16:26:29 crc kubenswrapper[4672]: I0217 16:26:29.191883 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d578b86f9-6qmnb" event={"ID":"faa984a6-743c-4fec-a4d5-0555ad87604d","Type":"ContainerDied","Data":"17ae1503b23ad7c97e271069d0bd671a2a3d8afcd910c089bd61074dd120f9bc"} Feb 17 16:26:29 crc kubenswrapper[4672]: I0217 16:26:29.210311 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89a3e9fb-fe7e-40db-9a8d-d0654e17d835","Type":"ContainerStarted","Data":"208900d77f16c9655f477f14fdc99d4ebabd37d58f006eb99a0f70204c4002c0"} Feb 17 16:26:29 crc kubenswrapper[4672]: I0217 16:26:29.211858 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 16:26:29 crc kubenswrapper[4672]: I0217 16:26:29.243333 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.989556593 podStartE2EDuration="5.243312753s" podCreationTimestamp="2026-02-17 16:26:24 +0000 UTC" firstStartedPulling="2026-02-17 16:26:25.281641241 +0000 UTC m=+1394.035729973" lastFinishedPulling="2026-02-17 16:26:28.535397401 +0000 UTC m=+1397.289486133" observedRunningTime="2026-02-17 16:26:29.230692461 +0000 UTC m=+1397.984781193" watchObservedRunningTime="2026-02-17 16:26:29.243312753 +0000 UTC m=+1397.997401485" Feb 17 16:26:29 crc kubenswrapper[4672]: I0217 16:26:29.396091 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d578b86f9-6qmnb" Feb 17 16:26:29 crc kubenswrapper[4672]: I0217 16:26:29.561452 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/faa984a6-743c-4fec-a4d5-0555ad87604d-ovsdbserver-nb\") pod \"faa984a6-743c-4fec-a4d5-0555ad87604d\" (UID: \"faa984a6-743c-4fec-a4d5-0555ad87604d\") " Feb 17 16:26:29 crc kubenswrapper[4672]: I0217 16:26:29.561505 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/faa984a6-743c-4fec-a4d5-0555ad87604d-config\") pod \"faa984a6-743c-4fec-a4d5-0555ad87604d\" (UID: \"faa984a6-743c-4fec-a4d5-0555ad87604d\") " Feb 17 16:26:29 crc kubenswrapper[4672]: I0217 16:26:29.561557 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/faa984a6-743c-4fec-a4d5-0555ad87604d-dns-swift-storage-0\") pod \"faa984a6-743c-4fec-a4d5-0555ad87604d\" (UID: \"faa984a6-743c-4fec-a4d5-0555ad87604d\") " Feb 17 16:26:29 crc kubenswrapper[4672]: I0217 16:26:29.561632 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fd5sz\" (UniqueName: \"kubernetes.io/projected/faa984a6-743c-4fec-a4d5-0555ad87604d-kube-api-access-fd5sz\") pod \"faa984a6-743c-4fec-a4d5-0555ad87604d\" (UID: \"faa984a6-743c-4fec-a4d5-0555ad87604d\") " Feb 17 16:26:29 crc kubenswrapper[4672]: I0217 16:26:29.562328 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/faa984a6-743c-4fec-a4d5-0555ad87604d-dns-svc\") pod \"faa984a6-743c-4fec-a4d5-0555ad87604d\" (UID: \"faa984a6-743c-4fec-a4d5-0555ad87604d\") " Feb 17 16:26:29 crc kubenswrapper[4672]: I0217 16:26:29.562523 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/faa984a6-743c-4fec-a4d5-0555ad87604d-ovsdbserver-sb\") pod \"faa984a6-743c-4fec-a4d5-0555ad87604d\" (UID: \"faa984a6-743c-4fec-a4d5-0555ad87604d\") " Feb 17 16:26:29 crc kubenswrapper[4672]: I0217 16:26:29.588484 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faa984a6-743c-4fec-a4d5-0555ad87604d-kube-api-access-fd5sz" (OuterVolumeSpecName: "kube-api-access-fd5sz") pod "faa984a6-743c-4fec-a4d5-0555ad87604d" (UID: "faa984a6-743c-4fec-a4d5-0555ad87604d"). InnerVolumeSpecName "kube-api-access-fd5sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:26:29 crc kubenswrapper[4672]: I0217 16:26:29.628331 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/faa984a6-743c-4fec-a4d5-0555ad87604d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "faa984a6-743c-4fec-a4d5-0555ad87604d" (UID: "faa984a6-743c-4fec-a4d5-0555ad87604d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:26:29 crc kubenswrapper[4672]: I0217 16:26:29.637690 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/faa984a6-743c-4fec-a4d5-0555ad87604d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "faa984a6-743c-4fec-a4d5-0555ad87604d" (UID: "faa984a6-743c-4fec-a4d5-0555ad87604d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:26:29 crc kubenswrapper[4672]: I0217 16:26:29.648395 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/faa984a6-743c-4fec-a4d5-0555ad87604d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "faa984a6-743c-4fec-a4d5-0555ad87604d" (UID: "faa984a6-743c-4fec-a4d5-0555ad87604d"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:26:29 crc kubenswrapper[4672]: I0217 16:26:29.650102 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/faa984a6-743c-4fec-a4d5-0555ad87604d-config" (OuterVolumeSpecName: "config") pod "faa984a6-743c-4fec-a4d5-0555ad87604d" (UID: "faa984a6-743c-4fec-a4d5-0555ad87604d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:26:29 crc kubenswrapper[4672]: I0217 16:26:29.664912 4672 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/faa984a6-743c-4fec-a4d5-0555ad87604d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:29 crc kubenswrapper[4672]: I0217 16:26:29.664956 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/faa984a6-743c-4fec-a4d5-0555ad87604d-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:29 crc kubenswrapper[4672]: I0217 16:26:29.664966 4672 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/faa984a6-743c-4fec-a4d5-0555ad87604d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:29 crc kubenswrapper[4672]: I0217 16:26:29.664975 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fd5sz\" (UniqueName: \"kubernetes.io/projected/faa984a6-743c-4fec-a4d5-0555ad87604d-kube-api-access-fd5sz\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:29 crc kubenswrapper[4672]: I0217 16:26:29.664986 4672 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/faa984a6-743c-4fec-a4d5-0555ad87604d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:29 crc kubenswrapper[4672]: I0217 16:26:29.670166 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/faa984a6-743c-4fec-a4d5-0555ad87604d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "faa984a6-743c-4fec-a4d5-0555ad87604d" (UID: "faa984a6-743c-4fec-a4d5-0555ad87604d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:26:29 crc kubenswrapper[4672]: I0217 16:26:29.767186 4672 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/faa984a6-743c-4fec-a4d5-0555ad87604d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:30 crc kubenswrapper[4672]: I0217 16:26:30.221646 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d578b86f9-6qmnb" event={"ID":"faa984a6-743c-4fec-a4d5-0555ad87604d","Type":"ContainerDied","Data":"41701d7258daa2ddf2901d6ec81141a9dd616a89578da12ce649c237a95e176e"} Feb 17 16:26:30 crc kubenswrapper[4672]: I0217 16:26:30.221971 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d578b86f9-6qmnb" Feb 17 16:26:30 crc kubenswrapper[4672]: I0217 16:26:30.222597 4672 scope.go:117] "RemoveContainer" containerID="17ae1503b23ad7c97e271069d0bd671a2a3d8afcd910c089bd61074dd120f9bc" Feb 17 16:26:30 crc kubenswrapper[4672]: I0217 16:26:30.250127 4672 scope.go:117] "RemoveContainer" containerID="180881eb289bbe97e38ff94ae68f8729879812f26f108ba62576825b4f8fbc5c" Feb 17 16:26:30 crc kubenswrapper[4672]: I0217 16:26:30.291199 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d578b86f9-6qmnb"] Feb 17 16:26:30 crc kubenswrapper[4672]: I0217 16:26:30.307312 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d578b86f9-6qmnb"] Feb 17 16:26:31 crc kubenswrapper[4672]: I0217 16:26:31.832847 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 16:26:31 crc kubenswrapper[4672]: I0217 16:26:31.833196 4672 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 16:26:31 crc kubenswrapper[4672]: I0217 16:26:31.968349 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faa984a6-743c-4fec-a4d5-0555ad87604d" path="/var/lib/kubelet/pods/faa984a6-743c-4fec-a4d5-0555ad87604d/volumes" Feb 17 16:26:32 crc kubenswrapper[4672]: I0217 16:26:32.258867 4672 generic.go:334] "Generic (PLEG): container finished" podID="2cd3485d-3d10-4f38-bf4c-05368c7cd881" containerID="e6fb0b20521ca0a8f284fad9f9c5a2f35aea9c13cc2c9c86bdcc5646b1543026" exitCode=0 Feb 17 16:26:32 crc kubenswrapper[4672]: I0217 16:26:32.258908 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rppp4" event={"ID":"2cd3485d-3d10-4f38-bf4c-05368c7cd881","Type":"ContainerDied","Data":"e6fb0b20521ca0a8f284fad9f9c5a2f35aea9c13cc2c9c86bdcc5646b1543026"} Feb 17 16:26:32 crc kubenswrapper[4672]: I0217 16:26:32.352239 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:26:32 crc kubenswrapper[4672]: I0217 16:26:32.380310 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:26:33 crc kubenswrapper[4672]: I0217 16:26:33.280210 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rppp4" event={"ID":"2cd3485d-3d10-4f38-bf4c-05368c7cd881","Type":"ContainerStarted","Data":"3591fe829bc7122dc4787dce38f3291f756a461c191f5f0b753779bab931e1ee"} Feb 17 16:26:33 crc kubenswrapper[4672]: I0217 16:26:33.308687 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:26:33 crc kubenswrapper[4672]: I0217 16:26:33.331599 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rppp4" podStartSLOduration=2.817004333 podStartE2EDuration="7.331561011s" 
podCreationTimestamp="2026-02-17 16:26:26 +0000 UTC" firstStartedPulling="2026-02-17 16:26:28.151295434 +0000 UTC m=+1396.905384166" lastFinishedPulling="2026-02-17 16:26:32.665852102 +0000 UTC m=+1401.419940844" observedRunningTime="2026-02-17 16:26:33.31256561 +0000 UTC m=+1402.066654352" watchObservedRunningTime="2026-02-17 16:26:33.331561011 +0000 UTC m=+1402.085649813" Feb 17 16:26:33 crc kubenswrapper[4672]: I0217 16:26:33.494993 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2sfpl" Feb 17 16:26:33 crc kubenswrapper[4672]: I0217 16:26:33.495230 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2sfpl" Feb 17 16:26:33 crc kubenswrapper[4672]: I0217 16:26:33.534457 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-87krq"] Feb 17 16:26:33 crc kubenswrapper[4672]: E0217 16:26:33.534960 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa984a6-743c-4fec-a4d5-0555ad87604d" containerName="dnsmasq-dns" Feb 17 16:26:33 crc kubenswrapper[4672]: I0217 16:26:33.534979 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa984a6-743c-4fec-a4d5-0555ad87604d" containerName="dnsmasq-dns" Feb 17 16:26:33 crc kubenswrapper[4672]: E0217 16:26:33.535003 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa984a6-743c-4fec-a4d5-0555ad87604d" containerName="init" Feb 17 16:26:33 crc kubenswrapper[4672]: I0217 16:26:33.535010 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa984a6-743c-4fec-a4d5-0555ad87604d" containerName="init" Feb 17 16:26:33 crc kubenswrapper[4672]: I0217 16:26:33.535191 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa984a6-743c-4fec-a4d5-0555ad87604d" containerName="dnsmasq-dns" Feb 17 16:26:33 crc kubenswrapper[4672]: I0217 16:26:33.535985 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-87krq" Feb 17 16:26:33 crc kubenswrapper[4672]: I0217 16:26:33.538071 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 17 16:26:33 crc kubenswrapper[4672]: I0217 16:26:33.538242 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 17 16:26:33 crc kubenswrapper[4672]: I0217 16:26:33.565658 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-87krq"] Feb 17 16:26:33 crc kubenswrapper[4672]: I0217 16:26:33.570686 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2sfpl" Feb 17 16:26:33 crc kubenswrapper[4672]: I0217 16:26:33.655856 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6b2f22c-613e-4774-b353-a90ff22bfba3-config-data\") pod \"nova-cell1-cell-mapping-87krq\" (UID: \"b6b2f22c-613e-4774-b353-a90ff22bfba3\") " pod="openstack/nova-cell1-cell-mapping-87krq" Feb 17 16:26:33 crc kubenswrapper[4672]: I0217 16:26:33.655967 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6b2f22c-613e-4774-b353-a90ff22bfba3-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-87krq\" (UID: \"b6b2f22c-613e-4774-b353-a90ff22bfba3\") " pod="openstack/nova-cell1-cell-mapping-87krq" Feb 17 16:26:33 crc kubenswrapper[4672]: I0217 16:26:33.656003 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6b2f22c-613e-4774-b353-a90ff22bfba3-scripts\") pod \"nova-cell1-cell-mapping-87krq\" (UID: \"b6b2f22c-613e-4774-b353-a90ff22bfba3\") " pod="openstack/nova-cell1-cell-mapping-87krq" Feb 17 16:26:33 crc 
kubenswrapper[4672]: I0217 16:26:33.656033 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqsvr\" (UniqueName: \"kubernetes.io/projected/b6b2f22c-613e-4774-b353-a90ff22bfba3-kube-api-access-wqsvr\") pod \"nova-cell1-cell-mapping-87krq\" (UID: \"b6b2f22c-613e-4774-b353-a90ff22bfba3\") " pod="openstack/nova-cell1-cell-mapping-87krq" Feb 17 16:26:33 crc kubenswrapper[4672]: I0217 16:26:33.757980 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6b2f22c-613e-4774-b353-a90ff22bfba3-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-87krq\" (UID: \"b6b2f22c-613e-4774-b353-a90ff22bfba3\") " pod="openstack/nova-cell1-cell-mapping-87krq" Feb 17 16:26:33 crc kubenswrapper[4672]: I0217 16:26:33.758060 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6b2f22c-613e-4774-b353-a90ff22bfba3-scripts\") pod \"nova-cell1-cell-mapping-87krq\" (UID: \"b6b2f22c-613e-4774-b353-a90ff22bfba3\") " pod="openstack/nova-cell1-cell-mapping-87krq" Feb 17 16:26:33 crc kubenswrapper[4672]: I0217 16:26:33.758098 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqsvr\" (UniqueName: \"kubernetes.io/projected/b6b2f22c-613e-4774-b353-a90ff22bfba3-kube-api-access-wqsvr\") pod \"nova-cell1-cell-mapping-87krq\" (UID: \"b6b2f22c-613e-4774-b353-a90ff22bfba3\") " pod="openstack/nova-cell1-cell-mapping-87krq" Feb 17 16:26:33 crc kubenswrapper[4672]: I0217 16:26:33.758188 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6b2f22c-613e-4774-b353-a90ff22bfba3-config-data\") pod \"nova-cell1-cell-mapping-87krq\" (UID: \"b6b2f22c-613e-4774-b353-a90ff22bfba3\") " pod="openstack/nova-cell1-cell-mapping-87krq" Feb 17 16:26:33 crc 
kubenswrapper[4672]: I0217 16:26:33.766012 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6b2f22c-613e-4774-b353-a90ff22bfba3-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-87krq\" (UID: \"b6b2f22c-613e-4774-b353-a90ff22bfba3\") " pod="openstack/nova-cell1-cell-mapping-87krq" Feb 17 16:26:33 crc kubenswrapper[4672]: I0217 16:26:33.767010 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6b2f22c-613e-4774-b353-a90ff22bfba3-config-data\") pod \"nova-cell1-cell-mapping-87krq\" (UID: \"b6b2f22c-613e-4774-b353-a90ff22bfba3\") " pod="openstack/nova-cell1-cell-mapping-87krq" Feb 17 16:26:33 crc kubenswrapper[4672]: I0217 16:26:33.784262 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqsvr\" (UniqueName: \"kubernetes.io/projected/b6b2f22c-613e-4774-b353-a90ff22bfba3-kube-api-access-wqsvr\") pod \"nova-cell1-cell-mapping-87krq\" (UID: \"b6b2f22c-613e-4774-b353-a90ff22bfba3\") " pod="openstack/nova-cell1-cell-mapping-87krq" Feb 17 16:26:33 crc kubenswrapper[4672]: I0217 16:26:33.784584 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6b2f22c-613e-4774-b353-a90ff22bfba3-scripts\") pod \"nova-cell1-cell-mapping-87krq\" (UID: \"b6b2f22c-613e-4774-b353-a90ff22bfba3\") " pod="openstack/nova-cell1-cell-mapping-87krq" Feb 17 16:26:33 crc kubenswrapper[4672]: I0217 16:26:33.850566 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-87krq" Feb 17 16:26:34 crc kubenswrapper[4672]: I0217 16:26:34.428005 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2sfpl" Feb 17 16:26:34 crc kubenswrapper[4672]: I0217 16:26:34.562009 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-87krq"] Feb 17 16:26:35 crc kubenswrapper[4672]: I0217 16:26:35.338718 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-87krq" event={"ID":"b6b2f22c-613e-4774-b353-a90ff22bfba3","Type":"ContainerStarted","Data":"c9c5ab7c921496df8eca632d56907825bf43bb3644be5f77f64d5cb3bd894d99"} Feb 17 16:26:35 crc kubenswrapper[4672]: I0217 16:26:35.339479 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-87krq" event={"ID":"b6b2f22c-613e-4774-b353-a90ff22bfba3","Type":"ContainerStarted","Data":"4fcf1bbe588c21fb933fc3416566bc618b4f79e13b5bc4ba50f8823b0d6f7dda"} Feb 17 16:26:35 crc kubenswrapper[4672]: I0217 16:26:35.349686 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2sfpl"] Feb 17 16:26:35 crc kubenswrapper[4672]: I0217 16:26:35.364267 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-87krq" podStartSLOduration=2.364248269 podStartE2EDuration="2.364248269s" podCreationTimestamp="2026-02-17 16:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:26:35.35819014 +0000 UTC m=+1404.112278932" watchObservedRunningTime="2026-02-17 16:26:35.364248269 +0000 UTC m=+1404.118337001" Feb 17 16:26:36 crc kubenswrapper[4672]: I0217 16:26:36.176160 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 16:26:36 crc 
kubenswrapper[4672]: I0217 16:26:36.176209 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 16:26:36 crc kubenswrapper[4672]: I0217 16:26:36.348551 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2sfpl" podUID="95aa771c-7c3f-40bc-a845-a5b27b7581bd" containerName="registry-server" containerID="cri-o://e1314d4e80a619e5c125f4448aabd5b96137a74dcb13413ce91de594ab81b788" gracePeriod=2 Feb 17 16:26:36 crc kubenswrapper[4672]: I0217 16:26:36.834118 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 17 16:26:36 crc kubenswrapper[4672]: I0217 16:26:36.834151 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 17 16:26:36 crc kubenswrapper[4672]: I0217 16:26:36.886151 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rppp4" Feb 17 16:26:36 crc kubenswrapper[4672]: I0217 16:26:36.886467 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rppp4" Feb 17 16:26:37 crc kubenswrapper[4672]: I0217 16:26:37.016859 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2sfpl"
Feb 17 16:26:37 crc kubenswrapper[4672]: I0217 16:26:37.168063 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bprh\" (UniqueName: \"kubernetes.io/projected/95aa771c-7c3f-40bc-a845-a5b27b7581bd-kube-api-access-4bprh\") pod \"95aa771c-7c3f-40bc-a845-a5b27b7581bd\" (UID: \"95aa771c-7c3f-40bc-a845-a5b27b7581bd\") "
Feb 17 16:26:37 crc kubenswrapper[4672]: I0217 16:26:37.168141 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95aa771c-7c3f-40bc-a845-a5b27b7581bd-utilities\") pod \"95aa771c-7c3f-40bc-a845-a5b27b7581bd\" (UID: \"95aa771c-7c3f-40bc-a845-a5b27b7581bd\") "
Feb 17 16:26:37 crc kubenswrapper[4672]: I0217 16:26:37.168323 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95aa771c-7c3f-40bc-a845-a5b27b7581bd-catalog-content\") pod \"95aa771c-7c3f-40bc-a845-a5b27b7581bd\" (UID: \"95aa771c-7c3f-40bc-a845-a5b27b7581bd\") "
Feb 17 16:26:37 crc kubenswrapper[4672]: I0217 16:26:37.168978 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95aa771c-7c3f-40bc-a845-a5b27b7581bd-utilities" (OuterVolumeSpecName: "utilities") pod "95aa771c-7c3f-40bc-a845-a5b27b7581bd" (UID: "95aa771c-7c3f-40bc-a845-a5b27b7581bd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:26:37 crc kubenswrapper[4672]: I0217 16:26:37.198797 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ec43dce1-ee53-491f-91d7-8aa70d776a67" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.232:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 17 16:26:37 crc kubenswrapper[4672]: I0217 16:26:37.199221 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ec43dce1-ee53-491f-91d7-8aa70d776a67" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.232:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 17 16:26:37 crc kubenswrapper[4672]: I0217 16:26:37.200787 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95aa771c-7c3f-40bc-a845-a5b27b7581bd-kube-api-access-4bprh" (OuterVolumeSpecName: "kube-api-access-4bprh") pod "95aa771c-7c3f-40bc-a845-a5b27b7581bd" (UID: "95aa771c-7c3f-40bc-a845-a5b27b7581bd"). InnerVolumeSpecName "kube-api-access-4bprh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:26:37 crc kubenswrapper[4672]: I0217 16:26:37.217790 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95aa771c-7c3f-40bc-a845-a5b27b7581bd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "95aa771c-7c3f-40bc-a845-a5b27b7581bd" (UID: "95aa771c-7c3f-40bc-a845-a5b27b7581bd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:26:37 crc kubenswrapper[4672]: I0217 16:26:37.224366 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95aa771c-7c3f-40bc-a845-a5b27b7581bd-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:37 crc kubenswrapper[4672]: I0217 16:26:37.224445 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bprh\" (UniqueName: \"kubernetes.io/projected/95aa771c-7c3f-40bc-a845-a5b27b7581bd-kube-api-access-4bprh\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:37 crc kubenswrapper[4672]: I0217 16:26:37.224465 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95aa771c-7c3f-40bc-a845-a5b27b7581bd-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:37 crc kubenswrapper[4672]: I0217 16:26:37.360494 4672 generic.go:334] "Generic (PLEG): container finished" podID="95aa771c-7c3f-40bc-a845-a5b27b7581bd" containerID="e1314d4e80a619e5c125f4448aabd5b96137a74dcb13413ce91de594ab81b788" exitCode=0
Feb 17 16:26:37 crc kubenswrapper[4672]: I0217 16:26:37.360563 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2sfpl" event={"ID":"95aa771c-7c3f-40bc-a845-a5b27b7581bd","Type":"ContainerDied","Data":"e1314d4e80a619e5c125f4448aabd5b96137a74dcb13413ce91de594ab81b788"}
Feb 17 16:26:37 crc kubenswrapper[4672]: I0217 16:26:37.360586 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2sfpl"
Feb 17 16:26:37 crc kubenswrapper[4672]: I0217 16:26:37.360616 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2sfpl" event={"ID":"95aa771c-7c3f-40bc-a845-a5b27b7581bd","Type":"ContainerDied","Data":"3ec1687df979e4aec586a6d98b617664f7b772e2ae09f1621665e8bb26677fa3"}
Feb 17 16:26:37 crc kubenswrapper[4672]: I0217 16:26:37.360636 4672 scope.go:117] "RemoveContainer" containerID="e1314d4e80a619e5c125f4448aabd5b96137a74dcb13413ce91de594ab81b788"
Feb 17 16:26:37 crc kubenswrapper[4672]: I0217 16:26:37.405731 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2sfpl"]
Feb 17 16:26:37 crc kubenswrapper[4672]: I0217 16:26:37.409563 4672 scope.go:117] "RemoveContainer" containerID="ff8a06fac4c5e081c971bd108e90499d82ce17af0935f568371e3280ae2a710a"
Feb 17 16:26:37 crc kubenswrapper[4672]: I0217 16:26:37.417828 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2sfpl"]
Feb 17 16:26:37 crc kubenswrapper[4672]: I0217 16:26:37.439316 4672 scope.go:117] "RemoveContainer" containerID="e2738aaf5c272aab01bca9dec941a1d69859d9cf18197ac79108e05098f3db1b"
Feb 17 16:26:37 crc kubenswrapper[4672]: I0217 16:26:37.488082 4672 scope.go:117] "RemoveContainer" containerID="e1314d4e80a619e5c125f4448aabd5b96137a74dcb13413ce91de594ab81b788"
Feb 17 16:26:37 crc kubenswrapper[4672]: E0217 16:26:37.488812 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1314d4e80a619e5c125f4448aabd5b96137a74dcb13413ce91de594ab81b788\": container with ID starting with e1314d4e80a619e5c125f4448aabd5b96137a74dcb13413ce91de594ab81b788 not found: ID does not exist" containerID="e1314d4e80a619e5c125f4448aabd5b96137a74dcb13413ce91de594ab81b788"
Feb 17 16:26:37 crc kubenswrapper[4672]: I0217 16:26:37.488864 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1314d4e80a619e5c125f4448aabd5b96137a74dcb13413ce91de594ab81b788"} err="failed to get container status \"e1314d4e80a619e5c125f4448aabd5b96137a74dcb13413ce91de594ab81b788\": rpc error: code = NotFound desc = could not find container \"e1314d4e80a619e5c125f4448aabd5b96137a74dcb13413ce91de594ab81b788\": container with ID starting with e1314d4e80a619e5c125f4448aabd5b96137a74dcb13413ce91de594ab81b788 not found: ID does not exist"
Feb 17 16:26:37 crc kubenswrapper[4672]: I0217 16:26:37.488900 4672 scope.go:117] "RemoveContainer" containerID="ff8a06fac4c5e081c971bd108e90499d82ce17af0935f568371e3280ae2a710a"
Feb 17 16:26:37 crc kubenswrapper[4672]: E0217 16:26:37.489290 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff8a06fac4c5e081c971bd108e90499d82ce17af0935f568371e3280ae2a710a\": container with ID starting with ff8a06fac4c5e081c971bd108e90499d82ce17af0935f568371e3280ae2a710a not found: ID does not exist" containerID="ff8a06fac4c5e081c971bd108e90499d82ce17af0935f568371e3280ae2a710a"
Feb 17 16:26:37 crc kubenswrapper[4672]: I0217 16:26:37.489320 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff8a06fac4c5e081c971bd108e90499d82ce17af0935f568371e3280ae2a710a"} err="failed to get container status \"ff8a06fac4c5e081c971bd108e90499d82ce17af0935f568371e3280ae2a710a\": rpc error: code = NotFound desc = could not find container \"ff8a06fac4c5e081c971bd108e90499d82ce17af0935f568371e3280ae2a710a\": container with ID starting with ff8a06fac4c5e081c971bd108e90499d82ce17af0935f568371e3280ae2a710a not found: ID does not exist"
Feb 17 16:26:37 crc kubenswrapper[4672]: I0217 16:26:37.489339 4672 scope.go:117] "RemoveContainer" containerID="e2738aaf5c272aab01bca9dec941a1d69859d9cf18197ac79108e05098f3db1b"
Feb 17 16:26:37 crc kubenswrapper[4672]: E0217 16:26:37.489738 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2738aaf5c272aab01bca9dec941a1d69859d9cf18197ac79108e05098f3db1b\": container with ID starting with e2738aaf5c272aab01bca9dec941a1d69859d9cf18197ac79108e05098f3db1b not found: ID does not exist" containerID="e2738aaf5c272aab01bca9dec941a1d69859d9cf18197ac79108e05098f3db1b"
Feb 17 16:26:37 crc kubenswrapper[4672]: I0217 16:26:37.489774 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2738aaf5c272aab01bca9dec941a1d69859d9cf18197ac79108e05098f3db1b"} err="failed to get container status \"e2738aaf5c272aab01bca9dec941a1d69859d9cf18197ac79108e05098f3db1b\": rpc error: code = NotFound desc = could not find container \"e2738aaf5c272aab01bca9dec941a1d69859d9cf18197ac79108e05098f3db1b\": container with ID starting with e2738aaf5c272aab01bca9dec941a1d69859d9cf18197ac79108e05098f3db1b not found: ID does not exist"
Feb 17 16:26:37 crc kubenswrapper[4672]: I0217 16:26:37.894628 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d54e3c9d-9a10-46ee-96e1-5c270ef2197d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.233:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 17 16:26:37 crc kubenswrapper[4672]: I0217 16:26:37.894648 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d54e3c9d-9a10-46ee-96e1-5c270ef2197d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.233:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 17 16:26:37 crc kubenswrapper[4672]: I0217 16:26:37.956717 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95aa771c-7c3f-40bc-a845-a5b27b7581bd" path="/var/lib/kubelet/pods/95aa771c-7c3f-40bc-a845-a5b27b7581bd/volumes"
Feb 17 16:26:37 crc kubenswrapper[4672]: I0217 16:26:37.963284 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rppp4" podUID="2cd3485d-3d10-4f38-bf4c-05368c7cd881" containerName="registry-server" probeResult="failure" output=<
Feb 17 16:26:37 crc kubenswrapper[4672]: timeout: failed to connect service ":50051" within 1s
Feb 17 16:26:37 crc kubenswrapper[4672]: >
Feb 17 16:26:40 crc kubenswrapper[4672]: I0217 16:26:40.394424 4672 generic.go:334] "Generic (PLEG): container finished" podID="b6b2f22c-613e-4774-b353-a90ff22bfba3" containerID="c9c5ab7c921496df8eca632d56907825bf43bb3644be5f77f64d5cb3bd894d99" exitCode=0
Feb 17 16:26:40 crc kubenswrapper[4672]: I0217 16:26:40.394608 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-87krq" event={"ID":"b6b2f22c-613e-4774-b353-a90ff22bfba3","Type":"ContainerDied","Data":"c9c5ab7c921496df8eca632d56907825bf43bb3644be5f77f64d5cb3bd894d99"}
Feb 17 16:26:41 crc kubenswrapper[4672]: I0217 16:26:41.958924 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-87krq"
Feb 17 16:26:42 crc kubenswrapper[4672]: I0217 16:26:42.064579 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6b2f22c-613e-4774-b353-a90ff22bfba3-combined-ca-bundle\") pod \"b6b2f22c-613e-4774-b353-a90ff22bfba3\" (UID: \"b6b2f22c-613e-4774-b353-a90ff22bfba3\") "
Feb 17 16:26:42 crc kubenswrapper[4672]: I0217 16:26:42.064695 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqsvr\" (UniqueName: \"kubernetes.io/projected/b6b2f22c-613e-4774-b353-a90ff22bfba3-kube-api-access-wqsvr\") pod \"b6b2f22c-613e-4774-b353-a90ff22bfba3\" (UID: \"b6b2f22c-613e-4774-b353-a90ff22bfba3\") "
Feb 17 16:26:42 crc kubenswrapper[4672]: I0217 16:26:42.064813 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6b2f22c-613e-4774-b353-a90ff22bfba3-scripts\") pod \"b6b2f22c-613e-4774-b353-a90ff22bfba3\" (UID: \"b6b2f22c-613e-4774-b353-a90ff22bfba3\") "
Feb 17 16:26:42 crc kubenswrapper[4672]: I0217 16:26:42.064958 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6b2f22c-613e-4774-b353-a90ff22bfba3-config-data\") pod \"b6b2f22c-613e-4774-b353-a90ff22bfba3\" (UID: \"b6b2f22c-613e-4774-b353-a90ff22bfba3\") "
Feb 17 16:26:42 crc kubenswrapper[4672]: I0217 16:26:42.075017 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6b2f22c-613e-4774-b353-a90ff22bfba3-scripts" (OuterVolumeSpecName: "scripts") pod "b6b2f22c-613e-4774-b353-a90ff22bfba3" (UID: "b6b2f22c-613e-4774-b353-a90ff22bfba3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:26:42 crc kubenswrapper[4672]: I0217 16:26:42.077079 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6b2f22c-613e-4774-b353-a90ff22bfba3-kube-api-access-wqsvr" (OuterVolumeSpecName: "kube-api-access-wqsvr") pod "b6b2f22c-613e-4774-b353-a90ff22bfba3" (UID: "b6b2f22c-613e-4774-b353-a90ff22bfba3"). InnerVolumeSpecName "kube-api-access-wqsvr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:26:42 crc kubenswrapper[4672]: I0217 16:26:42.119481 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6b2f22c-613e-4774-b353-a90ff22bfba3-config-data" (OuterVolumeSpecName: "config-data") pod "b6b2f22c-613e-4774-b353-a90ff22bfba3" (UID: "b6b2f22c-613e-4774-b353-a90ff22bfba3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:26:42 crc kubenswrapper[4672]: I0217 16:26:42.136628 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6b2f22c-613e-4774-b353-a90ff22bfba3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6b2f22c-613e-4774-b353-a90ff22bfba3" (UID: "b6b2f22c-613e-4774-b353-a90ff22bfba3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:26:42 crc kubenswrapper[4672]: I0217 16:26:42.169302 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6b2f22c-613e-4774-b353-a90ff22bfba3-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:42 crc kubenswrapper[4672]: I0217 16:26:42.169634 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6b2f22c-613e-4774-b353-a90ff22bfba3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:42 crc kubenswrapper[4672]: I0217 16:26:42.169735 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqsvr\" (UniqueName: \"kubernetes.io/projected/b6b2f22c-613e-4774-b353-a90ff22bfba3-kube-api-access-wqsvr\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:42 crc kubenswrapper[4672]: I0217 16:26:42.169814 4672 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6b2f22c-613e-4774-b353-a90ff22bfba3-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:42 crc kubenswrapper[4672]: I0217 16:26:42.419435 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-87krq" event={"ID":"b6b2f22c-613e-4774-b353-a90ff22bfba3","Type":"ContainerDied","Data":"4fcf1bbe588c21fb933fc3416566bc618b4f79e13b5bc4ba50f8823b0d6f7dda"}
Feb 17 16:26:42 crc kubenswrapper[4672]: I0217 16:26:42.419847 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fcf1bbe588c21fb933fc3416566bc618b4f79e13b5bc4ba50f8823b0d6f7dda"
Feb 17 16:26:42 crc kubenswrapper[4672]: I0217 16:26:42.419487 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-87krq"
Feb 17 16:26:42 crc kubenswrapper[4672]: I0217 16:26:42.662193 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 17 16:26:42 crc kubenswrapper[4672]: I0217 16:26:42.662468 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="777c853a-fcc3-4f2a-ad78-32bd1782655a" containerName="nova-scheduler-scheduler" containerID="cri-o://7175aa643d710da236f5bd7ae8fddec92e4520c0cbb96d90168909a8b501bec0" gracePeriod=30
Feb 17 16:26:42 crc kubenswrapper[4672]: I0217 16:26:42.690902 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 17 16:26:42 crc kubenswrapper[4672]: I0217 16:26:42.691193 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ec43dce1-ee53-491f-91d7-8aa70d776a67" containerName="nova-api-log" containerID="cri-o://6a086bfbec5f81d7ebbc40f78ab46a5db37eb7e2083269b79b946a93a0615761" gracePeriod=30
Feb 17 16:26:42 crc kubenswrapper[4672]: I0217 16:26:42.691754 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ec43dce1-ee53-491f-91d7-8aa70d776a67" containerName="nova-api-api" containerID="cri-o://c4c757eee0900ecf11c4a585983c97da3312d12cc18530d0170712e9d216ce75" gracePeriod=30
Feb 17 16:26:42 crc kubenswrapper[4672]: I0217 16:26:42.716914 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 17 16:26:42 crc kubenswrapper[4672]: I0217 16:26:42.717342 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d54e3c9d-9a10-46ee-96e1-5c270ef2197d" containerName="nova-metadata-log" containerID="cri-o://5747cb31e96a22d6a6a148a25c99630bf5bc0dd9fbaf3eb1b5770ca051870b89" gracePeriod=30
Feb 17 16:26:42 crc kubenswrapper[4672]: I0217 16:26:42.717890 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d54e3c9d-9a10-46ee-96e1-5c270ef2197d" containerName="nova-metadata-metadata" containerID="cri-o://42f9885d0bfaae7ce2d31f0c0048c76fe2383872d4545e9a2ea7d9d2e95c7898" gracePeriod=30
Feb 17 16:26:43 crc kubenswrapper[4672]: I0217 16:26:43.431167 4672 generic.go:334] "Generic (PLEG): container finished" podID="ec43dce1-ee53-491f-91d7-8aa70d776a67" containerID="6a086bfbec5f81d7ebbc40f78ab46a5db37eb7e2083269b79b946a93a0615761" exitCode=143
Feb 17 16:26:43 crc kubenswrapper[4672]: I0217 16:26:43.431250 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec43dce1-ee53-491f-91d7-8aa70d776a67","Type":"ContainerDied","Data":"6a086bfbec5f81d7ebbc40f78ab46a5db37eb7e2083269b79b946a93a0615761"}
Feb 17 16:26:43 crc kubenswrapper[4672]: I0217 16:26:43.434651 4672 generic.go:334] "Generic (PLEG): container finished" podID="d54e3c9d-9a10-46ee-96e1-5c270ef2197d" containerID="5747cb31e96a22d6a6a148a25c99630bf5bc0dd9fbaf3eb1b5770ca051870b89" exitCode=143
Feb 17 16:26:43 crc kubenswrapper[4672]: I0217 16:26:43.434808 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d54e3c9d-9a10-46ee-96e1-5c270ef2197d","Type":"ContainerDied","Data":"5747cb31e96a22d6a6a148a25c99630bf5bc0dd9fbaf3eb1b5770ca051870b89"}
Feb 17 16:26:46 crc kubenswrapper[4672]: I0217 16:26:46.485583 4672 generic.go:334] "Generic (PLEG): container finished" podID="777c853a-fcc3-4f2a-ad78-32bd1782655a" containerID="7175aa643d710da236f5bd7ae8fddec92e4520c0cbb96d90168909a8b501bec0" exitCode=0
Feb 17 16:26:46 crc kubenswrapper[4672]: I0217 16:26:46.486065 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"777c853a-fcc3-4f2a-ad78-32bd1782655a","Type":"ContainerDied","Data":"7175aa643d710da236f5bd7ae8fddec92e4520c0cbb96d90168909a8b501bec0"}
Feb 17 16:26:46 crc kubenswrapper[4672]: I0217 16:26:46.488937 4672 generic.go:334] "Generic (PLEG): container finished" podID="ec43dce1-ee53-491f-91d7-8aa70d776a67" containerID="c4c757eee0900ecf11c4a585983c97da3312d12cc18530d0170712e9d216ce75" exitCode=0
Feb 17 16:26:46 crc kubenswrapper[4672]: I0217 16:26:46.489055 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec43dce1-ee53-491f-91d7-8aa70d776a67","Type":"ContainerDied","Data":"c4c757eee0900ecf11c4a585983c97da3312d12cc18530d0170712e9d216ce75"}
Feb 17 16:26:46 crc kubenswrapper[4672]: I0217 16:26:46.490991 4672 generic.go:334] "Generic (PLEG): container finished" podID="d54e3c9d-9a10-46ee-96e1-5c270ef2197d" containerID="42f9885d0bfaae7ce2d31f0c0048c76fe2383872d4545e9a2ea7d9d2e95c7898" exitCode=0
Feb 17 16:26:46 crc kubenswrapper[4672]: I0217 16:26:46.491066 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d54e3c9d-9a10-46ee-96e1-5c270ef2197d","Type":"ContainerDied","Data":"42f9885d0bfaae7ce2d31f0c0048c76fe2383872d4545e9a2ea7d9d2e95c7898"}
Feb 17 16:26:46 crc kubenswrapper[4672]: I0217 16:26:46.948061 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.021882 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.030620 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.070793 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec43dce1-ee53-491f-91d7-8aa70d776a67-logs\") pod \"ec43dce1-ee53-491f-91d7-8aa70d776a67\" (UID: \"ec43dce1-ee53-491f-91d7-8aa70d776a67\") "
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.070940 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec43dce1-ee53-491f-91d7-8aa70d776a67-internal-tls-certs\") pod \"ec43dce1-ee53-491f-91d7-8aa70d776a67\" (UID: \"ec43dce1-ee53-491f-91d7-8aa70d776a67\") "
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.070973 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec43dce1-ee53-491f-91d7-8aa70d776a67-config-data\") pod \"ec43dce1-ee53-491f-91d7-8aa70d776a67\" (UID: \"ec43dce1-ee53-491f-91d7-8aa70d776a67\") "
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.071022 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec43dce1-ee53-491f-91d7-8aa70d776a67-combined-ca-bundle\") pod \"ec43dce1-ee53-491f-91d7-8aa70d776a67\" (UID: \"ec43dce1-ee53-491f-91d7-8aa70d776a67\") "
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.071112 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec43dce1-ee53-491f-91d7-8aa70d776a67-public-tls-certs\") pod \"ec43dce1-ee53-491f-91d7-8aa70d776a67\" (UID: \"ec43dce1-ee53-491f-91d7-8aa70d776a67\") "
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.071194 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5gp8\" (UniqueName: \"kubernetes.io/projected/ec43dce1-ee53-491f-91d7-8aa70d776a67-kube-api-access-r5gp8\") pod \"ec43dce1-ee53-491f-91d7-8aa70d776a67\" (UID: \"ec43dce1-ee53-491f-91d7-8aa70d776a67\") "
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.071292 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec43dce1-ee53-491f-91d7-8aa70d776a67-logs" (OuterVolumeSpecName: "logs") pod "ec43dce1-ee53-491f-91d7-8aa70d776a67" (UID: "ec43dce1-ee53-491f-91d7-8aa70d776a67"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.071852 4672 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec43dce1-ee53-491f-91d7-8aa70d776a67-logs\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.081791 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec43dce1-ee53-491f-91d7-8aa70d776a67-kube-api-access-r5gp8" (OuterVolumeSpecName: "kube-api-access-r5gp8") pod "ec43dce1-ee53-491f-91d7-8aa70d776a67" (UID: "ec43dce1-ee53-491f-91d7-8aa70d776a67"). InnerVolumeSpecName "kube-api-access-r5gp8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.116654 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec43dce1-ee53-491f-91d7-8aa70d776a67-config-data" (OuterVolumeSpecName: "config-data") pod "ec43dce1-ee53-491f-91d7-8aa70d776a67" (UID: "ec43dce1-ee53-491f-91d7-8aa70d776a67"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.124815 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec43dce1-ee53-491f-91d7-8aa70d776a67-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec43dce1-ee53-491f-91d7-8aa70d776a67" (UID: "ec43dce1-ee53-491f-91d7-8aa70d776a67"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.139595 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec43dce1-ee53-491f-91d7-8aa70d776a67-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ec43dce1-ee53-491f-91d7-8aa70d776a67" (UID: "ec43dce1-ee53-491f-91d7-8aa70d776a67"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.143637 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec43dce1-ee53-491f-91d7-8aa70d776a67-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ec43dce1-ee53-491f-91d7-8aa70d776a67" (UID: "ec43dce1-ee53-491f-91d7-8aa70d776a67"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.173460 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d54e3c9d-9a10-46ee-96e1-5c270ef2197d-combined-ca-bundle\") pod \"d54e3c9d-9a10-46ee-96e1-5c270ef2197d\" (UID: \"d54e3c9d-9a10-46ee-96e1-5c270ef2197d\") "
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.173565 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d54e3c9d-9a10-46ee-96e1-5c270ef2197d-nova-metadata-tls-certs\") pod \"d54e3c9d-9a10-46ee-96e1-5c270ef2197d\" (UID: \"d54e3c9d-9a10-46ee-96e1-5c270ef2197d\") "
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.173622 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gftgw\" (UniqueName: \"kubernetes.io/projected/777c853a-fcc3-4f2a-ad78-32bd1782655a-kube-api-access-gftgw\") pod \"777c853a-fcc3-4f2a-ad78-32bd1782655a\" (UID: \"777c853a-fcc3-4f2a-ad78-32bd1782655a\") "
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.173659 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d54e3c9d-9a10-46ee-96e1-5c270ef2197d-config-data\") pod \"d54e3c9d-9a10-46ee-96e1-5c270ef2197d\" (UID: \"d54e3c9d-9a10-46ee-96e1-5c270ef2197d\") "
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.173731 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/777c853a-fcc3-4f2a-ad78-32bd1782655a-config-data\") pod \"777c853a-fcc3-4f2a-ad78-32bd1782655a\" (UID: \"777c853a-fcc3-4f2a-ad78-32bd1782655a\") "
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.173755 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d54e3c9d-9a10-46ee-96e1-5c270ef2197d-logs\") pod \"d54e3c9d-9a10-46ee-96e1-5c270ef2197d\" (UID: \"d54e3c9d-9a10-46ee-96e1-5c270ef2197d\") "
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.173823 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/777c853a-fcc3-4f2a-ad78-32bd1782655a-combined-ca-bundle\") pod \"777c853a-fcc3-4f2a-ad78-32bd1782655a\" (UID: \"777c853a-fcc3-4f2a-ad78-32bd1782655a\") "
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.173867 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d8m4\" (UniqueName: \"kubernetes.io/projected/d54e3c9d-9a10-46ee-96e1-5c270ef2197d-kube-api-access-2d8m4\") pod \"d54e3c9d-9a10-46ee-96e1-5c270ef2197d\" (UID: \"d54e3c9d-9a10-46ee-96e1-5c270ef2197d\") "
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.174327 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d54e3c9d-9a10-46ee-96e1-5c270ef2197d-logs" (OuterVolumeSpecName: "logs") pod "d54e3c9d-9a10-46ee-96e1-5c270ef2197d" (UID: "d54e3c9d-9a10-46ee-96e1-5c270ef2197d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.174686 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5gp8\" (UniqueName: \"kubernetes.io/projected/ec43dce1-ee53-491f-91d7-8aa70d776a67-kube-api-access-r5gp8\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.174717 4672 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec43dce1-ee53-491f-91d7-8aa70d776a67-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.174733 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec43dce1-ee53-491f-91d7-8aa70d776a67-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.174745 4672 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d54e3c9d-9a10-46ee-96e1-5c270ef2197d-logs\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.174756 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec43dce1-ee53-491f-91d7-8aa70d776a67-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.174766 4672 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec43dce1-ee53-491f-91d7-8aa70d776a67-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.177563 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/777c853a-fcc3-4f2a-ad78-32bd1782655a-kube-api-access-gftgw" (OuterVolumeSpecName: "kube-api-access-gftgw") pod "777c853a-fcc3-4f2a-ad78-32bd1782655a" (UID: "777c853a-fcc3-4f2a-ad78-32bd1782655a"). InnerVolumeSpecName "kube-api-access-gftgw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.177925 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d54e3c9d-9a10-46ee-96e1-5c270ef2197d-kube-api-access-2d8m4" (OuterVolumeSpecName: "kube-api-access-2d8m4") pod "d54e3c9d-9a10-46ee-96e1-5c270ef2197d" (UID: "d54e3c9d-9a10-46ee-96e1-5c270ef2197d"). InnerVolumeSpecName "kube-api-access-2d8m4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.199893 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/777c853a-fcc3-4f2a-ad78-32bd1782655a-config-data" (OuterVolumeSpecName: "config-data") pod "777c853a-fcc3-4f2a-ad78-32bd1782655a" (UID: "777c853a-fcc3-4f2a-ad78-32bd1782655a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.204630 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d54e3c9d-9a10-46ee-96e1-5c270ef2197d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d54e3c9d-9a10-46ee-96e1-5c270ef2197d" (UID: "d54e3c9d-9a10-46ee-96e1-5c270ef2197d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.211067 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/777c853a-fcc3-4f2a-ad78-32bd1782655a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "777c853a-fcc3-4f2a-ad78-32bd1782655a" (UID: "777c853a-fcc3-4f2a-ad78-32bd1782655a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.223360 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d54e3c9d-9a10-46ee-96e1-5c270ef2197d-config-data" (OuterVolumeSpecName: "config-data") pod "d54e3c9d-9a10-46ee-96e1-5c270ef2197d" (UID: "d54e3c9d-9a10-46ee-96e1-5c270ef2197d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.238675 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d54e3c9d-9a10-46ee-96e1-5c270ef2197d-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "d54e3c9d-9a10-46ee-96e1-5c270ef2197d" (UID: "d54e3c9d-9a10-46ee-96e1-5c270ef2197d"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.276259 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d54e3c9d-9a10-46ee-96e1-5c270ef2197d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.276307 4672 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d54e3c9d-9a10-46ee-96e1-5c270ef2197d-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.276323 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gftgw\" (UniqueName: \"kubernetes.io/projected/777c853a-fcc3-4f2a-ad78-32bd1782655a-kube-api-access-gftgw\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.276338 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d54e3c9d-9a10-46ee-96e1-5c270ef2197d-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.276352 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/777c853a-fcc3-4f2a-ad78-32bd1782655a-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.276363 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/777c853a-fcc3-4f2a-ad78-32bd1782655a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.276375 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d8m4\" (UniqueName: \"kubernetes.io/projected/d54e3c9d-9a10-46ee-96e1-5c270ef2197d-kube-api-access-2d8m4\") on node \"crc\" DevicePath \"\""
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.501471 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec43dce1-ee53-491f-91d7-8aa70d776a67","Type":"ContainerDied","Data":"977d54c10a2be3d3b3c3ba571f3ab04bfa40fa1f81eed22826f64d4a8ad98eed"}
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.501503 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.501548 4672 scope.go:117] "RemoveContainer" containerID="c4c757eee0900ecf11c4a585983c97da3312d12cc18530d0170712e9d216ce75"
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.503491 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d54e3c9d-9a10-46ee-96e1-5c270ef2197d","Type":"ContainerDied","Data":"59cbb9f4abdaa1e2688e5581f4a5777b18691748e61efea866d7634347e45a36"}
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.503514 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.509015 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"777c853a-fcc3-4f2a-ad78-32bd1782655a","Type":"ContainerDied","Data":"1d039d026d5c855131ab98c21d8096c995e2e8771a9af186544286203af174aa"}
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.509236 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.533031 4672 scope.go:117] "RemoveContainer" containerID="6a086bfbec5f81d7ebbc40f78ab46a5db37eb7e2083269b79b946a93a0615761"
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.552201 4672 scope.go:117] "RemoveContainer" containerID="42f9885d0bfaae7ce2d31f0c0048c76fe2383872d4545e9a2ea7d9d2e95c7898"
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.556841 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.595085 4672 scope.go:117] "RemoveContainer" containerID="5747cb31e96a22d6a6a148a25c99630bf5bc0dd9fbaf3eb1b5770ca051870b89"
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.596628 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.605540 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.623906 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.634720 4672 scope.go:117] "RemoveContainer" containerID="7175aa643d710da236f5bd7ae8fddec92e4520c0cbb96d90168909a8b501bec0"
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.639138 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 17
16:26:47 crc kubenswrapper[4672]: E0217 16:26:47.639635 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="777c853a-fcc3-4f2a-ad78-32bd1782655a" containerName="nova-scheduler-scheduler" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.639652 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="777c853a-fcc3-4f2a-ad78-32bd1782655a" containerName="nova-scheduler-scheduler" Feb 17 16:26:47 crc kubenswrapper[4672]: E0217 16:26:47.639665 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95aa771c-7c3f-40bc-a845-a5b27b7581bd" containerName="extract-content" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.639672 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="95aa771c-7c3f-40bc-a845-a5b27b7581bd" containerName="extract-content" Feb 17 16:26:47 crc kubenswrapper[4672]: E0217 16:26:47.639688 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec43dce1-ee53-491f-91d7-8aa70d776a67" containerName="nova-api-log" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.639694 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec43dce1-ee53-491f-91d7-8aa70d776a67" containerName="nova-api-log" Feb 17 16:26:47 crc kubenswrapper[4672]: E0217 16:26:47.639706 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95aa771c-7c3f-40bc-a845-a5b27b7581bd" containerName="extract-utilities" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.639712 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="95aa771c-7c3f-40bc-a845-a5b27b7581bd" containerName="extract-utilities" Feb 17 16:26:47 crc kubenswrapper[4672]: E0217 16:26:47.639730 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec43dce1-ee53-491f-91d7-8aa70d776a67" containerName="nova-api-api" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.639737 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec43dce1-ee53-491f-91d7-8aa70d776a67" containerName="nova-api-api" Feb 17 
16:26:47 crc kubenswrapper[4672]: E0217 16:26:47.639749 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95aa771c-7c3f-40bc-a845-a5b27b7581bd" containerName="registry-server" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.639754 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="95aa771c-7c3f-40bc-a845-a5b27b7581bd" containerName="registry-server" Feb 17 16:26:47 crc kubenswrapper[4672]: E0217 16:26:47.639766 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d54e3c9d-9a10-46ee-96e1-5c270ef2197d" containerName="nova-metadata-log" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.639774 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="d54e3c9d-9a10-46ee-96e1-5c270ef2197d" containerName="nova-metadata-log" Feb 17 16:26:47 crc kubenswrapper[4672]: E0217 16:26:47.639783 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d54e3c9d-9a10-46ee-96e1-5c270ef2197d" containerName="nova-metadata-metadata" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.639789 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="d54e3c9d-9a10-46ee-96e1-5c270ef2197d" containerName="nova-metadata-metadata" Feb 17 16:26:47 crc kubenswrapper[4672]: E0217 16:26:47.639800 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6b2f22c-613e-4774-b353-a90ff22bfba3" containerName="nova-manage" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.639806 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6b2f22c-613e-4774-b353-a90ff22bfba3" containerName="nova-manage" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.639992 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="d54e3c9d-9a10-46ee-96e1-5c270ef2197d" containerName="nova-metadata-metadata" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.640003 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6b2f22c-613e-4774-b353-a90ff22bfba3" containerName="nova-manage" 
Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.640014 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec43dce1-ee53-491f-91d7-8aa70d776a67" containerName="nova-api-log" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.640025 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec43dce1-ee53-491f-91d7-8aa70d776a67" containerName="nova-api-api" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.640038 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="777c853a-fcc3-4f2a-ad78-32bd1782655a" containerName="nova-scheduler-scheduler" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.640047 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="d54e3c9d-9a10-46ee-96e1-5c270ef2197d" containerName="nova-metadata-log" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.640057 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="95aa771c-7c3f-40bc-a845-a5b27b7581bd" containerName="registry-server" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.641144 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.646876 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.647170 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.647331 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.655372 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.668987 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.679751 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.691699 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.694363 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.697066 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.698296 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.732592 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.734602 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.737633 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.765294 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.777108 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.795611 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec271161-02cd-4b97-925f-47e757c52e34-config-data\") pod \"nova-metadata-0\" (UID: \"ec271161-02cd-4b97-925f-47e757c52e34\") " pod="openstack/nova-metadata-0" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.795667 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfbfv\" (UniqueName: \"kubernetes.io/projected/ec271161-02cd-4b97-925f-47e757c52e34-kube-api-access-rfbfv\") pod \"nova-metadata-0\" (UID: \"ec271161-02cd-4b97-925f-47e757c52e34\") " pod="openstack/nova-metadata-0" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.795690 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec271161-02cd-4b97-925f-47e757c52e34-logs\") pod \"nova-metadata-0\" (UID: \"ec271161-02cd-4b97-925f-47e757c52e34\") " pod="openstack/nova-metadata-0" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.795727 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/280f47cf-d6db-46e6-a9cf-6c2321f80d5d-config-data\") pod \"nova-api-0\" (UID: \"280f47cf-d6db-46e6-a9cf-6c2321f80d5d\") " 
pod="openstack/nova-api-0" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.795763 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec271161-02cd-4b97-925f-47e757c52e34-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ec271161-02cd-4b97-925f-47e757c52e34\") " pod="openstack/nova-metadata-0" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.795783 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/280f47cf-d6db-46e6-a9cf-6c2321f80d5d-logs\") pod \"nova-api-0\" (UID: \"280f47cf-d6db-46e6-a9cf-6c2321f80d5d\") " pod="openstack/nova-api-0" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.795800 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdb85\" (UniqueName: \"kubernetes.io/projected/280f47cf-d6db-46e6-a9cf-6c2321f80d5d-kube-api-access-wdb85\") pod \"nova-api-0\" (UID: \"280f47cf-d6db-46e6-a9cf-6c2321f80d5d\") " pod="openstack/nova-api-0" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.795853 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/280f47cf-d6db-46e6-a9cf-6c2321f80d5d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"280f47cf-d6db-46e6-a9cf-6c2321f80d5d\") " pod="openstack/nova-api-0" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.795877 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/280f47cf-d6db-46e6-a9cf-6c2321f80d5d-public-tls-certs\") pod \"nova-api-0\" (UID: \"280f47cf-d6db-46e6-a9cf-6c2321f80d5d\") " pod="openstack/nova-api-0" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.795923 4672 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/280f47cf-d6db-46e6-a9cf-6c2321f80d5d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"280f47cf-d6db-46e6-a9cf-6c2321f80d5d\") " pod="openstack/nova-api-0" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.795962 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec271161-02cd-4b97-925f-47e757c52e34-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ec271161-02cd-4b97-925f-47e757c52e34\") " pod="openstack/nova-metadata-0" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.898390 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec271161-02cd-4b97-925f-47e757c52e34-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ec271161-02cd-4b97-925f-47e757c52e34\") " pod="openstack/nova-metadata-0" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.898480 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/280f47cf-d6db-46e6-a9cf-6c2321f80d5d-logs\") pod \"nova-api-0\" (UID: \"280f47cf-d6db-46e6-a9cf-6c2321f80d5d\") " pod="openstack/nova-api-0" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.898543 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdb85\" (UniqueName: \"kubernetes.io/projected/280f47cf-d6db-46e6-a9cf-6c2321f80d5d-kube-api-access-wdb85\") pod \"nova-api-0\" (UID: \"280f47cf-d6db-46e6-a9cf-6c2321f80d5d\") " pod="openstack/nova-api-0" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.898597 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r7km\" (UniqueName: 
\"kubernetes.io/projected/7bb6bc7b-85e2-4379-a509-edf2d9424951-kube-api-access-5r7km\") pod \"nova-scheduler-0\" (UID: \"7bb6bc7b-85e2-4379-a509-edf2d9424951\") " pod="openstack/nova-scheduler-0" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.898659 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bb6bc7b-85e2-4379-a509-edf2d9424951-config-data\") pod \"nova-scheduler-0\" (UID: \"7bb6bc7b-85e2-4379-a509-edf2d9424951\") " pod="openstack/nova-scheduler-0" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.898717 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/280f47cf-d6db-46e6-a9cf-6c2321f80d5d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"280f47cf-d6db-46e6-a9cf-6c2321f80d5d\") " pod="openstack/nova-api-0" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.898804 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/280f47cf-d6db-46e6-a9cf-6c2321f80d5d-public-tls-certs\") pod \"nova-api-0\" (UID: \"280f47cf-d6db-46e6-a9cf-6c2321f80d5d\") " pod="openstack/nova-api-0" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.898882 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bb6bc7b-85e2-4379-a509-edf2d9424951-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7bb6bc7b-85e2-4379-a509-edf2d9424951\") " pod="openstack/nova-scheduler-0" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.898949 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/280f47cf-d6db-46e6-a9cf-6c2321f80d5d-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"280f47cf-d6db-46e6-a9cf-6c2321f80d5d\") " pod="openstack/nova-api-0" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.899011 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/280f47cf-d6db-46e6-a9cf-6c2321f80d5d-logs\") pod \"nova-api-0\" (UID: \"280f47cf-d6db-46e6-a9cf-6c2321f80d5d\") " pod="openstack/nova-api-0" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.899017 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec271161-02cd-4b97-925f-47e757c52e34-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ec271161-02cd-4b97-925f-47e757c52e34\") " pod="openstack/nova-metadata-0" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.899155 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec271161-02cd-4b97-925f-47e757c52e34-config-data\") pod \"nova-metadata-0\" (UID: \"ec271161-02cd-4b97-925f-47e757c52e34\") " pod="openstack/nova-metadata-0" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.899237 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfbfv\" (UniqueName: \"kubernetes.io/projected/ec271161-02cd-4b97-925f-47e757c52e34-kube-api-access-rfbfv\") pod \"nova-metadata-0\" (UID: \"ec271161-02cd-4b97-925f-47e757c52e34\") " pod="openstack/nova-metadata-0" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.899269 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec271161-02cd-4b97-925f-47e757c52e34-logs\") pod \"nova-metadata-0\" (UID: \"ec271161-02cd-4b97-925f-47e757c52e34\") " pod="openstack/nova-metadata-0" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.899381 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/280f47cf-d6db-46e6-a9cf-6c2321f80d5d-config-data\") pod \"nova-api-0\" (UID: \"280f47cf-d6db-46e6-a9cf-6c2321f80d5d\") " pod="openstack/nova-api-0" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.899778 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec271161-02cd-4b97-925f-47e757c52e34-logs\") pod \"nova-metadata-0\" (UID: \"ec271161-02cd-4b97-925f-47e757c52e34\") " pod="openstack/nova-metadata-0" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.904671 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/280f47cf-d6db-46e6-a9cf-6c2321f80d5d-public-tls-certs\") pod \"nova-api-0\" (UID: \"280f47cf-d6db-46e6-a9cf-6c2321f80d5d\") " pod="openstack/nova-api-0" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.904766 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec271161-02cd-4b97-925f-47e757c52e34-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ec271161-02cd-4b97-925f-47e757c52e34\") " pod="openstack/nova-metadata-0" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.904796 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/280f47cf-d6db-46e6-a9cf-6c2321f80d5d-config-data\") pod \"nova-api-0\" (UID: \"280f47cf-d6db-46e6-a9cf-6c2321f80d5d\") " pod="openstack/nova-api-0" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.905997 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec271161-02cd-4b97-925f-47e757c52e34-config-data\") pod \"nova-metadata-0\" (UID: \"ec271161-02cd-4b97-925f-47e757c52e34\") " pod="openstack/nova-metadata-0" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.908298 
4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec271161-02cd-4b97-925f-47e757c52e34-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ec271161-02cd-4b97-925f-47e757c52e34\") " pod="openstack/nova-metadata-0" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.911698 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/280f47cf-d6db-46e6-a9cf-6c2321f80d5d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"280f47cf-d6db-46e6-a9cf-6c2321f80d5d\") " pod="openstack/nova-api-0" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.913750 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/280f47cf-d6db-46e6-a9cf-6c2321f80d5d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"280f47cf-d6db-46e6-a9cf-6c2321f80d5d\") " pod="openstack/nova-api-0" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.921895 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdb85\" (UniqueName: \"kubernetes.io/projected/280f47cf-d6db-46e6-a9cf-6c2321f80d5d-kube-api-access-wdb85\") pod \"nova-api-0\" (UID: \"280f47cf-d6db-46e6-a9cf-6c2321f80d5d\") " pod="openstack/nova-api-0" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.922498 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfbfv\" (UniqueName: \"kubernetes.io/projected/ec271161-02cd-4b97-925f-47e757c52e34-kube-api-access-rfbfv\") pod \"nova-metadata-0\" (UID: \"ec271161-02cd-4b97-925f-47e757c52e34\") " pod="openstack/nova-metadata-0" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.937201 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rppp4" podUID="2cd3485d-3d10-4f38-bf4c-05368c7cd881" containerName="registry-server" probeResult="failure" 
output=< Feb 17 16:26:47 crc kubenswrapper[4672]: timeout: failed to connect service ":50051" within 1s Feb 17 16:26:47 crc kubenswrapper[4672]: > Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.959061 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="777c853a-fcc3-4f2a-ad78-32bd1782655a" path="/var/lib/kubelet/pods/777c853a-fcc3-4f2a-ad78-32bd1782655a/volumes" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.959857 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d54e3c9d-9a10-46ee-96e1-5c270ef2197d" path="/var/lib/kubelet/pods/d54e3c9d-9a10-46ee-96e1-5c270ef2197d/volumes" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.960702 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec43dce1-ee53-491f-91d7-8aa70d776a67" path="/var/lib/kubelet/pods/ec43dce1-ee53-491f-91d7-8aa70d776a67/volumes" Feb 17 16:26:47 crc kubenswrapper[4672]: I0217 16:26:47.961964 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:26:48 crc kubenswrapper[4672]: I0217 16:26:48.010781 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r7km\" (UniqueName: \"kubernetes.io/projected/7bb6bc7b-85e2-4379-a509-edf2d9424951-kube-api-access-5r7km\") pod \"nova-scheduler-0\" (UID: \"7bb6bc7b-85e2-4379-a509-edf2d9424951\") " pod="openstack/nova-scheduler-0" Feb 17 16:26:48 crc kubenswrapper[4672]: I0217 16:26:48.011073 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bb6bc7b-85e2-4379-a509-edf2d9424951-config-data\") pod \"nova-scheduler-0\" (UID: \"7bb6bc7b-85e2-4379-a509-edf2d9424951\") " pod="openstack/nova-scheduler-0" Feb 17 16:26:48 crc kubenswrapper[4672]: I0217 16:26:48.011236 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bb6bc7b-85e2-4379-a509-edf2d9424951-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7bb6bc7b-85e2-4379-a509-edf2d9424951\") " pod="openstack/nova-scheduler-0" Feb 17 16:26:48 crc kubenswrapper[4672]: I0217 16:26:48.021381 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bb6bc7b-85e2-4379-a509-edf2d9424951-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7bb6bc7b-85e2-4379-a509-edf2d9424951\") " pod="openstack/nova-scheduler-0" Feb 17 16:26:48 crc kubenswrapper[4672]: I0217 16:26:48.021546 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bb6bc7b-85e2-4379-a509-edf2d9424951-config-data\") pod \"nova-scheduler-0\" (UID: \"7bb6bc7b-85e2-4379-a509-edf2d9424951\") " pod="openstack/nova-scheduler-0" Feb 17 16:26:48 crc kubenswrapper[4672]: I0217 16:26:48.026154 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:26:48 crc kubenswrapper[4672]: I0217 16:26:48.032268 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r7km\" (UniqueName: \"kubernetes.io/projected/7bb6bc7b-85e2-4379-a509-edf2d9424951-kube-api-access-5r7km\") pod \"nova-scheduler-0\" (UID: \"7bb6bc7b-85e2-4379-a509-edf2d9424951\") " pod="openstack/nova-scheduler-0" Feb 17 16:26:48 crc kubenswrapper[4672]: I0217 16:26:48.049990 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:26:48 crc kubenswrapper[4672]: I0217 16:26:48.532526 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:26:48 crc kubenswrapper[4672]: I0217 16:26:48.541122 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:26:48 crc kubenswrapper[4672]: I0217 16:26:48.675715 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:26:48 crc kubenswrapper[4672]: W0217 16:26:48.688752 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec271161_02cd_4b97_925f_47e757c52e34.slice/crio-fa8e606c39e703672b322b722bc2d1799091ef9045ddeef13aff92485fa57ac5 WatchSource:0}: Error finding container fa8e606c39e703672b322b722bc2d1799091ef9045ddeef13aff92485fa57ac5: Status 404 returned error can't find the container with id fa8e606c39e703672b322b722bc2d1799091ef9045ddeef13aff92485fa57ac5 Feb 17 16:26:49 crc kubenswrapper[4672]: I0217 16:26:49.549353 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"280f47cf-d6db-46e6-a9cf-6c2321f80d5d","Type":"ContainerStarted","Data":"74d254b0afbb06c6cd2e6fea6192db77ccddedb9db8723cbab4d55ff909376bb"} Feb 17 16:26:49 crc kubenswrapper[4672]: I0217 16:26:49.549641 4672 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/nova-api-0" event={"ID":"280f47cf-d6db-46e6-a9cf-6c2321f80d5d","Type":"ContainerStarted","Data":"fb1b5eea3163c8140829fcc1dc6e5310e4e46cd26fd94e17317fd5ec717f8fee"} Feb 17 16:26:49 crc kubenswrapper[4672]: I0217 16:26:49.549652 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"280f47cf-d6db-46e6-a9cf-6c2321f80d5d","Type":"ContainerStarted","Data":"c10f2bdd3366b5d07ae7118b5ff29f4f9a8c3a3e28602269db90722cbab63935"} Feb 17 16:26:49 crc kubenswrapper[4672]: I0217 16:26:49.553657 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7bb6bc7b-85e2-4379-a509-edf2d9424951","Type":"ContainerStarted","Data":"2ff1cdfa006ba939a476618f8e6e7ebb8f1617dade2fb093cac514fea3e0f9f6"} Feb 17 16:26:49 crc kubenswrapper[4672]: I0217 16:26:49.553683 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7bb6bc7b-85e2-4379-a509-edf2d9424951","Type":"ContainerStarted","Data":"34c1205a8222f83a7a6cbe06a7ab6816c25b95e1ea1604dc7cc6590519f195bd"} Feb 17 16:26:49 crc kubenswrapper[4672]: I0217 16:26:49.557629 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ec271161-02cd-4b97-925f-47e757c52e34","Type":"ContainerStarted","Data":"0a276b6e9f108dae949922116161db8f2faa68b6c0157b0e9f7a00d9e4a897e4"} Feb 17 16:26:49 crc kubenswrapper[4672]: I0217 16:26:49.557653 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ec271161-02cd-4b97-925f-47e757c52e34","Type":"ContainerStarted","Data":"ffb55c6847dc53ca5926778d6da7f3c79318ad48ab74c48cbe4109c3d4845c3e"} Feb 17 16:26:49 crc kubenswrapper[4672]: I0217 16:26:49.557662 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ec271161-02cd-4b97-925f-47e757c52e34","Type":"ContainerStarted","Data":"fa8e606c39e703672b322b722bc2d1799091ef9045ddeef13aff92485fa57ac5"} Feb 17 
16:26:49 crc kubenswrapper[4672]: I0217 16:26:49.583344 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.583323796 podStartE2EDuration="2.583323796s" podCreationTimestamp="2026-02-17 16:26:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:26:49.583326387 +0000 UTC m=+1418.337415159" watchObservedRunningTime="2026-02-17 16:26:49.583323796 +0000 UTC m=+1418.337412528" Feb 17 16:26:49 crc kubenswrapper[4672]: I0217 16:26:49.600703 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.600688334 podStartE2EDuration="2.600688334s" podCreationTimestamp="2026-02-17 16:26:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:26:49.596299519 +0000 UTC m=+1418.350388281" watchObservedRunningTime="2026-02-17 16:26:49.600688334 +0000 UTC m=+1418.354777066" Feb 17 16:26:49 crc kubenswrapper[4672]: I0217 16:26:49.623632 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.623606188 podStartE2EDuration="2.623606188s" podCreationTimestamp="2026-02-17 16:26:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:26:49.61606558 +0000 UTC m=+1418.370154352" watchObservedRunningTime="2026-02-17 16:26:49.623606188 +0000 UTC m=+1418.377694950" Feb 17 16:26:53 crc kubenswrapper[4672]: I0217 16:26:53.027463 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 16:26:53 crc kubenswrapper[4672]: I0217 16:26:53.028044 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 
16:26:53 crc kubenswrapper[4672]: I0217 16:26:53.050781 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 17 16:26:54 crc kubenswrapper[4672]: I0217 16:26:54.817324 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 17 16:26:56 crc kubenswrapper[4672]: I0217 16:26:56.976485 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rppp4" Feb 17 16:26:57 crc kubenswrapper[4672]: I0217 16:26:57.064345 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rppp4" Feb 17 16:26:57 crc kubenswrapper[4672]: I0217 16:26:57.753706 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rppp4"] Feb 17 16:26:57 crc kubenswrapper[4672]: I0217 16:26:57.980719 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 16:26:57 crc kubenswrapper[4672]: I0217 16:26:57.981720 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 16:26:58 crc kubenswrapper[4672]: I0217 16:26:58.034039 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 17 16:26:58 crc kubenswrapper[4672]: I0217 16:26:58.034103 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 17 16:26:58 crc kubenswrapper[4672]: I0217 16:26:58.051330 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 17 16:26:58 crc kubenswrapper[4672]: I0217 16:26:58.090322 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 17 16:26:58 crc kubenswrapper[4672]: I0217 16:26:58.662599 4672 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/redhat-operators-rppp4" podUID="2cd3485d-3d10-4f38-bf4c-05368c7cd881" containerName="registry-server" containerID="cri-o://3591fe829bc7122dc4787dce38f3291f756a461c191f5f0b753779bab931e1ee" gracePeriod=2 Feb 17 16:26:58 crc kubenswrapper[4672]: I0217 16:26:58.693924 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 17 16:26:58 crc kubenswrapper[4672]: I0217 16:26:58.978670 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="280f47cf-d6db-46e6-a9cf-6c2321f80d5d" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.236:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 16:26:58 crc kubenswrapper[4672]: I0217 16:26:58.978684 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="280f47cf-d6db-46e6-a9cf-6c2321f80d5d" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.236:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 16:26:59 crc kubenswrapper[4672]: I0217 16:26:59.047628 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ec271161-02cd-4b97-925f-47e757c52e34" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.237:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 16:26:59 crc kubenswrapper[4672]: I0217 16:26:59.047681 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ec271161-02cd-4b97-925f-47e757c52e34" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.237:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 16:26:59 crc kubenswrapper[4672]: I0217 16:26:59.166893 4672 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rppp4" Feb 17 16:26:59 crc kubenswrapper[4672]: I0217 16:26:59.364162 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cd3485d-3d10-4f38-bf4c-05368c7cd881-catalog-content\") pod \"2cd3485d-3d10-4f38-bf4c-05368c7cd881\" (UID: \"2cd3485d-3d10-4f38-bf4c-05368c7cd881\") " Feb 17 16:26:59 crc kubenswrapper[4672]: I0217 16:26:59.364391 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cd3485d-3d10-4f38-bf4c-05368c7cd881-utilities\") pod \"2cd3485d-3d10-4f38-bf4c-05368c7cd881\" (UID: \"2cd3485d-3d10-4f38-bf4c-05368c7cd881\") " Feb 17 16:26:59 crc kubenswrapper[4672]: I0217 16:26:59.364430 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqfwr\" (UniqueName: \"kubernetes.io/projected/2cd3485d-3d10-4f38-bf4c-05368c7cd881-kube-api-access-fqfwr\") pod \"2cd3485d-3d10-4f38-bf4c-05368c7cd881\" (UID: \"2cd3485d-3d10-4f38-bf4c-05368c7cd881\") " Feb 17 16:26:59 crc kubenswrapper[4672]: I0217 16:26:59.364925 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cd3485d-3d10-4f38-bf4c-05368c7cd881-utilities" (OuterVolumeSpecName: "utilities") pod "2cd3485d-3d10-4f38-bf4c-05368c7cd881" (UID: "2cd3485d-3d10-4f38-bf4c-05368c7cd881"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:26:59 crc kubenswrapper[4672]: I0217 16:26:59.372810 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cd3485d-3d10-4f38-bf4c-05368c7cd881-kube-api-access-fqfwr" (OuterVolumeSpecName: "kube-api-access-fqfwr") pod "2cd3485d-3d10-4f38-bf4c-05368c7cd881" (UID: "2cd3485d-3d10-4f38-bf4c-05368c7cd881"). 
InnerVolumeSpecName "kube-api-access-fqfwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:26:59 crc kubenswrapper[4672]: I0217 16:26:59.467937 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cd3485d-3d10-4f38-bf4c-05368c7cd881-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:59 crc kubenswrapper[4672]: I0217 16:26:59.467981 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqfwr\" (UniqueName: \"kubernetes.io/projected/2cd3485d-3d10-4f38-bf4c-05368c7cd881-kube-api-access-fqfwr\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:59 crc kubenswrapper[4672]: I0217 16:26:59.476725 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cd3485d-3d10-4f38-bf4c-05368c7cd881-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2cd3485d-3d10-4f38-bf4c-05368c7cd881" (UID: "2cd3485d-3d10-4f38-bf4c-05368c7cd881"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:26:59 crc kubenswrapper[4672]: I0217 16:26:59.569814 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cd3485d-3d10-4f38-bf4c-05368c7cd881-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:59 crc kubenswrapper[4672]: I0217 16:26:59.678761 4672 generic.go:334] "Generic (PLEG): container finished" podID="2cd3485d-3d10-4f38-bf4c-05368c7cd881" containerID="3591fe829bc7122dc4787dce38f3291f756a461c191f5f0b753779bab931e1ee" exitCode=0 Feb 17 16:26:59 crc kubenswrapper[4672]: I0217 16:26:59.679703 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rppp4" Feb 17 16:26:59 crc kubenswrapper[4672]: I0217 16:26:59.680374 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rppp4" event={"ID":"2cd3485d-3d10-4f38-bf4c-05368c7cd881","Type":"ContainerDied","Data":"3591fe829bc7122dc4787dce38f3291f756a461c191f5f0b753779bab931e1ee"} Feb 17 16:26:59 crc kubenswrapper[4672]: I0217 16:26:59.680427 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rppp4" event={"ID":"2cd3485d-3d10-4f38-bf4c-05368c7cd881","Type":"ContainerDied","Data":"fa7015936ebcde60a444cc907980509cc6f453022923a8c3a1cefa0e2b4b6ac2"} Feb 17 16:26:59 crc kubenswrapper[4672]: I0217 16:26:59.680444 4672 scope.go:117] "RemoveContainer" containerID="3591fe829bc7122dc4787dce38f3291f756a461c191f5f0b753779bab931e1ee" Feb 17 16:26:59 crc kubenswrapper[4672]: I0217 16:26:59.727371 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rppp4"] Feb 17 16:26:59 crc kubenswrapper[4672]: I0217 16:26:59.729943 4672 scope.go:117] "RemoveContainer" containerID="e6fb0b20521ca0a8f284fad9f9c5a2f35aea9c13cc2c9c86bdcc5646b1543026" Feb 17 16:26:59 crc kubenswrapper[4672]: I0217 16:26:59.741639 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rppp4"] Feb 17 16:26:59 crc kubenswrapper[4672]: I0217 16:26:59.768689 4672 scope.go:117] "RemoveContainer" containerID="d9b1fe55ea1eb2ab5ab32963d8cbcd84460a6afab66ef976a3cad5797ff532f1" Feb 17 16:26:59 crc kubenswrapper[4672]: I0217 16:26:59.825241 4672 scope.go:117] "RemoveContainer" containerID="3591fe829bc7122dc4787dce38f3291f756a461c191f5f0b753779bab931e1ee" Feb 17 16:26:59 crc kubenswrapper[4672]: E0217 16:26:59.825776 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3591fe829bc7122dc4787dce38f3291f756a461c191f5f0b753779bab931e1ee\": container with ID starting with 3591fe829bc7122dc4787dce38f3291f756a461c191f5f0b753779bab931e1ee not found: ID does not exist" containerID="3591fe829bc7122dc4787dce38f3291f756a461c191f5f0b753779bab931e1ee" Feb 17 16:26:59 crc kubenswrapper[4672]: I0217 16:26:59.825812 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3591fe829bc7122dc4787dce38f3291f756a461c191f5f0b753779bab931e1ee"} err="failed to get container status \"3591fe829bc7122dc4787dce38f3291f756a461c191f5f0b753779bab931e1ee\": rpc error: code = NotFound desc = could not find container \"3591fe829bc7122dc4787dce38f3291f756a461c191f5f0b753779bab931e1ee\": container with ID starting with 3591fe829bc7122dc4787dce38f3291f756a461c191f5f0b753779bab931e1ee not found: ID does not exist" Feb 17 16:26:59 crc kubenswrapper[4672]: I0217 16:26:59.825831 4672 scope.go:117] "RemoveContainer" containerID="e6fb0b20521ca0a8f284fad9f9c5a2f35aea9c13cc2c9c86bdcc5646b1543026" Feb 17 16:26:59 crc kubenswrapper[4672]: E0217 16:26:59.826189 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6fb0b20521ca0a8f284fad9f9c5a2f35aea9c13cc2c9c86bdcc5646b1543026\": container with ID starting with e6fb0b20521ca0a8f284fad9f9c5a2f35aea9c13cc2c9c86bdcc5646b1543026 not found: ID does not exist" containerID="e6fb0b20521ca0a8f284fad9f9c5a2f35aea9c13cc2c9c86bdcc5646b1543026" Feb 17 16:26:59 crc kubenswrapper[4672]: I0217 16:26:59.826214 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6fb0b20521ca0a8f284fad9f9c5a2f35aea9c13cc2c9c86bdcc5646b1543026"} err="failed to get container status \"e6fb0b20521ca0a8f284fad9f9c5a2f35aea9c13cc2c9c86bdcc5646b1543026\": rpc error: code = NotFound desc = could not find container \"e6fb0b20521ca0a8f284fad9f9c5a2f35aea9c13cc2c9c86bdcc5646b1543026\": container with ID 
starting with e6fb0b20521ca0a8f284fad9f9c5a2f35aea9c13cc2c9c86bdcc5646b1543026 not found: ID does not exist" Feb 17 16:26:59 crc kubenswrapper[4672]: I0217 16:26:59.826230 4672 scope.go:117] "RemoveContainer" containerID="d9b1fe55ea1eb2ab5ab32963d8cbcd84460a6afab66ef976a3cad5797ff532f1" Feb 17 16:26:59 crc kubenswrapper[4672]: E0217 16:26:59.826526 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9b1fe55ea1eb2ab5ab32963d8cbcd84460a6afab66ef976a3cad5797ff532f1\": container with ID starting with d9b1fe55ea1eb2ab5ab32963d8cbcd84460a6afab66ef976a3cad5797ff532f1 not found: ID does not exist" containerID="d9b1fe55ea1eb2ab5ab32963d8cbcd84460a6afab66ef976a3cad5797ff532f1" Feb 17 16:26:59 crc kubenswrapper[4672]: I0217 16:26:59.826566 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9b1fe55ea1eb2ab5ab32963d8cbcd84460a6afab66ef976a3cad5797ff532f1"} err="failed to get container status \"d9b1fe55ea1eb2ab5ab32963d8cbcd84460a6afab66ef976a3cad5797ff532f1\": rpc error: code = NotFound desc = could not find container \"d9b1fe55ea1eb2ab5ab32963d8cbcd84460a6afab66ef976a3cad5797ff532f1\": container with ID starting with d9b1fe55ea1eb2ab5ab32963d8cbcd84460a6afab66ef976a3cad5797ff532f1 not found: ID does not exist" Feb 17 16:26:59 crc kubenswrapper[4672]: I0217 16:26:59.956986 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cd3485d-3d10-4f38-bf4c-05368c7cd881" path="/var/lib/kubelet/pods/2cd3485d-3d10-4f38-bf4c-05368c7cd881/volumes" Feb 17 16:27:07 crc kubenswrapper[4672]: I0217 16:27:07.971280 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 17 16:27:07 crc kubenswrapper[4672]: I0217 16:27:07.972120 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 17 16:27:07 crc kubenswrapper[4672]: I0217 16:27:07.972956 4672 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 16:27:07 crc kubenswrapper[4672]: I0217 16:27:07.973791 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 16:27:07 crc kubenswrapper[4672]: I0217 16:27:07.979362 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 17 16:27:07 crc kubenswrapper[4672]: I0217 16:27:07.981862 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 17 16:27:08 crc kubenswrapper[4672]: I0217 16:27:08.041943 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 17 16:27:08 crc kubenswrapper[4672]: I0217 16:27:08.071560 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 17 16:27:08 crc kubenswrapper[4672]: I0217 16:27:08.081325 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 17 16:27:08 crc kubenswrapper[4672]: I0217 16:27:08.806146 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 17 16:27:19 crc kubenswrapper[4672]: I0217 16:27:19.857203 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-sync-scpk5"] Feb 17 16:27:19 crc kubenswrapper[4672]: I0217 16:27:19.866852 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-sync-scpk5"] Feb 17 16:27:19 crc kubenswrapper[4672]: I0217 16:27:19.928734 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-sync-qrhj8"] Feb 17 16:27:19 crc kubenswrapper[4672]: E0217 16:27:19.929141 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cd3485d-3d10-4f38-bf4c-05368c7cd881" containerName="extract-utilities" Feb 17 16:27:19 crc kubenswrapper[4672]: I0217 
16:27:19.929158 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cd3485d-3d10-4f38-bf4c-05368c7cd881" containerName="extract-utilities" Feb 17 16:27:19 crc kubenswrapper[4672]: E0217 16:27:19.929175 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cd3485d-3d10-4f38-bf4c-05368c7cd881" containerName="registry-server" Feb 17 16:27:19 crc kubenswrapper[4672]: I0217 16:27:19.929181 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cd3485d-3d10-4f38-bf4c-05368c7cd881" containerName="registry-server" Feb 17 16:27:19 crc kubenswrapper[4672]: E0217 16:27:19.929196 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cd3485d-3d10-4f38-bf4c-05368c7cd881" containerName="extract-content" Feb 17 16:27:19 crc kubenswrapper[4672]: I0217 16:27:19.929202 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cd3485d-3d10-4f38-bf4c-05368c7cd881" containerName="extract-content" Feb 17 16:27:19 crc kubenswrapper[4672]: I0217 16:27:19.929392 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cd3485d-3d10-4f38-bf4c-05368c7cd881" containerName="registry-server" Feb 17 16:27:19 crc kubenswrapper[4672]: I0217 16:27:19.930085 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-qrhj8" Feb 17 16:27:19 crc kubenswrapper[4672]: I0217 16:27:19.932378 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 17 16:27:19 crc kubenswrapper[4672]: I0217 16:27:19.956622 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2" path="/var/lib/kubelet/pods/fb223fa0-5bed-4291-bc2d-3e1f6c90e6f2/volumes" Feb 17 16:27:19 crc kubenswrapper[4672]: I0217 16:27:19.964723 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-qrhj8"] Feb 17 16:27:20 crc kubenswrapper[4672]: I0217 16:27:20.051254 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc5471f5-2491-4841-be45-09c8f14b35c0-scripts\") pod \"cloudkitty-db-sync-qrhj8\" (UID: \"dc5471f5-2491-4841-be45-09c8f14b35c0\") " pod="openstack/cloudkitty-db-sync-qrhj8" Feb 17 16:27:20 crc kubenswrapper[4672]: I0217 16:27:20.051333 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc5471f5-2491-4841-be45-09c8f14b35c0-config-data\") pod \"cloudkitty-db-sync-qrhj8\" (UID: \"dc5471f5-2491-4841-be45-09c8f14b35c0\") " pod="openstack/cloudkitty-db-sync-qrhj8" Feb 17 16:27:20 crc kubenswrapper[4672]: I0217 16:27:20.051471 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/dc5471f5-2491-4841-be45-09c8f14b35c0-certs\") pod \"cloudkitty-db-sync-qrhj8\" (UID: \"dc5471f5-2491-4841-be45-09c8f14b35c0\") " pod="openstack/cloudkitty-db-sync-qrhj8" Feb 17 16:27:20 crc kubenswrapper[4672]: I0217 16:27:20.051636 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq9ps\" (UniqueName: 
\"kubernetes.io/projected/dc5471f5-2491-4841-be45-09c8f14b35c0-kube-api-access-nq9ps\") pod \"cloudkitty-db-sync-qrhj8\" (UID: \"dc5471f5-2491-4841-be45-09c8f14b35c0\") " pod="openstack/cloudkitty-db-sync-qrhj8" Feb 17 16:27:20 crc kubenswrapper[4672]: I0217 16:27:20.051960 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc5471f5-2491-4841-be45-09c8f14b35c0-combined-ca-bundle\") pod \"cloudkitty-db-sync-qrhj8\" (UID: \"dc5471f5-2491-4841-be45-09c8f14b35c0\") " pod="openstack/cloudkitty-db-sync-qrhj8" Feb 17 16:27:20 crc kubenswrapper[4672]: I0217 16:27:20.154126 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc5471f5-2491-4841-be45-09c8f14b35c0-combined-ca-bundle\") pod \"cloudkitty-db-sync-qrhj8\" (UID: \"dc5471f5-2491-4841-be45-09c8f14b35c0\") " pod="openstack/cloudkitty-db-sync-qrhj8" Feb 17 16:27:20 crc kubenswrapper[4672]: I0217 16:27:20.154246 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc5471f5-2491-4841-be45-09c8f14b35c0-scripts\") pod \"cloudkitty-db-sync-qrhj8\" (UID: \"dc5471f5-2491-4841-be45-09c8f14b35c0\") " pod="openstack/cloudkitty-db-sync-qrhj8" Feb 17 16:27:20 crc kubenswrapper[4672]: I0217 16:27:20.154324 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc5471f5-2491-4841-be45-09c8f14b35c0-config-data\") pod \"cloudkitty-db-sync-qrhj8\" (UID: \"dc5471f5-2491-4841-be45-09c8f14b35c0\") " pod="openstack/cloudkitty-db-sync-qrhj8" Feb 17 16:27:20 crc kubenswrapper[4672]: I0217 16:27:20.154506 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/dc5471f5-2491-4841-be45-09c8f14b35c0-certs\") pod 
\"cloudkitty-db-sync-qrhj8\" (UID: \"dc5471f5-2491-4841-be45-09c8f14b35c0\") " pod="openstack/cloudkitty-db-sync-qrhj8" Feb 17 16:27:20 crc kubenswrapper[4672]: I0217 16:27:20.155795 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq9ps\" (UniqueName: \"kubernetes.io/projected/dc5471f5-2491-4841-be45-09c8f14b35c0-kube-api-access-nq9ps\") pod \"cloudkitty-db-sync-qrhj8\" (UID: \"dc5471f5-2491-4841-be45-09c8f14b35c0\") " pod="openstack/cloudkitty-db-sync-qrhj8" Feb 17 16:27:20 crc kubenswrapper[4672]: I0217 16:27:20.160787 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc5471f5-2491-4841-be45-09c8f14b35c0-combined-ca-bundle\") pod \"cloudkitty-db-sync-qrhj8\" (UID: \"dc5471f5-2491-4841-be45-09c8f14b35c0\") " pod="openstack/cloudkitty-db-sync-qrhj8" Feb 17 16:27:20 crc kubenswrapper[4672]: I0217 16:27:20.160928 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/dc5471f5-2491-4841-be45-09c8f14b35c0-certs\") pod \"cloudkitty-db-sync-qrhj8\" (UID: \"dc5471f5-2491-4841-be45-09c8f14b35c0\") " pod="openstack/cloudkitty-db-sync-qrhj8" Feb 17 16:27:20 crc kubenswrapper[4672]: I0217 16:27:20.161265 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc5471f5-2491-4841-be45-09c8f14b35c0-scripts\") pod \"cloudkitty-db-sync-qrhj8\" (UID: \"dc5471f5-2491-4841-be45-09c8f14b35c0\") " pod="openstack/cloudkitty-db-sync-qrhj8" Feb 17 16:27:20 crc kubenswrapper[4672]: I0217 16:27:20.177906 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc5471f5-2491-4841-be45-09c8f14b35c0-config-data\") pod \"cloudkitty-db-sync-qrhj8\" (UID: \"dc5471f5-2491-4841-be45-09c8f14b35c0\") " pod="openstack/cloudkitty-db-sync-qrhj8" Feb 17 16:27:20 crc 
kubenswrapper[4672]: I0217 16:27:20.184490 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq9ps\" (UniqueName: \"kubernetes.io/projected/dc5471f5-2491-4841-be45-09c8f14b35c0-kube-api-access-nq9ps\") pod \"cloudkitty-db-sync-qrhj8\" (UID: \"dc5471f5-2491-4841-be45-09c8f14b35c0\") " pod="openstack/cloudkitty-db-sync-qrhj8" Feb 17 16:27:20 crc kubenswrapper[4672]: I0217 16:27:20.250052 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-qrhj8" Feb 17 16:27:20 crc kubenswrapper[4672]: I0217 16:27:20.763100 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-qrhj8"] Feb 17 16:27:20 crc kubenswrapper[4672]: E0217 16:27:20.920501 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:27:20 crc kubenswrapper[4672]: E0217 16:27:20.920619 4672 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:27:20 crc kubenswrapper[4672]: E0217 16:27:20.920928 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nq9ps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:n
il,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-qrhj8_openstack(dc5471f5-2491-4841-be45-09c8f14b35c0): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:27:20 crc kubenswrapper[4672]: E0217 16:27:20.922146 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:27:20 crc kubenswrapper[4672]: I0217 16:27:20.929908 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-qrhj8" event={"ID":"dc5471f5-2491-4841-be45-09c8f14b35c0","Type":"ContainerStarted","Data":"b889fc2834d106ffccb518e51ac42019ec395464f21a4bb70326efe8966ac907"}
Feb 17 16:27:20 crc kubenswrapper[4672]: E0217 16:27:20.937524 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:27:21 crc kubenswrapper[4672]: I0217 16:27:21.524142 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 17 16:27:21 crc kubenswrapper[4672]: I0217 16:27:21.777499 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:27:21 crc kubenswrapper[4672]: I0217 16:27:21.778312 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="89a3e9fb-fe7e-40db-9a8d-d0654e17d835" containerName="ceilometer-central-agent" containerID="cri-o://c1d29ee5922e7df14b712649cec64d858d8e7f7322d72afdd3eb3afaa56fbeee" gracePeriod=30
Feb 17 16:27:21 crc kubenswrapper[4672]: I0217 16:27:21.778359 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="89a3e9fb-fe7e-40db-9a8d-d0654e17d835" containerName="sg-core" containerID="cri-o://72ba7ba089b17af40885aea63b4ae50dd243b4633417c2796d5224769c5fde24" gracePeriod=30
Feb 17 16:27:21 crc kubenswrapper[4672]: I0217 16:27:21.778995 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="89a3e9fb-fe7e-40db-9a8d-d0654e17d835" containerName="proxy-httpd" containerID="cri-o://208900d77f16c9655f477f14fdc99d4ebabd37d58f006eb99a0f70204c4002c0" gracePeriod=30
Feb 17 16:27:21 crc kubenswrapper[4672]: I0217 16:27:21.779143 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="89a3e9fb-fe7e-40db-9a8d-d0654e17d835" containerName="ceilometer-notification-agent" containerID="cri-o://44ccec02fda8ea0a6bd0c733c15471c90c4dd3aee30d603d52ddf8c42a5baf00" gracePeriod=30
Feb 17 16:27:21 crc kubenswrapper[4672]: I0217 16:27:21.943224 4672 generic.go:334] "Generic (PLEG): container finished" podID="89a3e9fb-fe7e-40db-9a8d-d0654e17d835" containerID="72ba7ba089b17af40885aea63b4ae50dd243b4633417c2796d5224769c5fde24" exitCode=2
Feb 17 16:27:21 crc kubenswrapper[4672]: I0217 16:27:21.944346 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89a3e9fb-fe7e-40db-9a8d-d0654e17d835","Type":"ContainerDied","Data":"72ba7ba089b17af40885aea63b4ae50dd243b4633417c2796d5224769c5fde24"}
Feb 17 16:27:21 crc kubenswrapper[4672]: E0217 16:27:21.945277 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:27:22 crc kubenswrapper[4672]: I0217 16:27:22.597780 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 17 16:27:22 crc kubenswrapper[4672]: I0217 16:27:22.971949 4672 generic.go:334] "Generic (PLEG): container finished" podID="89a3e9fb-fe7e-40db-9a8d-d0654e17d835" containerID="208900d77f16c9655f477f14fdc99d4ebabd37d58f006eb99a0f70204c4002c0" exitCode=0
Feb 17 16:27:22 crc kubenswrapper[4672]: I0217 16:27:22.971980 4672 generic.go:334] "Generic (PLEG): container finished" podID="89a3e9fb-fe7e-40db-9a8d-d0654e17d835" containerID="c1d29ee5922e7df14b712649cec64d858d8e7f7322d72afdd3eb3afaa56fbeee" exitCode=0
Feb 17 16:27:22 crc kubenswrapper[4672]: I0217 16:27:22.972001 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89a3e9fb-fe7e-40db-9a8d-d0654e17d835","Type":"ContainerDied","Data":"208900d77f16c9655f477f14fdc99d4ebabd37d58f006eb99a0f70204c4002c0"}
Feb 17 16:27:22 crc kubenswrapper[4672]: I0217 16:27:22.972028 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89a3e9fb-fe7e-40db-9a8d-d0654e17d835","Type":"ContainerDied","Data":"c1d29ee5922e7df14b712649cec64d858d8e7f7322d72afdd3eb3afaa56fbeee"}
Feb 17 16:27:24 crc kubenswrapper[4672]: I0217 16:27:24.802194 4672 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="89a3e9fb-fe7e-40db-9a8d-d0654e17d835" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.231:3000/\": dial tcp 10.217.0.231:3000: connect: connection refused"
Feb 17 16:27:25 crc kubenswrapper[4672]: I0217 16:27:25.365203 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="3068e639-1b58-4971-bf3e-c321ff88289b" containerName="rabbitmq" containerID="cri-o://a1e6f4fc864ae2ff390bb89ecd4ecc97ef1e8e578421ecf2fb2f1557f6a73ff6" gracePeriod=604797
Feb 17 16:27:25 crc kubenswrapper[4672]: E0217 16:27:25.527781 4672 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89a3e9fb_fe7e_40db_9a8d_d0654e17d835.slice/crio-conmon-44ccec02fda8ea0a6bd0c733c15471c90c4dd3aee30d603d52ddf8c42a5baf00.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89a3e9fb_fe7e_40db_9a8d_d0654e17d835.slice/crio-44ccec02fda8ea0a6bd0c733c15471c90c4dd3aee30d603d52ddf8c42a5baf00.scope\": RecentStats: unable to find data in memory cache]"
Feb 17 16:27:25 crc kubenswrapper[4672]: I0217 16:27:25.934988 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 16:27:25 crc kubenswrapper[4672]: I0217 16:27:25.996281 4672 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="3068e639-1b58-4971-bf3e-c321ff88289b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: connect: connection refused"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.001598 4672 generic.go:334] "Generic (PLEG): container finished" podID="89a3e9fb-fe7e-40db-9a8d-d0654e17d835" containerID="44ccec02fda8ea0a6bd0c733c15471c90c4dd3aee30d603d52ddf8c42a5baf00" exitCode=0
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.001643 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89a3e9fb-fe7e-40db-9a8d-d0654e17d835","Type":"ContainerDied","Data":"44ccec02fda8ea0a6bd0c733c15471c90c4dd3aee30d603d52ddf8c42a5baf00"}
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.001675 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89a3e9fb-fe7e-40db-9a8d-d0654e17d835","Type":"ContainerDied","Data":"64ef1a3f627108a8ead8cee18ac936b54e9c31a0811e43805c2f77a73b6caf7b"}
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.001694 4672 scope.go:117] "RemoveContainer" containerID="208900d77f16c9655f477f14fdc99d4ebabd37d58f006eb99a0f70204c4002c0"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.001813 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.029390 4672 scope.go:117] "RemoveContainer" containerID="72ba7ba089b17af40885aea63b4ae50dd243b4633417c2796d5224769c5fde24"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.058523 4672 scope.go:117] "RemoveContainer" containerID="44ccec02fda8ea0a6bd0c733c15471c90c4dd3aee30d603d52ddf8c42a5baf00"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.076483 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9bz4\" (UniqueName: \"kubernetes.io/projected/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-kube-api-access-b9bz4\") pod \"89a3e9fb-fe7e-40db-9a8d-d0654e17d835\" (UID: \"89a3e9fb-fe7e-40db-9a8d-d0654e17d835\") "
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.076557 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-sg-core-conf-yaml\") pod \"89a3e9fb-fe7e-40db-9a8d-d0654e17d835\" (UID: \"89a3e9fb-fe7e-40db-9a8d-d0654e17d835\") "
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.076580 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-log-httpd\") pod \"89a3e9fb-fe7e-40db-9a8d-d0654e17d835\" (UID: \"89a3e9fb-fe7e-40db-9a8d-d0654e17d835\") "
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.076625 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-ceilometer-tls-certs\") pod \"89a3e9fb-fe7e-40db-9a8d-d0654e17d835\" (UID: \"89a3e9fb-fe7e-40db-9a8d-d0654e17d835\") "
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.076657 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-config-data\") pod \"89a3e9fb-fe7e-40db-9a8d-d0654e17d835\" (UID: \"89a3e9fb-fe7e-40db-9a8d-d0654e17d835\") "
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.076677 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-combined-ca-bundle\") pod \"89a3e9fb-fe7e-40db-9a8d-d0654e17d835\" (UID: \"89a3e9fb-fe7e-40db-9a8d-d0654e17d835\") "
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.076714 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-run-httpd\") pod \"89a3e9fb-fe7e-40db-9a8d-d0654e17d835\" (UID: \"89a3e9fb-fe7e-40db-9a8d-d0654e17d835\") "
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.076836 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-scripts\") pod \"89a3e9fb-fe7e-40db-9a8d-d0654e17d835\" (UID: \"89a3e9fb-fe7e-40db-9a8d-d0654e17d835\") "
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.077849 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "89a3e9fb-fe7e-40db-9a8d-d0654e17d835" (UID: "89a3e9fb-fe7e-40db-9a8d-d0654e17d835"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.078066 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "89a3e9fb-fe7e-40db-9a8d-d0654e17d835" (UID: "89a3e9fb-fe7e-40db-9a8d-d0654e17d835"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.084719 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-kube-api-access-b9bz4" (OuterVolumeSpecName: "kube-api-access-b9bz4") pod "89a3e9fb-fe7e-40db-9a8d-d0654e17d835" (UID: "89a3e9fb-fe7e-40db-9a8d-d0654e17d835"). InnerVolumeSpecName "kube-api-access-b9bz4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.093641 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-scripts" (OuterVolumeSpecName: "scripts") pod "89a3e9fb-fe7e-40db-9a8d-d0654e17d835" (UID: "89a3e9fb-fe7e-40db-9a8d-d0654e17d835"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.132695 4672 scope.go:117] "RemoveContainer" containerID="c1d29ee5922e7df14b712649cec64d858d8e7f7322d72afdd3eb3afaa56fbeee"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.147687 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "89a3e9fb-fe7e-40db-9a8d-d0654e17d835" (UID: "89a3e9fb-fe7e-40db-9a8d-d0654e17d835"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.150648 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "89a3e9fb-fe7e-40db-9a8d-d0654e17d835" (UID: "89a3e9fb-fe7e-40db-9a8d-d0654e17d835"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.178980 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9bz4\" (UniqueName: \"kubernetes.io/projected/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-kube-api-access-b9bz4\") on node \"crc\" DevicePath \"\""
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.179013 4672 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.179023 4672 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.179033 4672 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.179045 4672 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.179055 4672 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.197714 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89a3e9fb-fe7e-40db-9a8d-d0654e17d835" (UID: "89a3e9fb-fe7e-40db-9a8d-d0654e17d835"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.210278 4672 scope.go:117] "RemoveContainer" containerID="208900d77f16c9655f477f14fdc99d4ebabd37d58f006eb99a0f70204c4002c0"
Feb 17 16:27:26 crc kubenswrapper[4672]: E0217 16:27:26.211110 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"208900d77f16c9655f477f14fdc99d4ebabd37d58f006eb99a0f70204c4002c0\": container with ID starting with 208900d77f16c9655f477f14fdc99d4ebabd37d58f006eb99a0f70204c4002c0 not found: ID does not exist" containerID="208900d77f16c9655f477f14fdc99d4ebabd37d58f006eb99a0f70204c4002c0"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.211177 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"208900d77f16c9655f477f14fdc99d4ebabd37d58f006eb99a0f70204c4002c0"} err="failed to get container status \"208900d77f16c9655f477f14fdc99d4ebabd37d58f006eb99a0f70204c4002c0\": rpc error: code = NotFound desc = could not find container \"208900d77f16c9655f477f14fdc99d4ebabd37d58f006eb99a0f70204c4002c0\": container with ID starting with 208900d77f16c9655f477f14fdc99d4ebabd37d58f006eb99a0f70204c4002c0 not found: ID does not exist"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.211204 4672 scope.go:117] "RemoveContainer" containerID="72ba7ba089b17af40885aea63b4ae50dd243b4633417c2796d5224769c5fde24"
Feb 17 16:27:26 crc kubenswrapper[4672]: E0217 16:27:26.211737 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72ba7ba089b17af40885aea63b4ae50dd243b4633417c2796d5224769c5fde24\": container with ID starting with 72ba7ba089b17af40885aea63b4ae50dd243b4633417c2796d5224769c5fde24 not found: ID does not exist" containerID="72ba7ba089b17af40885aea63b4ae50dd243b4633417c2796d5224769c5fde24"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.211842 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72ba7ba089b17af40885aea63b4ae50dd243b4633417c2796d5224769c5fde24"} err="failed to get container status \"72ba7ba089b17af40885aea63b4ae50dd243b4633417c2796d5224769c5fde24\": rpc error: code = NotFound desc = could not find container \"72ba7ba089b17af40885aea63b4ae50dd243b4633417c2796d5224769c5fde24\": container with ID starting with 72ba7ba089b17af40885aea63b4ae50dd243b4633417c2796d5224769c5fde24 not found: ID does not exist"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.211921 4672 scope.go:117] "RemoveContainer" containerID="44ccec02fda8ea0a6bd0c733c15471c90c4dd3aee30d603d52ddf8c42a5baf00"
Feb 17 16:27:26 crc kubenswrapper[4672]: E0217 16:27:26.212614 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44ccec02fda8ea0a6bd0c733c15471c90c4dd3aee30d603d52ddf8c42a5baf00\": container with ID starting with 44ccec02fda8ea0a6bd0c733c15471c90c4dd3aee30d603d52ddf8c42a5baf00 not found: ID does not exist" containerID="44ccec02fda8ea0a6bd0c733c15471c90c4dd3aee30d603d52ddf8c42a5baf00"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.212694 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44ccec02fda8ea0a6bd0c733c15471c90c4dd3aee30d603d52ddf8c42a5baf00"} err="failed to get container status \"44ccec02fda8ea0a6bd0c733c15471c90c4dd3aee30d603d52ddf8c42a5baf00\": rpc error: code = NotFound desc = could not find container \"44ccec02fda8ea0a6bd0c733c15471c90c4dd3aee30d603d52ddf8c42a5baf00\": container with ID starting with 44ccec02fda8ea0a6bd0c733c15471c90c4dd3aee30d603d52ddf8c42a5baf00 not found: ID does not exist"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.212835 4672 scope.go:117] "RemoveContainer" containerID="c1d29ee5922e7df14b712649cec64d858d8e7f7322d72afdd3eb3afaa56fbeee"
Feb 17 16:27:26 crc kubenswrapper[4672]: E0217 16:27:26.213156 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1d29ee5922e7df14b712649cec64d858d8e7f7322d72afdd3eb3afaa56fbeee\": container with ID starting with c1d29ee5922e7df14b712649cec64d858d8e7f7322d72afdd3eb3afaa56fbeee not found: ID does not exist" containerID="c1d29ee5922e7df14b712649cec64d858d8e7f7322d72afdd3eb3afaa56fbeee"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.213241 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1d29ee5922e7df14b712649cec64d858d8e7f7322d72afdd3eb3afaa56fbeee"} err="failed to get container status \"c1d29ee5922e7df14b712649cec64d858d8e7f7322d72afdd3eb3afaa56fbeee\": rpc error: code = NotFound desc = could not find container \"c1d29ee5922e7df14b712649cec64d858d8e7f7322d72afdd3eb3afaa56fbeee\": container with ID starting with c1d29ee5922e7df14b712649cec64d858d8e7f7322d72afdd3eb3afaa56fbeee not found: ID does not exist"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.232177 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-config-data" (OuterVolumeSpecName: "config-data") pod "89a3e9fb-fe7e-40db-9a8d-d0654e17d835" (UID: "89a3e9fb-fe7e-40db-9a8d-d0654e17d835"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.281133 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.281163 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89a3e9fb-fe7e-40db-9a8d-d0654e17d835-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.348391 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.360625 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.376135 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:27:26 crc kubenswrapper[4672]: E0217 16:27:26.376488 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89a3e9fb-fe7e-40db-9a8d-d0654e17d835" containerName="ceilometer-central-agent"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.376518 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a3e9fb-fe7e-40db-9a8d-d0654e17d835" containerName="ceilometer-central-agent"
Feb 17 16:27:26 crc kubenswrapper[4672]: E0217 16:27:26.376547 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89a3e9fb-fe7e-40db-9a8d-d0654e17d835" containerName="proxy-httpd"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.376554 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a3e9fb-fe7e-40db-9a8d-d0654e17d835" containerName="proxy-httpd"
Feb 17 16:27:26 crc kubenswrapper[4672]: E0217 16:27:26.376574 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89a3e9fb-fe7e-40db-9a8d-d0654e17d835" containerName="sg-core"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.376580 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a3e9fb-fe7e-40db-9a8d-d0654e17d835" containerName="sg-core"
Feb 17 16:27:26 crc kubenswrapper[4672]: E0217 16:27:26.376593 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89a3e9fb-fe7e-40db-9a8d-d0654e17d835" containerName="ceilometer-notification-agent"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.376598 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a3e9fb-fe7e-40db-9a8d-d0654e17d835" containerName="ceilometer-notification-agent"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.376772 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="89a3e9fb-fe7e-40db-9a8d-d0654e17d835" containerName="sg-core"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.376789 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="89a3e9fb-fe7e-40db-9a8d-d0654e17d835" containerName="ceilometer-notification-agent"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.376798 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="89a3e9fb-fe7e-40db-9a8d-d0654e17d835" containerName="proxy-httpd"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.376811 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="89a3e9fb-fe7e-40db-9a8d-d0654e17d835" containerName="ceilometer-central-agent"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.379433 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.389578 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.389591 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.390074 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.411308 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.485119 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e58ce9b-ddd5-42bb-8e07-08a22c8871a5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9e58ce9b-ddd5-42bb-8e07-08a22c8871a5\") " pod="openstack/ceilometer-0"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.485174 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e58ce9b-ddd5-42bb-8e07-08a22c8871a5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9e58ce9b-ddd5-42bb-8e07-08a22c8871a5\") " pod="openstack/ceilometer-0"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.485203 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e58ce9b-ddd5-42bb-8e07-08a22c8871a5-scripts\") pod \"ceilometer-0\" (UID: \"9e58ce9b-ddd5-42bb-8e07-08a22c8871a5\") " pod="openstack/ceilometer-0"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.485299 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e58ce9b-ddd5-42bb-8e07-08a22c8871a5-log-httpd\") pod \"ceilometer-0\" (UID: \"9e58ce9b-ddd5-42bb-8e07-08a22c8871a5\") " pod="openstack/ceilometer-0"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.485320 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e58ce9b-ddd5-42bb-8e07-08a22c8871a5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9e58ce9b-ddd5-42bb-8e07-08a22c8871a5\") " pod="openstack/ceilometer-0"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.485350 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e58ce9b-ddd5-42bb-8e07-08a22c8871a5-config-data\") pod \"ceilometer-0\" (UID: \"9e58ce9b-ddd5-42bb-8e07-08a22c8871a5\") " pod="openstack/ceilometer-0"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.485380 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e58ce9b-ddd5-42bb-8e07-08a22c8871a5-run-httpd\") pod \"ceilometer-0\" (UID: \"9e58ce9b-ddd5-42bb-8e07-08a22c8871a5\") " pod="openstack/ceilometer-0"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.485407 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx4bs\" (UniqueName: \"kubernetes.io/projected/9e58ce9b-ddd5-42bb-8e07-08a22c8871a5-kube-api-access-tx4bs\") pod \"ceilometer-0\" (UID: \"9e58ce9b-ddd5-42bb-8e07-08a22c8871a5\") " pod="openstack/ceilometer-0"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.587500 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e58ce9b-ddd5-42bb-8e07-08a22c8871a5-log-httpd\") pod \"ceilometer-0\" (UID: \"9e58ce9b-ddd5-42bb-8e07-08a22c8871a5\") " pod="openstack/ceilometer-0"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.587565 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e58ce9b-ddd5-42bb-8e07-08a22c8871a5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9e58ce9b-ddd5-42bb-8e07-08a22c8871a5\") " pod="openstack/ceilometer-0"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.587598 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e58ce9b-ddd5-42bb-8e07-08a22c8871a5-config-data\") pod \"ceilometer-0\" (UID: \"9e58ce9b-ddd5-42bb-8e07-08a22c8871a5\") " pod="openstack/ceilometer-0"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.587639 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e58ce9b-ddd5-42bb-8e07-08a22c8871a5-run-httpd\") pod \"ceilometer-0\" (UID: \"9e58ce9b-ddd5-42bb-8e07-08a22c8871a5\") " pod="openstack/ceilometer-0"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.587669 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx4bs\" (UniqueName: \"kubernetes.io/projected/9e58ce9b-ddd5-42bb-8e07-08a22c8871a5-kube-api-access-tx4bs\") pod \"ceilometer-0\" (UID: \"9e58ce9b-ddd5-42bb-8e07-08a22c8871a5\") " pod="openstack/ceilometer-0"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.587750 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e58ce9b-ddd5-42bb-8e07-08a22c8871a5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9e58ce9b-ddd5-42bb-8e07-08a22c8871a5\") " pod="openstack/ceilometer-0"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.587784 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e58ce9b-ddd5-42bb-8e07-08a22c8871a5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9e58ce9b-ddd5-42bb-8e07-08a22c8871a5\") " pod="openstack/ceilometer-0"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.587808 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e58ce9b-ddd5-42bb-8e07-08a22c8871a5-scripts\") pod \"ceilometer-0\" (UID: \"9e58ce9b-ddd5-42bb-8e07-08a22c8871a5\") " pod="openstack/ceilometer-0"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.588130 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e58ce9b-ddd5-42bb-8e07-08a22c8871a5-log-httpd\") pod \"ceilometer-0\" (UID: \"9e58ce9b-ddd5-42bb-8e07-08a22c8871a5\") " pod="openstack/ceilometer-0"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.588240 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e58ce9b-ddd5-42bb-8e07-08a22c8871a5-run-httpd\") pod \"ceilometer-0\" (UID: \"9e58ce9b-ddd5-42bb-8e07-08a22c8871a5\") " pod="openstack/ceilometer-0"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.594567 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e58ce9b-ddd5-42bb-8e07-08a22c8871a5-config-data\") pod \"ceilometer-0\" (UID: \"9e58ce9b-ddd5-42bb-8e07-08a22c8871a5\") " pod="openstack/ceilometer-0"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.595073 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e58ce9b-ddd5-42bb-8e07-08a22c8871a5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9e58ce9b-ddd5-42bb-8e07-08a22c8871a5\") " pod="openstack/ceilometer-0"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.595478 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e58ce9b-ddd5-42bb-8e07-08a22c8871a5-scripts\") pod \"ceilometer-0\" (UID: \"9e58ce9b-ddd5-42bb-8e07-08a22c8871a5\") " pod="openstack/ceilometer-0"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.595742 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e58ce9b-ddd5-42bb-8e07-08a22c8871a5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9e58ce9b-ddd5-42bb-8e07-08a22c8871a5\") " pod="openstack/ceilometer-0"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.619280 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx4bs\" (UniqueName: \"kubernetes.io/projected/9e58ce9b-ddd5-42bb-8e07-08a22c8871a5-kube-api-access-tx4bs\") pod \"ceilometer-0\" (UID: \"9e58ce9b-ddd5-42bb-8e07-08a22c8871a5\") " pod="openstack/ceilometer-0"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.625012 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e58ce9b-ddd5-42bb-8e07-08a22c8871a5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9e58ce9b-ddd5-42bb-8e07-08a22c8871a5\") " pod="openstack/ceilometer-0"
Feb 17 16:27:26 crc kubenswrapper[4672]: I0217 16:27:26.698870 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 16:27:27 crc kubenswrapper[4672]: I0217 16:27:27.024336 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="5467b054-ae2f-4852-8d68-f9ba7cd2bdab" containerName="rabbitmq" containerID="cri-o://0ba50ac14fde088faf589887170dbfe9e183afe39627f702197f9e8cc3fe394d" gracePeriod=604796
Feb 17 16:27:27 crc kubenswrapper[4672]: I0217 16:27:27.216407 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:27:27 crc kubenswrapper[4672]: E0217 16:27:27.324020 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 17 16:27:27 crc kubenswrapper[4672]: E0217 16:27:27.324067 4672 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 17 16:27:27 crc kubenswrapper[4672]: E0217 16:27:27.324167 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66h7h644h64ch5f8h565hfch5dh56chfdh8hfdh5b5h567h6dh665h557h74h665hcbh96h659h554h589h57fh5d9h55h564hcfh5dhffhfdq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tx4bs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(9e58ce9b-ddd5-42bb-8e07-08a22c8871a5): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 17 16:27:27 crc kubenswrapper[4672]: I0217 16:27:27.958605 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89a3e9fb-fe7e-40db-9a8d-d0654e17d835" path="/var/lib/kubelet/pods/89a3e9fb-fe7e-40db-9a8d-d0654e17d835/volumes"
Feb 17 16:27:28 crc kubenswrapper[4672]: I0217 16:27:28.037254 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e58ce9b-ddd5-42bb-8e07-08a22c8871a5","Type":"ContainerStarted","Data":"a5bbda24a0c1fa629dd22db4c82ae1544d3808b688c553f2099a7b6642624128"}
Feb 17 16:27:29 crc kubenswrapper[4672]: I0217 16:27:29.049269 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e58ce9b-ddd5-42bb-8e07-08a22c8871a5","Type":"ContainerStarted","Data":"b59c80696215bb7350bca6455809cc44fe6866c619a8d2b44b253b1d26c16e4e"}
Feb 17 16:27:29 crc kubenswrapper[4672]: I0217 16:27:29.049985 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e58ce9b-ddd5-42bb-8e07-08a22c8871a5","Type":"ContainerStarted","Data":"3b8af5821d06d30f1f12e94aee334c12fcd95e890873b939fe1e91adf3028bed"}
Feb 17 16:27:30 crc kubenswrapper[4672]: E0217 16:27:30.348363 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired.
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:27:31 crc kubenswrapper[4672]: I0217 16:27:31.073025 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e58ce9b-ddd5-42bb-8e07-08a22c8871a5","Type":"ContainerStarted","Data":"2a54d3a4208cffc3ab3d120a410fb9f83d5bbcb3db3843c1ea2d3bb2949c6d25"} Feb 17 16:27:31 crc kubenswrapper[4672]: I0217 16:27:31.074454 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 16:27:31 crc kubenswrapper[4672]: E0217 16:27:31.076779 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:27:32 crc kubenswrapper[4672]: I0217 16:27:32.088395 4672 generic.go:334] "Generic (PLEG): container finished" podID="3068e639-1b58-4971-bf3e-c321ff88289b" containerID="a1e6f4fc864ae2ff390bb89ecd4ecc97ef1e8e578421ecf2fb2f1557f6a73ff6" exitCode=0 Feb 17 16:27:32 crc kubenswrapper[4672]: I0217 16:27:32.088467 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3068e639-1b58-4971-bf3e-c321ff88289b","Type":"ContainerDied","Data":"a1e6f4fc864ae2ff390bb89ecd4ecc97ef1e8e578421ecf2fb2f1557f6a73ff6"} Feb 17 16:27:32 crc kubenswrapper[4672]: E0217 16:27:32.090749 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:27:32 crc kubenswrapper[4672]: I0217 
16:27:32.272682 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 16:27:32 crc kubenswrapper[4672]: I0217 16:27:32.372067 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3068e639-1b58-4971-bf3e-c321ff88289b-erlang-cookie-secret\") pod \"3068e639-1b58-4971-bf3e-c321ff88289b\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " Feb 17 16:27:32 crc kubenswrapper[4672]: I0217 16:27:32.372123 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3068e639-1b58-4971-bf3e-c321ff88289b-plugins-conf\") pod \"3068e639-1b58-4971-bf3e-c321ff88289b\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " Feb 17 16:27:32 crc kubenswrapper[4672]: I0217 16:27:32.372282 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3068e639-1b58-4971-bf3e-c321ff88289b-rabbitmq-tls\") pod \"3068e639-1b58-4971-bf3e-c321ff88289b\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " Feb 17 16:27:32 crc kubenswrapper[4672]: I0217 16:27:32.372333 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkx6l\" (UniqueName: \"kubernetes.io/projected/3068e639-1b58-4971-bf3e-c321ff88289b-kube-api-access-mkx6l\") pod \"3068e639-1b58-4971-bf3e-c321ff88289b\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " Feb 17 16:27:32 crc kubenswrapper[4672]: I0217 16:27:32.372374 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3068e639-1b58-4971-bf3e-c321ff88289b-rabbitmq-plugins\") pod \"3068e639-1b58-4971-bf3e-c321ff88289b\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " Feb 17 16:27:32 crc kubenswrapper[4672]: I0217 16:27:32.373862 4672 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ba6f626-16b3-4af7-837d-88c617ee5155\") pod \"3068e639-1b58-4971-bf3e-c321ff88289b\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " Feb 17 16:27:32 crc kubenswrapper[4672]: I0217 16:27:32.373953 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3068e639-1b58-4971-bf3e-c321ff88289b-server-conf\") pod \"3068e639-1b58-4971-bf3e-c321ff88289b\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " Feb 17 16:27:32 crc kubenswrapper[4672]: I0217 16:27:32.374105 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3068e639-1b58-4971-bf3e-c321ff88289b-rabbitmq-erlang-cookie\") pod \"3068e639-1b58-4971-bf3e-c321ff88289b\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " Feb 17 16:27:32 crc kubenswrapper[4672]: I0217 16:27:32.374160 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3068e639-1b58-4971-bf3e-c321ff88289b-config-data\") pod \"3068e639-1b58-4971-bf3e-c321ff88289b\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " Feb 17 16:27:32 crc kubenswrapper[4672]: I0217 16:27:32.374232 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3068e639-1b58-4971-bf3e-c321ff88289b-pod-info\") pod \"3068e639-1b58-4971-bf3e-c321ff88289b\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " Feb 17 16:27:32 crc kubenswrapper[4672]: I0217 16:27:32.374274 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3068e639-1b58-4971-bf3e-c321ff88289b-rabbitmq-confd\") pod 
\"3068e639-1b58-4971-bf3e-c321ff88289b\" (UID: \"3068e639-1b58-4971-bf3e-c321ff88289b\") " Feb 17 16:27:32 crc kubenswrapper[4672]: I0217 16:27:32.376707 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3068e639-1b58-4971-bf3e-c321ff88289b-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "3068e639-1b58-4971-bf3e-c321ff88289b" (UID: "3068e639-1b58-4971-bf3e-c321ff88289b"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:27:32 crc kubenswrapper[4672]: I0217 16:27:32.378666 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3068e639-1b58-4971-bf3e-c321ff88289b-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "3068e639-1b58-4971-bf3e-c321ff88289b" (UID: "3068e639-1b58-4971-bf3e-c321ff88289b"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:27:32 crc kubenswrapper[4672]: I0217 16:27:32.379166 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3068e639-1b58-4971-bf3e-c321ff88289b-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "3068e639-1b58-4971-bf3e-c321ff88289b" (UID: "3068e639-1b58-4971-bf3e-c321ff88289b"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:27:32 crc kubenswrapper[4672]: I0217 16:27:32.387083 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3068e639-1b58-4971-bf3e-c321ff88289b-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "3068e639-1b58-4971-bf3e-c321ff88289b" (UID: "3068e639-1b58-4971-bf3e-c321ff88289b"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:27:32 crc kubenswrapper[4672]: I0217 16:27:32.398273 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/3068e639-1b58-4971-bf3e-c321ff88289b-pod-info" (OuterVolumeSpecName: "pod-info") pod "3068e639-1b58-4971-bf3e-c321ff88289b" (UID: "3068e639-1b58-4971-bf3e-c321ff88289b"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 17 16:27:32 crc kubenswrapper[4672]: I0217 16:27:32.427167 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3068e639-1b58-4971-bf3e-c321ff88289b-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "3068e639-1b58-4971-bf3e-c321ff88289b" (UID: "3068e639-1b58-4971-bf3e-c321ff88289b"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:27:32 crc kubenswrapper[4672]: I0217 16:27:32.427829 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3068e639-1b58-4971-bf3e-c321ff88289b-kube-api-access-mkx6l" (OuterVolumeSpecName: "kube-api-access-mkx6l") pod "3068e639-1b58-4971-bf3e-c321ff88289b" (UID: "3068e639-1b58-4971-bf3e-c321ff88289b"). InnerVolumeSpecName "kube-api-access-mkx6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:27:32 crc kubenswrapper[4672]: I0217 16:27:32.451125 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3068e639-1b58-4971-bf3e-c321ff88289b-config-data" (OuterVolumeSpecName: "config-data") pod "3068e639-1b58-4971-bf3e-c321ff88289b" (UID: "3068e639-1b58-4971-bf3e-c321ff88289b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:27:32 crc kubenswrapper[4672]: I0217 16:27:32.474284 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ba6f626-16b3-4af7-837d-88c617ee5155" (OuterVolumeSpecName: "persistence") pod "3068e639-1b58-4971-bf3e-c321ff88289b" (UID: "3068e639-1b58-4971-bf3e-c321ff88289b"). InnerVolumeSpecName "pvc-2ba6f626-16b3-4af7-837d-88c617ee5155". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:27:32 crc kubenswrapper[4672]: I0217 16:27:32.476573 4672 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3068e639-1b58-4971-bf3e-c321ff88289b-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:32 crc kubenswrapper[4672]: I0217 16:27:32.476604 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3068e639-1b58-4971-bf3e-c321ff88289b-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:32 crc kubenswrapper[4672]: I0217 16:27:32.476616 4672 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3068e639-1b58-4971-bf3e-c321ff88289b-pod-info\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:32 crc kubenswrapper[4672]: I0217 16:27:32.476625 4672 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3068e639-1b58-4971-bf3e-c321ff88289b-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:32 crc kubenswrapper[4672]: I0217 16:27:32.476633 4672 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3068e639-1b58-4971-bf3e-c321ff88289b-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:32 crc kubenswrapper[4672]: I0217 16:27:32.476642 4672 reconciler_common.go:293] "Volume detached for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3068e639-1b58-4971-bf3e-c321ff88289b-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:32 crc kubenswrapper[4672]: I0217 16:27:32.476650 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkx6l\" (UniqueName: \"kubernetes.io/projected/3068e639-1b58-4971-bf3e-c321ff88289b-kube-api-access-mkx6l\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:32 crc kubenswrapper[4672]: I0217 16:27:32.476658 4672 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3068e639-1b58-4971-bf3e-c321ff88289b-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:32 crc kubenswrapper[4672]: I0217 16:27:32.476683 4672 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2ba6f626-16b3-4af7-837d-88c617ee5155\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ba6f626-16b3-4af7-837d-88c617ee5155\") on node \"crc\" " Feb 17 16:27:32 crc kubenswrapper[4672]: I0217 16:27:32.498931 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3068e639-1b58-4971-bf3e-c321ff88289b-server-conf" (OuterVolumeSpecName: "server-conf") pod "3068e639-1b58-4971-bf3e-c321ff88289b" (UID: "3068e639-1b58-4971-bf3e-c321ff88289b"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:27:32 crc kubenswrapper[4672]: I0217 16:27:32.530416 4672 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 17 16:27:32 crc kubenswrapper[4672]: I0217 16:27:32.530593 4672 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2ba6f626-16b3-4af7-837d-88c617ee5155" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ba6f626-16b3-4af7-837d-88c617ee5155") on node "crc" Feb 17 16:27:32 crc kubenswrapper[4672]: I0217 16:27:32.542694 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3068e639-1b58-4971-bf3e-c321ff88289b-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "3068e639-1b58-4971-bf3e-c321ff88289b" (UID: "3068e639-1b58-4971-bf3e-c321ff88289b"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:27:32 crc kubenswrapper[4672]: I0217 16:27:32.578188 4672 reconciler_common.go:293] "Volume detached for volume \"pvc-2ba6f626-16b3-4af7-837d-88c617ee5155\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ba6f626-16b3-4af7-837d-88c617ee5155\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:32 crc kubenswrapper[4672]: I0217 16:27:32.578221 4672 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3068e639-1b58-4971-bf3e-c321ff88289b-server-conf\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:32 crc kubenswrapper[4672]: I0217 16:27:32.578233 4672 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3068e639-1b58-4971-bf3e-c321ff88289b-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.103795 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3068e639-1b58-4971-bf3e-c321ff88289b","Type":"ContainerDied","Data":"4029f29e2e8251dbfdbbade279a804f76b0c90787f3793456d5bd7f15117fa4b"} Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.104235 4672 scope.go:117] "RemoveContainer" 
containerID="a1e6f4fc864ae2ff390bb89ecd4ecc97ef1e8e578421ecf2fb2f1557f6a73ff6" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.104448 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.130292 4672 scope.go:117] "RemoveContainer" containerID="c6fb63d9f2a376c50007c407a43b299fc08c9519b4a5c7f6c3e24d766cae0726" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.321389 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.334269 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.348183 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:27:33 crc kubenswrapper[4672]: E0217 16:27:33.348689 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3068e639-1b58-4971-bf3e-c321ff88289b" containerName="setup-container" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.348705 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="3068e639-1b58-4971-bf3e-c321ff88289b" containerName="setup-container" Feb 17 16:27:33 crc kubenswrapper[4672]: E0217 16:27:33.348719 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3068e639-1b58-4971-bf3e-c321ff88289b" containerName="rabbitmq" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.348725 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="3068e639-1b58-4971-bf3e-c321ff88289b" containerName="rabbitmq" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.348932 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="3068e639-1b58-4971-bf3e-c321ff88289b" containerName="rabbitmq" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.350035 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.355003 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.355148 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.355250 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.355395 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-kgd9v" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.355521 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.355632 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.356258 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.391481 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.394214 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2da88232-8248-48fa-98e2-3220a17cc432-config-data\") pod \"rabbitmq-server-0\" (UID: \"2da88232-8248-48fa-98e2-3220a17cc432\") " pod="openstack/rabbitmq-server-0" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.394249 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/2da88232-8248-48fa-98e2-3220a17cc432-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2da88232-8248-48fa-98e2-3220a17cc432\") " pod="openstack/rabbitmq-server-0" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.394269 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2da88232-8248-48fa-98e2-3220a17cc432-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2da88232-8248-48fa-98e2-3220a17cc432\") " pod="openstack/rabbitmq-server-0" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.394297 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2da88232-8248-48fa-98e2-3220a17cc432-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2da88232-8248-48fa-98e2-3220a17cc432\") " pod="openstack/rabbitmq-server-0" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.394318 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2da88232-8248-48fa-98e2-3220a17cc432-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2da88232-8248-48fa-98e2-3220a17cc432\") " pod="openstack/rabbitmq-server-0" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.394340 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2da88232-8248-48fa-98e2-3220a17cc432-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2da88232-8248-48fa-98e2-3220a17cc432\") " pod="openstack/rabbitmq-server-0" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.394382 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/2da88232-8248-48fa-98e2-3220a17cc432-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2da88232-8248-48fa-98e2-3220a17cc432\") " pod="openstack/rabbitmq-server-0" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.394399 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2da88232-8248-48fa-98e2-3220a17cc432-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2da88232-8248-48fa-98e2-3220a17cc432\") " pod="openstack/rabbitmq-server-0" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.394415 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2da88232-8248-48fa-98e2-3220a17cc432-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2da88232-8248-48fa-98e2-3220a17cc432\") " pod="openstack/rabbitmq-server-0" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.394464 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tbc9\" (UniqueName: \"kubernetes.io/projected/2da88232-8248-48fa-98e2-3220a17cc432-kube-api-access-6tbc9\") pod \"rabbitmq-server-0\" (UID: \"2da88232-8248-48fa-98e2-3220a17cc432\") " pod="openstack/rabbitmq-server-0" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.394490 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2ba6f626-16b3-4af7-837d-88c617ee5155\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ba6f626-16b3-4af7-837d-88c617ee5155\") pod \"rabbitmq-server-0\" (UID: \"2da88232-8248-48fa-98e2-3220a17cc432\") " pod="openstack/rabbitmq-server-0" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.497087 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/2da88232-8248-48fa-98e2-3220a17cc432-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2da88232-8248-48fa-98e2-3220a17cc432\") " pod="openstack/rabbitmq-server-0" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.497327 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2da88232-8248-48fa-98e2-3220a17cc432-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2da88232-8248-48fa-98e2-3220a17cc432\") " pod="openstack/rabbitmq-server-0" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.497395 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tbc9\" (UniqueName: \"kubernetes.io/projected/2da88232-8248-48fa-98e2-3220a17cc432-kube-api-access-6tbc9\") pod \"rabbitmq-server-0\" (UID: \"2da88232-8248-48fa-98e2-3220a17cc432\") " pod="openstack/rabbitmq-server-0" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.497427 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2ba6f626-16b3-4af7-837d-88c617ee5155\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ba6f626-16b3-4af7-837d-88c617ee5155\") pod \"rabbitmq-server-0\" (UID: \"2da88232-8248-48fa-98e2-3220a17cc432\") " pod="openstack/rabbitmq-server-0" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.497500 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2da88232-8248-48fa-98e2-3220a17cc432-config-data\") pod \"rabbitmq-server-0\" (UID: \"2da88232-8248-48fa-98e2-3220a17cc432\") " pod="openstack/rabbitmq-server-0" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.497610 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2da88232-8248-48fa-98e2-3220a17cc432-rabbitmq-plugins\") pod \"rabbitmq-server-0\" 
(UID: \"2da88232-8248-48fa-98e2-3220a17cc432\") " pod="openstack/rabbitmq-server-0" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.497635 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2da88232-8248-48fa-98e2-3220a17cc432-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2da88232-8248-48fa-98e2-3220a17cc432\") " pod="openstack/rabbitmq-server-0" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.497666 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2da88232-8248-48fa-98e2-3220a17cc432-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2da88232-8248-48fa-98e2-3220a17cc432\") " pod="openstack/rabbitmq-server-0" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.497691 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2da88232-8248-48fa-98e2-3220a17cc432-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2da88232-8248-48fa-98e2-3220a17cc432\") " pod="openstack/rabbitmq-server-0" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.497716 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2da88232-8248-48fa-98e2-3220a17cc432-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2da88232-8248-48fa-98e2-3220a17cc432\") " pod="openstack/rabbitmq-server-0" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.497756 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2da88232-8248-48fa-98e2-3220a17cc432-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2da88232-8248-48fa-98e2-3220a17cc432\") " pod="openstack/rabbitmq-server-0" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.498501 4672 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2da88232-8248-48fa-98e2-3220a17cc432-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2da88232-8248-48fa-98e2-3220a17cc432\") " pod="openstack/rabbitmq-server-0" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.499046 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2da88232-8248-48fa-98e2-3220a17cc432-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2da88232-8248-48fa-98e2-3220a17cc432\") " pod="openstack/rabbitmq-server-0" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.499578 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2da88232-8248-48fa-98e2-3220a17cc432-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2da88232-8248-48fa-98e2-3220a17cc432\") " pod="openstack/rabbitmq-server-0" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.499763 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2da88232-8248-48fa-98e2-3220a17cc432-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2da88232-8248-48fa-98e2-3220a17cc432\") " pod="openstack/rabbitmq-server-0" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.499922 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2da88232-8248-48fa-98e2-3220a17cc432-config-data\") pod \"rabbitmq-server-0\" (UID: \"2da88232-8248-48fa-98e2-3220a17cc432\") " pod="openstack/rabbitmq-server-0" Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.502325 4672 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.502390 4672 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2ba6f626-16b3-4af7-837d-88c617ee5155\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ba6f626-16b3-4af7-837d-88c617ee5155\") pod \"rabbitmq-server-0\" (UID: \"2da88232-8248-48fa-98e2-3220a17cc432\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/346e7e88f6122aec89ca532feb4a65d3c17e46d11b652f2eb4c6a257471d0b1f/globalmount\"" pod="openstack/rabbitmq-server-0"
Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.503524 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2da88232-8248-48fa-98e2-3220a17cc432-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2da88232-8248-48fa-98e2-3220a17cc432\") " pod="openstack/rabbitmq-server-0"
Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.503639 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2da88232-8248-48fa-98e2-3220a17cc432-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2da88232-8248-48fa-98e2-3220a17cc432\") " pod="openstack/rabbitmq-server-0"
Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.508832 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2da88232-8248-48fa-98e2-3220a17cc432-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2da88232-8248-48fa-98e2-3220a17cc432\") " pod="openstack/rabbitmq-server-0"
Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.520617 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tbc9\" (UniqueName: \"kubernetes.io/projected/2da88232-8248-48fa-98e2-3220a17cc432-kube-api-access-6tbc9\") pod \"rabbitmq-server-0\" (UID: \"2da88232-8248-48fa-98e2-3220a17cc432\") " pod="openstack/rabbitmq-server-0"
Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.551297 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2da88232-8248-48fa-98e2-3220a17cc432-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2da88232-8248-48fa-98e2-3220a17cc432\") " pod="openstack/rabbitmq-server-0"
Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.714700 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.826360 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-server-conf\") pod \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") "
Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.826470 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-rabbitmq-plugins\") pod \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") "
Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.826530 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-rabbitmq-tls\") pod \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") "
Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.826598 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsm65\" (UniqueName: \"kubernetes.io/projected/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-kube-api-access-lsm65\") pod \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") "
Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.826654 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-plugins-conf\") pod \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") "
Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.826674 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-rabbitmq-erlang-cookie\") pod \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") "
Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.828056 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8943599e-c5dc-4d0c-945c-12fd7b89042b\") pod \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") "
Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.828089 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-rabbitmq-confd\") pod \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") "
Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.828129 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-pod-info\") pod \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") "
Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.828174 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-config-data\") pod \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") "
Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.828204 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-erlang-cookie-secret\") pod \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\" (UID: \"5467b054-ae2f-4852-8d68-f9ba7cd2bdab\") "
Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.830946 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "5467b054-ae2f-4852-8d68-f9ba7cd2bdab" (UID: "5467b054-ae2f-4852-8d68-f9ba7cd2bdab"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.847409 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "5467b054-ae2f-4852-8d68-f9ba7cd2bdab" (UID: "5467b054-ae2f-4852-8d68-f9ba7cd2bdab"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.848278 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "5467b054-ae2f-4852-8d68-f9ba7cd2bdab" (UID: "5467b054-ae2f-4852-8d68-f9ba7cd2bdab"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.850398 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "5467b054-ae2f-4852-8d68-f9ba7cd2bdab" (UID: "5467b054-ae2f-4852-8d68-f9ba7cd2bdab"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.855198 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "5467b054-ae2f-4852-8d68-f9ba7cd2bdab" (UID: "5467b054-ae2f-4852-8d68-f9ba7cd2bdab"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.855780 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-pod-info" (OuterVolumeSpecName: "pod-info") pod "5467b054-ae2f-4852-8d68-f9ba7cd2bdab" (UID: "5467b054-ae2f-4852-8d68-f9ba7cd2bdab"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.863299 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-kube-api-access-lsm65" (OuterVolumeSpecName: "kube-api-access-lsm65") pod "5467b054-ae2f-4852-8d68-f9ba7cd2bdab" (UID: "5467b054-ae2f-4852-8d68-f9ba7cd2bdab"). InnerVolumeSpecName "kube-api-access-lsm65". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.864135 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2ba6f626-16b3-4af7-837d-88c617ee5155\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ba6f626-16b3-4af7-837d-88c617ee5155\") pod \"rabbitmq-server-0\" (UID: \"2da88232-8248-48fa-98e2-3220a17cc432\") " pod="openstack/rabbitmq-server-0"
Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.908450 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8943599e-c5dc-4d0c-945c-12fd7b89042b" (OuterVolumeSpecName: "persistence") pod "5467b054-ae2f-4852-8d68-f9ba7cd2bdab" (UID: "5467b054-ae2f-4852-8d68-f9ba7cd2bdab"). InnerVolumeSpecName "pvc-8943599e-c5dc-4d0c-945c-12fd7b89042b". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.930210 4672 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.930243 4672 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.930252 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsm65\" (UniqueName: \"kubernetes.io/projected/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-kube-api-access-lsm65\") on node \"crc\" DevicePath \"\""
Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.930262 4672 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-plugins-conf\") on node \"crc\" DevicePath \"\""
Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.930271 4672 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.930293 4672 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-8943599e-c5dc-4d0c-945c-12fd7b89042b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8943599e-c5dc-4d0c-945c-12fd7b89042b\") on node \"crc\" "
Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.930301 4672 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-pod-info\") on node \"crc\" DevicePath \"\""
Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.930312 4672 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.938896 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-config-data" (OuterVolumeSpecName: "config-data") pod "5467b054-ae2f-4852-8d68-f9ba7cd2bdab" (UID: "5467b054-ae2f-4852-8d68-f9ba7cd2bdab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.948725 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-server-conf" (OuterVolumeSpecName: "server-conf") pod "5467b054-ae2f-4852-8d68-f9ba7cd2bdab" (UID: "5467b054-ae2f-4852-8d68-f9ba7cd2bdab"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.960685 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3068e639-1b58-4971-bf3e-c321ff88289b" path="/var/lib/kubelet/pods/3068e639-1b58-4971-bf3e-c321ff88289b/volumes"
Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.980991 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 17 16:27:33 crc kubenswrapper[4672]: I0217 16:27:33.998549 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "5467b054-ae2f-4852-8d68-f9ba7cd2bdab" (UID: "5467b054-ae2f-4852-8d68-f9ba7cd2bdab"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.003499 4672 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.003644 4672 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-8943599e-c5dc-4d0c-945c-12fd7b89042b" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8943599e-c5dc-4d0c-945c-12fd7b89042b") on node "crc"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.032257 4672 reconciler_common.go:293] "Volume detached for volume \"pvc-8943599e-c5dc-4d0c-945c-12fd7b89042b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8943599e-c5dc-4d0c-945c-12fd7b89042b\") on node \"crc\" DevicePath \"\""
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.032290 4672 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.032301 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.032314 4672 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5467b054-ae2f-4852-8d68-f9ba7cd2bdab-server-conf\") on node \"crc\" DevicePath \"\""
Feb 17 16:27:34 crc kubenswrapper[4672]: E0217 16:27:34.070146 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested"
Feb 17 16:27:34 crc kubenswrapper[4672]: E0217 16:27:34.070187 4672 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested"
Feb 17 16:27:34 crc kubenswrapper[4672]: E0217 16:27:34.070301 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nq9ps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-qrhj8_openstack(dc5471f5-2491-4841-be45-09c8f14b35c0): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 17 16:27:34 crc kubenswrapper[4672]: E0217 16:27:34.072303 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.141352 4672 generic.go:334] "Generic (PLEG): container finished" podID="5467b054-ae2f-4852-8d68-f9ba7cd2bdab" containerID="0ba50ac14fde088faf589887170dbfe9e183afe39627f702197f9e8cc3fe394d" exitCode=0
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.141487 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.142056 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5467b054-ae2f-4852-8d68-f9ba7cd2bdab","Type":"ContainerDied","Data":"0ba50ac14fde088faf589887170dbfe9e183afe39627f702197f9e8cc3fe394d"}
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.142100 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5467b054-ae2f-4852-8d68-f9ba7cd2bdab","Type":"ContainerDied","Data":"c68c63ef4b5f69c59ac8418dfc5f19e709593b1166aef17d859be15327370e9c"}
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.142116 4672 scope.go:117] "RemoveContainer" containerID="0ba50ac14fde088faf589887170dbfe9e183afe39627f702197f9e8cc3fe394d"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.187756 4672 scope.go:117] "RemoveContainer" containerID="87ffd86f4e0157f11b7f529a9b0619f55b4daab09253878dc6cec5c5e545a0d2"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.197826 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.217778 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.238716 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 17 16:27:34 crc kubenswrapper[4672]: E0217 16:27:34.239946 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5467b054-ae2f-4852-8d68-f9ba7cd2bdab" containerName="rabbitmq"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.239960 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="5467b054-ae2f-4852-8d68-f9ba7cd2bdab" containerName="rabbitmq"
Feb 17 16:27:34 crc kubenswrapper[4672]: E0217 16:27:34.239977 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5467b054-ae2f-4852-8d68-f9ba7cd2bdab" containerName="setup-container"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.239983 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="5467b054-ae2f-4852-8d68-f9ba7cd2bdab" containerName="setup-container"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.255907 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="5467b054-ae2f-4852-8d68-f9ba7cd2bdab" containerName="rabbitmq"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.257990 4672 scope.go:117] "RemoveContainer" containerID="0ba50ac14fde088faf589887170dbfe9e183afe39627f702197f9e8cc3fe394d"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.281142 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.286054 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.286079 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.286335 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.286545 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.286681 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-bp98c"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.286808 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.288066 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.296628 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 17 16:27:34 crc kubenswrapper[4672]: E0217 16:27:34.297296 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ba50ac14fde088faf589887170dbfe9e183afe39627f702197f9e8cc3fe394d\": container with ID starting with 0ba50ac14fde088faf589887170dbfe9e183afe39627f702197f9e8cc3fe394d not found: ID does not exist" containerID="0ba50ac14fde088faf589887170dbfe9e183afe39627f702197f9e8cc3fe394d"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.297403 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ba50ac14fde088faf589887170dbfe9e183afe39627f702197f9e8cc3fe394d"} err="failed to get container status \"0ba50ac14fde088faf589887170dbfe9e183afe39627f702197f9e8cc3fe394d\": rpc error: code = NotFound desc = could not find container \"0ba50ac14fde088faf589887170dbfe9e183afe39627f702197f9e8cc3fe394d\": container with ID starting with 0ba50ac14fde088faf589887170dbfe9e183afe39627f702197f9e8cc3fe394d not found: ID does not exist"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.297433 4672 scope.go:117] "RemoveContainer" containerID="87ffd86f4e0157f11b7f529a9b0619f55b4daab09253878dc6cec5c5e545a0d2"
Feb 17 16:27:34 crc kubenswrapper[4672]: E0217 16:27:34.298385 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87ffd86f4e0157f11b7f529a9b0619f55b4daab09253878dc6cec5c5e545a0d2\": container with ID starting with 87ffd86f4e0157f11b7f529a9b0619f55b4daab09253878dc6cec5c5e545a0d2 not found: ID does not exist" containerID="87ffd86f4e0157f11b7f529a9b0619f55b4daab09253878dc6cec5c5e545a0d2"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.298409 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87ffd86f4e0157f11b7f529a9b0619f55b4daab09253878dc6cec5c5e545a0d2"} err="failed to get container status \"87ffd86f4e0157f11b7f529a9b0619f55b4daab09253878dc6cec5c5e545a0d2\": rpc error: code = NotFound desc = could not find container \"87ffd86f4e0157f11b7f529a9b0619f55b4daab09253878dc6cec5c5e545a0d2\": container with ID starting with 87ffd86f4e0157f11b7f529a9b0619f55b4daab09253878dc6cec5c5e545a0d2 not found: ID does not exist"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.378110 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9a73e2db-d320-4e3c-9412-02555a0a17eb-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a73e2db-d320-4e3c-9412-02555a0a17eb\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.378201 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9a73e2db-d320-4e3c-9412-02555a0a17eb-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a73e2db-d320-4e3c-9412-02555a0a17eb\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.378309 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp7rg\" (UniqueName: \"kubernetes.io/projected/9a73e2db-d320-4e3c-9412-02555a0a17eb-kube-api-access-gp7rg\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a73e2db-d320-4e3c-9412-02555a0a17eb\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.378329 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9a73e2db-d320-4e3c-9412-02555a0a17eb-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a73e2db-d320-4e3c-9412-02555a0a17eb\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.378385 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9a73e2db-d320-4e3c-9412-02555a0a17eb-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a73e2db-d320-4e3c-9412-02555a0a17eb\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.378404 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9a73e2db-d320-4e3c-9412-02555a0a17eb-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a73e2db-d320-4e3c-9412-02555a0a17eb\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.378453 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9a73e2db-d320-4e3c-9412-02555a0a17eb-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a73e2db-d320-4e3c-9412-02555a0a17eb\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.378590 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9a73e2db-d320-4e3c-9412-02555a0a17eb-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a73e2db-d320-4e3c-9412-02555a0a17eb\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.378627 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8943599e-c5dc-4d0c-945c-12fd7b89042b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8943599e-c5dc-4d0c-945c-12fd7b89042b\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a73e2db-d320-4e3c-9412-02555a0a17eb\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.378673 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9a73e2db-d320-4e3c-9412-02555a0a17eb-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a73e2db-d320-4e3c-9412-02555a0a17eb\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.379729 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9a73e2db-d320-4e3c-9412-02555a0a17eb-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a73e2db-d320-4e3c-9412-02555a0a17eb\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.481346 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9a73e2db-d320-4e3c-9412-02555a0a17eb-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a73e2db-d320-4e3c-9412-02555a0a17eb\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.481449 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9a73e2db-d320-4e3c-9412-02555a0a17eb-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a73e2db-d320-4e3c-9412-02555a0a17eb\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.481482 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9a73e2db-d320-4e3c-9412-02555a0a17eb-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a73e2db-d320-4e3c-9412-02555a0a17eb\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.481582 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp7rg\" (UniqueName: \"kubernetes.io/projected/9a73e2db-d320-4e3c-9412-02555a0a17eb-kube-api-access-gp7rg\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a73e2db-d320-4e3c-9412-02555a0a17eb\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.481608 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9a73e2db-d320-4e3c-9412-02555a0a17eb-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a73e2db-d320-4e3c-9412-02555a0a17eb\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.481647 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9a73e2db-d320-4e3c-9412-02555a0a17eb-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a73e2db-d320-4e3c-9412-02555a0a17eb\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.481676 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9a73e2db-d320-4e3c-9412-02555a0a17eb-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a73e2db-d320-4e3c-9412-02555a0a17eb\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.481709 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9a73e2db-d320-4e3c-9412-02555a0a17eb-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a73e2db-d320-4e3c-9412-02555a0a17eb\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.481737 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9a73e2db-d320-4e3c-9412-02555a0a17eb-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a73e2db-d320-4e3c-9412-02555a0a17eb\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.481790 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8943599e-c5dc-4d0c-945c-12fd7b89042b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8943599e-c5dc-4d0c-945c-12fd7b89042b\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a73e2db-d320-4e3c-9412-02555a0a17eb\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.481817 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9a73e2db-d320-4e3c-9412-02555a0a17eb-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a73e2db-d320-4e3c-9412-02555a0a17eb\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.482071 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9a73e2db-d320-4e3c-9412-02555a0a17eb-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a73e2db-d320-4e3c-9412-02555a0a17eb\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.482645 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9a73e2db-d320-4e3c-9412-02555a0a17eb-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a73e2db-d320-4e3c-9412-02555a0a17eb\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.483023 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9a73e2db-d320-4e3c-9412-02555a0a17eb-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a73e2db-d320-4e3c-9412-02555a0a17eb\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.483145 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9a73e2db-d320-4e3c-9412-02555a0a17eb-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a73e2db-d320-4e3c-9412-02555a0a17eb\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.483195 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9a73e2db-d320-4e3c-9412-02555a0a17eb-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a73e2db-d320-4e3c-9412-02555a0a17eb\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.489244 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9a73e2db-d320-4e3c-9412-02555a0a17eb-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a73e2db-d320-4e3c-9412-02555a0a17eb\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.489263 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9a73e2db-d320-4e3c-9412-02555a0a17eb-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a73e2db-d320-4e3c-9412-02555a0a17eb\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.489700 4672
csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.489726 4672 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8943599e-c5dc-4d0c-945c-12fd7b89042b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8943599e-c5dc-4d0c-945c-12fd7b89042b\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a73e2db-d320-4e3c-9412-02555a0a17eb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/70972079a6ad7ad29c2dd1359cd5a4462575bbc49aff6da85b5dda1e965af91e/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.490368 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9a73e2db-d320-4e3c-9412-02555a0a17eb-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a73e2db-d320-4e3c-9412-02555a0a17eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.491984 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9a73e2db-d320-4e3c-9412-02555a0a17eb-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a73e2db-d320-4e3c-9412-02555a0a17eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.499743 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp7rg\" (UniqueName: \"kubernetes.io/projected/9a73e2db-d320-4e3c-9412-02555a0a17eb-kube-api-access-gp7rg\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a73e2db-d320-4e3c-9412-02555a0a17eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.513902 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] 
Feb 17 16:27:34 crc kubenswrapper[4672]: W0217 16:27:34.517251 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2da88232_8248_48fa_98e2_3220a17cc432.slice/crio-f9bb79c8d13c9f7a76802ff3e7d2110da667e7e2d124651fd4cf205f60f52183 WatchSource:0}: Error finding container f9bb79c8d13c9f7a76802ff3e7d2110da667e7e2d124651fd4cf205f60f52183: Status 404 returned error can't find the container with id f9bb79c8d13c9f7a76802ff3e7d2110da667e7e2d124651fd4cf205f60f52183 Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.548604 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8943599e-c5dc-4d0c-945c-12fd7b89042b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8943599e-c5dc-4d0c-945c-12fd7b89042b\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a73e2db-d320-4e3c-9412-02555a0a17eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:27:34 crc kubenswrapper[4672]: I0217 16:27:34.625002 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:27:35 crc kubenswrapper[4672]: I0217 16:27:35.079733 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:27:35 crc kubenswrapper[4672]: W0217 16:27:35.084311 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a73e2db_d320_4e3c_9412_02555a0a17eb.slice/crio-ef45c459b1086e1acfae2cc9ab5d5d66d8f1572701fb50fb0a296a6d0e450c92 WatchSource:0}: Error finding container ef45c459b1086e1acfae2cc9ab5d5d66d8f1572701fb50fb0a296a6d0e450c92: Status 404 returned error can't find the container with id ef45c459b1086e1acfae2cc9ab5d5d66d8f1572701fb50fb0a296a6d0e450c92 Feb 17 16:27:35 crc kubenswrapper[4672]: I0217 16:27:35.154972 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2da88232-8248-48fa-98e2-3220a17cc432","Type":"ContainerStarted","Data":"f9bb79c8d13c9f7a76802ff3e7d2110da667e7e2d124651fd4cf205f60f52183"} Feb 17 16:27:35 crc kubenswrapper[4672]: I0217 16:27:35.157160 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9a73e2db-d320-4e3c-9412-02555a0a17eb","Type":"ContainerStarted","Data":"ef45c459b1086e1acfae2cc9ab5d5d66d8f1572701fb50fb0a296a6d0e450c92"} Feb 17 16:27:35 crc kubenswrapper[4672]: I0217 16:27:35.959474 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5467b054-ae2f-4852-8d68-f9ba7cd2bdab" path="/var/lib/kubelet/pods/5467b054-ae2f-4852-8d68-f9ba7cd2bdab/volumes" Feb 17 16:27:36 crc kubenswrapper[4672]: I0217 16:27:36.068823 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74554f47dc-zc49w"] Feb 17 16:27:36 crc kubenswrapper[4672]: I0217 16:27:36.070934 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74554f47dc-zc49w" Feb 17 16:27:36 crc kubenswrapper[4672]: I0217 16:27:36.077070 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 17 16:27:36 crc kubenswrapper[4672]: I0217 16:27:36.085394 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74554f47dc-zc49w"] Feb 17 16:27:36 crc kubenswrapper[4672]: I0217 16:27:36.125762 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12bb0440-d6f0-427e-86f6-3d043e8d36ff-ovsdbserver-nb\") pod \"dnsmasq-dns-74554f47dc-zc49w\" (UID: \"12bb0440-d6f0-427e-86f6-3d043e8d36ff\") " pod="openstack/dnsmasq-dns-74554f47dc-zc49w" Feb 17 16:27:36 crc kubenswrapper[4672]: I0217 16:27:36.125817 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12bb0440-d6f0-427e-86f6-3d043e8d36ff-dns-svc\") pod \"dnsmasq-dns-74554f47dc-zc49w\" (UID: \"12bb0440-d6f0-427e-86f6-3d043e8d36ff\") " pod="openstack/dnsmasq-dns-74554f47dc-zc49w" Feb 17 16:27:36 crc kubenswrapper[4672]: I0217 16:27:36.125841 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12bb0440-d6f0-427e-86f6-3d043e8d36ff-ovsdbserver-sb\") pod \"dnsmasq-dns-74554f47dc-zc49w\" (UID: \"12bb0440-d6f0-427e-86f6-3d043e8d36ff\") " pod="openstack/dnsmasq-dns-74554f47dc-zc49w" Feb 17 16:27:36 crc kubenswrapper[4672]: I0217 16:27:36.125872 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12bb0440-d6f0-427e-86f6-3d043e8d36ff-config\") pod \"dnsmasq-dns-74554f47dc-zc49w\" (UID: \"12bb0440-d6f0-427e-86f6-3d043e8d36ff\") " 
pod="openstack/dnsmasq-dns-74554f47dc-zc49w" Feb 17 16:27:36 crc kubenswrapper[4672]: I0217 16:27:36.125919 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwm9l\" (UniqueName: \"kubernetes.io/projected/12bb0440-d6f0-427e-86f6-3d043e8d36ff-kube-api-access-mwm9l\") pod \"dnsmasq-dns-74554f47dc-zc49w\" (UID: \"12bb0440-d6f0-427e-86f6-3d043e8d36ff\") " pod="openstack/dnsmasq-dns-74554f47dc-zc49w" Feb 17 16:27:36 crc kubenswrapper[4672]: I0217 16:27:36.125990 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/12bb0440-d6f0-427e-86f6-3d043e8d36ff-openstack-edpm-ipam\") pod \"dnsmasq-dns-74554f47dc-zc49w\" (UID: \"12bb0440-d6f0-427e-86f6-3d043e8d36ff\") " pod="openstack/dnsmasq-dns-74554f47dc-zc49w" Feb 17 16:27:36 crc kubenswrapper[4672]: I0217 16:27:36.126008 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/12bb0440-d6f0-427e-86f6-3d043e8d36ff-dns-swift-storage-0\") pod \"dnsmasq-dns-74554f47dc-zc49w\" (UID: \"12bb0440-d6f0-427e-86f6-3d043e8d36ff\") " pod="openstack/dnsmasq-dns-74554f47dc-zc49w" Feb 17 16:27:36 crc kubenswrapper[4672]: I0217 16:27:36.228112 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12bb0440-d6f0-427e-86f6-3d043e8d36ff-ovsdbserver-nb\") pod \"dnsmasq-dns-74554f47dc-zc49w\" (UID: \"12bb0440-d6f0-427e-86f6-3d043e8d36ff\") " pod="openstack/dnsmasq-dns-74554f47dc-zc49w" Feb 17 16:27:36 crc kubenswrapper[4672]: I0217 16:27:36.228175 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12bb0440-d6f0-427e-86f6-3d043e8d36ff-dns-svc\") pod \"dnsmasq-dns-74554f47dc-zc49w\" (UID: 
\"12bb0440-d6f0-427e-86f6-3d043e8d36ff\") " pod="openstack/dnsmasq-dns-74554f47dc-zc49w" Feb 17 16:27:36 crc kubenswrapper[4672]: I0217 16:27:36.228204 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12bb0440-d6f0-427e-86f6-3d043e8d36ff-ovsdbserver-sb\") pod \"dnsmasq-dns-74554f47dc-zc49w\" (UID: \"12bb0440-d6f0-427e-86f6-3d043e8d36ff\") " pod="openstack/dnsmasq-dns-74554f47dc-zc49w" Feb 17 16:27:36 crc kubenswrapper[4672]: I0217 16:27:36.228237 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12bb0440-d6f0-427e-86f6-3d043e8d36ff-config\") pod \"dnsmasq-dns-74554f47dc-zc49w\" (UID: \"12bb0440-d6f0-427e-86f6-3d043e8d36ff\") " pod="openstack/dnsmasq-dns-74554f47dc-zc49w" Feb 17 16:27:36 crc kubenswrapper[4672]: I0217 16:27:36.228288 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwm9l\" (UniqueName: \"kubernetes.io/projected/12bb0440-d6f0-427e-86f6-3d043e8d36ff-kube-api-access-mwm9l\") pod \"dnsmasq-dns-74554f47dc-zc49w\" (UID: \"12bb0440-d6f0-427e-86f6-3d043e8d36ff\") " pod="openstack/dnsmasq-dns-74554f47dc-zc49w" Feb 17 16:27:36 crc kubenswrapper[4672]: I0217 16:27:36.228359 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/12bb0440-d6f0-427e-86f6-3d043e8d36ff-openstack-edpm-ipam\") pod \"dnsmasq-dns-74554f47dc-zc49w\" (UID: \"12bb0440-d6f0-427e-86f6-3d043e8d36ff\") " pod="openstack/dnsmasq-dns-74554f47dc-zc49w" Feb 17 16:27:36 crc kubenswrapper[4672]: I0217 16:27:36.228376 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/12bb0440-d6f0-427e-86f6-3d043e8d36ff-dns-swift-storage-0\") pod \"dnsmasq-dns-74554f47dc-zc49w\" (UID: 
\"12bb0440-d6f0-427e-86f6-3d043e8d36ff\") " pod="openstack/dnsmasq-dns-74554f47dc-zc49w" Feb 17 16:27:36 crc kubenswrapper[4672]: I0217 16:27:36.228986 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12bb0440-d6f0-427e-86f6-3d043e8d36ff-ovsdbserver-nb\") pod \"dnsmasq-dns-74554f47dc-zc49w\" (UID: \"12bb0440-d6f0-427e-86f6-3d043e8d36ff\") " pod="openstack/dnsmasq-dns-74554f47dc-zc49w" Feb 17 16:27:36 crc kubenswrapper[4672]: I0217 16:27:36.229184 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12bb0440-d6f0-427e-86f6-3d043e8d36ff-dns-svc\") pod \"dnsmasq-dns-74554f47dc-zc49w\" (UID: \"12bb0440-d6f0-427e-86f6-3d043e8d36ff\") " pod="openstack/dnsmasq-dns-74554f47dc-zc49w" Feb 17 16:27:36 crc kubenswrapper[4672]: I0217 16:27:36.229739 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/12bb0440-d6f0-427e-86f6-3d043e8d36ff-openstack-edpm-ipam\") pod \"dnsmasq-dns-74554f47dc-zc49w\" (UID: \"12bb0440-d6f0-427e-86f6-3d043e8d36ff\") " pod="openstack/dnsmasq-dns-74554f47dc-zc49w" Feb 17 16:27:36 crc kubenswrapper[4672]: I0217 16:27:36.229842 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12bb0440-d6f0-427e-86f6-3d043e8d36ff-config\") pod \"dnsmasq-dns-74554f47dc-zc49w\" (UID: \"12bb0440-d6f0-427e-86f6-3d043e8d36ff\") " pod="openstack/dnsmasq-dns-74554f47dc-zc49w" Feb 17 16:27:36 crc kubenswrapper[4672]: I0217 16:27:36.230097 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12bb0440-d6f0-427e-86f6-3d043e8d36ff-ovsdbserver-sb\") pod \"dnsmasq-dns-74554f47dc-zc49w\" (UID: \"12bb0440-d6f0-427e-86f6-3d043e8d36ff\") " pod="openstack/dnsmasq-dns-74554f47dc-zc49w" Feb 17 16:27:36 crc 
kubenswrapper[4672]: I0217 16:27:36.230115 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/12bb0440-d6f0-427e-86f6-3d043e8d36ff-dns-swift-storage-0\") pod \"dnsmasq-dns-74554f47dc-zc49w\" (UID: \"12bb0440-d6f0-427e-86f6-3d043e8d36ff\") " pod="openstack/dnsmasq-dns-74554f47dc-zc49w" Feb 17 16:27:36 crc kubenswrapper[4672]: I0217 16:27:36.260452 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwm9l\" (UniqueName: \"kubernetes.io/projected/12bb0440-d6f0-427e-86f6-3d043e8d36ff-kube-api-access-mwm9l\") pod \"dnsmasq-dns-74554f47dc-zc49w\" (UID: \"12bb0440-d6f0-427e-86f6-3d043e8d36ff\") " pod="openstack/dnsmasq-dns-74554f47dc-zc49w" Feb 17 16:27:36 crc kubenswrapper[4672]: I0217 16:27:36.397653 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74554f47dc-zc49w" Feb 17 16:27:36 crc kubenswrapper[4672]: W0217 16:27:36.887156 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12bb0440_d6f0_427e_86f6_3d043e8d36ff.slice/crio-2121fe7d2de35117974527f84f2308ac410cda6ac8a02bb17e02e6e68520f0d7 WatchSource:0}: Error finding container 2121fe7d2de35117974527f84f2308ac410cda6ac8a02bb17e02e6e68520f0d7: Status 404 returned error can't find the container with id 2121fe7d2de35117974527f84f2308ac410cda6ac8a02bb17e02e6e68520f0d7 Feb 17 16:27:36 crc kubenswrapper[4672]: I0217 16:27:36.888364 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74554f47dc-zc49w"] Feb 17 16:27:37 crc kubenswrapper[4672]: I0217 16:27:37.188096 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74554f47dc-zc49w" event={"ID":"12bb0440-d6f0-427e-86f6-3d043e8d36ff","Type":"ContainerStarted","Data":"a520bf150e914c123552966358df7a533afb90f08437f818f296a7d221d30063"} Feb 17 16:27:37 crc 
kubenswrapper[4672]: I0217 16:27:37.188577 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74554f47dc-zc49w" event={"ID":"12bb0440-d6f0-427e-86f6-3d043e8d36ff","Type":"ContainerStarted","Data":"2121fe7d2de35117974527f84f2308ac410cda6ac8a02bb17e02e6e68520f0d7"} Feb 17 16:27:37 crc kubenswrapper[4672]: I0217 16:27:37.190673 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2da88232-8248-48fa-98e2-3220a17cc432","Type":"ContainerStarted","Data":"403ed9cfe8251aab21bcb53b8cd45c10067c8d563a098641b7571d23ba14aa9d"} Feb 17 16:27:37 crc kubenswrapper[4672]: I0217 16:27:37.195221 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9a73e2db-d320-4e3c-9412-02555a0a17eb","Type":"ContainerStarted","Data":"9c2c432e5abea3cf3420594a691c7254308dd10e1e350c342c30cc571fd96d96"} Feb 17 16:27:38 crc kubenswrapper[4672]: I0217 16:27:38.205806 4672 generic.go:334] "Generic (PLEG): container finished" podID="12bb0440-d6f0-427e-86f6-3d043e8d36ff" containerID="a520bf150e914c123552966358df7a533afb90f08437f818f296a7d221d30063" exitCode=0 Feb 17 16:27:38 crc kubenswrapper[4672]: I0217 16:27:38.205881 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74554f47dc-zc49w" event={"ID":"12bb0440-d6f0-427e-86f6-3d043e8d36ff","Type":"ContainerDied","Data":"a520bf150e914c123552966358df7a533afb90f08437f818f296a7d221d30063"} Feb 17 16:27:39 crc kubenswrapper[4672]: I0217 16:27:39.220702 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74554f47dc-zc49w" event={"ID":"12bb0440-d6f0-427e-86f6-3d043e8d36ff","Type":"ContainerStarted","Data":"c0fe7346be88e146ecd51e2fe609d041f8d01f8c2028fab47d6c927d9aabaeb1"} Feb 17 16:27:39 crc kubenswrapper[4672]: I0217 16:27:39.221182 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74554f47dc-zc49w" Feb 17 16:27:39 crc 
kubenswrapper[4672]: I0217 16:27:39.256854 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74554f47dc-zc49w" podStartSLOduration=3.256831378 podStartE2EDuration="3.256831378s" podCreationTimestamp="2026-02-17 16:27:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:27:39.247103391 +0000 UTC m=+1468.001192163" watchObservedRunningTime="2026-02-17 16:27:39.256831378 +0000 UTC m=+1468.010920140" Feb 17 16:27:46 crc kubenswrapper[4672]: E0217 16:27:46.048650 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:27:46 crc kubenswrapper[4672]: E0217 16:27:46.049089 4672 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:27:46 crc kubenswrapper[4672]: E0217 16:27:46.049199 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66h7h644h64ch5f8h565hfch5dh56chfdh8hfdh5b5h567h6dh665h557h74h665hcbh96h659h554h589h57fh5d9h55h564hcfh5dhffhfdq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tx4bs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(9e58ce9b-ddd5-42bb-8e07-08a22c8871a5): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:27:46 crc kubenswrapper[4672]: E0217 16:27:46.051067 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:27:46 crc kubenswrapper[4672]: I0217 16:27:46.124374 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 17 16:27:46 crc kubenswrapper[4672]: E0217 16:27:46.334236 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:27:46 crc kubenswrapper[4672]: I0217 16:27:46.400669 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74554f47dc-zc49w" Feb 17 16:27:46 crc kubenswrapper[4672]: I0217 16:27:46.486019 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64c8b5dcc-w87br"] Feb 17 16:27:46 crc kubenswrapper[4672]: I0217 16:27:46.486459 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-64c8b5dcc-w87br" podUID="8520f022-ba09-48ff-a7e7-1d8f55225a69" containerName="dnsmasq-dns" containerID="cri-o://58d7610309c352aa467eb7bd43abeaa2b6b8cfd8fd57f238ee68c2a23b7f67eb" gracePeriod=10 Feb 17 16:27:46 crc kubenswrapper[4672]: I0217 16:27:46.668840 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bb494c7f-zl6qj"] Feb 17 16:27:46 crc kubenswrapper[4672]: I0217 16:27:46.670827 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bb494c7f-zl6qj" Feb 17 16:27:46 crc kubenswrapper[4672]: I0217 16:27:46.691836 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bb494c7f-zl6qj"] Feb 17 16:27:46 crc kubenswrapper[4672]: I0217 16:27:46.866867 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d859d437-24f3-497a-96b0-6ccd5e0381b7-dns-svc\") pod \"dnsmasq-dns-7bb494c7f-zl6qj\" (UID: \"d859d437-24f3-497a-96b0-6ccd5e0381b7\") " pod="openstack/dnsmasq-dns-7bb494c7f-zl6qj" Feb 17 16:27:46 crc kubenswrapper[4672]: I0217 16:27:46.866971 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d859d437-24f3-497a-96b0-6ccd5e0381b7-dns-swift-storage-0\") pod \"dnsmasq-dns-7bb494c7f-zl6qj\" (UID: \"d859d437-24f3-497a-96b0-6ccd5e0381b7\") " pod="openstack/dnsmasq-dns-7bb494c7f-zl6qj" Feb 17 16:27:46 crc kubenswrapper[4672]: I0217 16:27:46.867192 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d859d437-24f3-497a-96b0-6ccd5e0381b7-ovsdbserver-nb\") pod \"dnsmasq-dns-7bb494c7f-zl6qj\" (UID: \"d859d437-24f3-497a-96b0-6ccd5e0381b7\") " pod="openstack/dnsmasq-dns-7bb494c7f-zl6qj" Feb 17 16:27:46 crc kubenswrapper[4672]: I0217 16:27:46.867247 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d859d437-24f3-497a-96b0-6ccd5e0381b7-openstack-edpm-ipam\") pod \"dnsmasq-dns-7bb494c7f-zl6qj\" (UID: \"d859d437-24f3-497a-96b0-6ccd5e0381b7\") " pod="openstack/dnsmasq-dns-7bb494c7f-zl6qj" Feb 17 16:27:46 crc kubenswrapper[4672]: I0217 16:27:46.867275 4672 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d859d437-24f3-497a-96b0-6ccd5e0381b7-config\") pod \"dnsmasq-dns-7bb494c7f-zl6qj\" (UID: \"d859d437-24f3-497a-96b0-6ccd5e0381b7\") " pod="openstack/dnsmasq-dns-7bb494c7f-zl6qj" Feb 17 16:27:46 crc kubenswrapper[4672]: I0217 16:27:46.867334 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnh8t\" (UniqueName: \"kubernetes.io/projected/d859d437-24f3-497a-96b0-6ccd5e0381b7-kube-api-access-xnh8t\") pod \"dnsmasq-dns-7bb494c7f-zl6qj\" (UID: \"d859d437-24f3-497a-96b0-6ccd5e0381b7\") " pod="openstack/dnsmasq-dns-7bb494c7f-zl6qj" Feb 17 16:27:46 crc kubenswrapper[4672]: I0217 16:27:46.867369 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d859d437-24f3-497a-96b0-6ccd5e0381b7-ovsdbserver-sb\") pod \"dnsmasq-dns-7bb494c7f-zl6qj\" (UID: \"d859d437-24f3-497a-96b0-6ccd5e0381b7\") " pod="openstack/dnsmasq-dns-7bb494c7f-zl6qj" Feb 17 16:27:46 crc kubenswrapper[4672]: I0217 16:27:46.969311 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d859d437-24f3-497a-96b0-6ccd5e0381b7-dns-svc\") pod \"dnsmasq-dns-7bb494c7f-zl6qj\" (UID: \"d859d437-24f3-497a-96b0-6ccd5e0381b7\") " pod="openstack/dnsmasq-dns-7bb494c7f-zl6qj" Feb 17 16:27:46 crc kubenswrapper[4672]: I0217 16:27:46.970390 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d859d437-24f3-497a-96b0-6ccd5e0381b7-dns-swift-storage-0\") pod \"dnsmasq-dns-7bb494c7f-zl6qj\" (UID: \"d859d437-24f3-497a-96b0-6ccd5e0381b7\") " pod="openstack/dnsmasq-dns-7bb494c7f-zl6qj" Feb 17 16:27:46 crc kubenswrapper[4672]: I0217 16:27:46.970466 4672 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d859d437-24f3-497a-96b0-6ccd5e0381b7-dns-svc\") pod \"dnsmasq-dns-7bb494c7f-zl6qj\" (UID: \"d859d437-24f3-497a-96b0-6ccd5e0381b7\") " pod="openstack/dnsmasq-dns-7bb494c7f-zl6qj" Feb 17 16:27:46 crc kubenswrapper[4672]: I0217 16:27:46.969388 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d859d437-24f3-497a-96b0-6ccd5e0381b7-dns-swift-storage-0\") pod \"dnsmasq-dns-7bb494c7f-zl6qj\" (UID: \"d859d437-24f3-497a-96b0-6ccd5e0381b7\") " pod="openstack/dnsmasq-dns-7bb494c7f-zl6qj" Feb 17 16:27:46 crc kubenswrapper[4672]: I0217 16:27:46.970732 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d859d437-24f3-497a-96b0-6ccd5e0381b7-ovsdbserver-nb\") pod \"dnsmasq-dns-7bb494c7f-zl6qj\" (UID: \"d859d437-24f3-497a-96b0-6ccd5e0381b7\") " pod="openstack/dnsmasq-dns-7bb494c7f-zl6qj" Feb 17 16:27:46 crc kubenswrapper[4672]: I0217 16:27:46.970793 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d859d437-24f3-497a-96b0-6ccd5e0381b7-openstack-edpm-ipam\") pod \"dnsmasq-dns-7bb494c7f-zl6qj\" (UID: \"d859d437-24f3-497a-96b0-6ccd5e0381b7\") " pod="openstack/dnsmasq-dns-7bb494c7f-zl6qj" Feb 17 16:27:46 crc kubenswrapper[4672]: I0217 16:27:46.970814 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d859d437-24f3-497a-96b0-6ccd5e0381b7-config\") pod \"dnsmasq-dns-7bb494c7f-zl6qj\" (UID: \"d859d437-24f3-497a-96b0-6ccd5e0381b7\") " pod="openstack/dnsmasq-dns-7bb494c7f-zl6qj" Feb 17 16:27:46 crc kubenswrapper[4672]: I0217 16:27:46.971254 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnh8t\" (UniqueName: 
\"kubernetes.io/projected/d859d437-24f3-497a-96b0-6ccd5e0381b7-kube-api-access-xnh8t\") pod \"dnsmasq-dns-7bb494c7f-zl6qj\" (UID: \"d859d437-24f3-497a-96b0-6ccd5e0381b7\") " pod="openstack/dnsmasq-dns-7bb494c7f-zl6qj" Feb 17 16:27:46 crc kubenswrapper[4672]: I0217 16:27:46.971365 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d859d437-24f3-497a-96b0-6ccd5e0381b7-ovsdbserver-sb\") pod \"dnsmasq-dns-7bb494c7f-zl6qj\" (UID: \"d859d437-24f3-497a-96b0-6ccd5e0381b7\") " pod="openstack/dnsmasq-dns-7bb494c7f-zl6qj" Feb 17 16:27:46 crc kubenswrapper[4672]: I0217 16:27:46.972648 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d859d437-24f3-497a-96b0-6ccd5e0381b7-config\") pod \"dnsmasq-dns-7bb494c7f-zl6qj\" (UID: \"d859d437-24f3-497a-96b0-6ccd5e0381b7\") " pod="openstack/dnsmasq-dns-7bb494c7f-zl6qj" Feb 17 16:27:46 crc kubenswrapper[4672]: I0217 16:27:46.972554 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d859d437-24f3-497a-96b0-6ccd5e0381b7-ovsdbserver-nb\") pod \"dnsmasq-dns-7bb494c7f-zl6qj\" (UID: \"d859d437-24f3-497a-96b0-6ccd5e0381b7\") " pod="openstack/dnsmasq-dns-7bb494c7f-zl6qj" Feb 17 16:27:46 crc kubenswrapper[4672]: I0217 16:27:46.973398 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d859d437-24f3-497a-96b0-6ccd5e0381b7-ovsdbserver-sb\") pod \"dnsmasq-dns-7bb494c7f-zl6qj\" (UID: \"d859d437-24f3-497a-96b0-6ccd5e0381b7\") " pod="openstack/dnsmasq-dns-7bb494c7f-zl6qj" Feb 17 16:27:46 crc kubenswrapper[4672]: I0217 16:27:46.978786 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d859d437-24f3-497a-96b0-6ccd5e0381b7-openstack-edpm-ipam\") pod 
\"dnsmasq-dns-7bb494c7f-zl6qj\" (UID: \"d859d437-24f3-497a-96b0-6ccd5e0381b7\") " pod="openstack/dnsmasq-dns-7bb494c7f-zl6qj" Feb 17 16:27:46 crc kubenswrapper[4672]: I0217 16:27:46.992730 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnh8t\" (UniqueName: \"kubernetes.io/projected/d859d437-24f3-497a-96b0-6ccd5e0381b7-kube-api-access-xnh8t\") pod \"dnsmasq-dns-7bb494c7f-zl6qj\" (UID: \"d859d437-24f3-497a-96b0-6ccd5e0381b7\") " pod="openstack/dnsmasq-dns-7bb494c7f-zl6qj" Feb 17 16:27:46 crc kubenswrapper[4672]: I0217 16:27:46.994421 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bb494c7f-zl6qj" Feb 17 16:27:47 crc kubenswrapper[4672]: I0217 16:27:47.108410 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64c8b5dcc-w87br" Feb 17 16:27:47 crc kubenswrapper[4672]: I0217 16:27:47.282893 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8520f022-ba09-48ff-a7e7-1d8f55225a69-ovsdbserver-nb\") pod \"8520f022-ba09-48ff-a7e7-1d8f55225a69\" (UID: \"8520f022-ba09-48ff-a7e7-1d8f55225a69\") " Feb 17 16:27:47 crc kubenswrapper[4672]: I0217 16:27:47.282933 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8520f022-ba09-48ff-a7e7-1d8f55225a69-config\") pod \"8520f022-ba09-48ff-a7e7-1d8f55225a69\" (UID: \"8520f022-ba09-48ff-a7e7-1d8f55225a69\") " Feb 17 16:27:47 crc kubenswrapper[4672]: I0217 16:27:47.282994 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8520f022-ba09-48ff-a7e7-1d8f55225a69-dns-svc\") pod \"8520f022-ba09-48ff-a7e7-1d8f55225a69\" (UID: \"8520f022-ba09-48ff-a7e7-1d8f55225a69\") " Feb 17 16:27:47 crc kubenswrapper[4672]: I0217 16:27:47.283051 4672 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8520f022-ba09-48ff-a7e7-1d8f55225a69-ovsdbserver-sb\") pod \"8520f022-ba09-48ff-a7e7-1d8f55225a69\" (UID: \"8520f022-ba09-48ff-a7e7-1d8f55225a69\") " Feb 17 16:27:47 crc kubenswrapper[4672]: I0217 16:27:47.283129 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mv9vf\" (UniqueName: \"kubernetes.io/projected/8520f022-ba09-48ff-a7e7-1d8f55225a69-kube-api-access-mv9vf\") pod \"8520f022-ba09-48ff-a7e7-1d8f55225a69\" (UID: \"8520f022-ba09-48ff-a7e7-1d8f55225a69\") " Feb 17 16:27:47 crc kubenswrapper[4672]: I0217 16:27:47.283182 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8520f022-ba09-48ff-a7e7-1d8f55225a69-dns-swift-storage-0\") pod \"8520f022-ba09-48ff-a7e7-1d8f55225a69\" (UID: \"8520f022-ba09-48ff-a7e7-1d8f55225a69\") " Feb 17 16:27:47 crc kubenswrapper[4672]: I0217 16:27:47.289699 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8520f022-ba09-48ff-a7e7-1d8f55225a69-kube-api-access-mv9vf" (OuterVolumeSpecName: "kube-api-access-mv9vf") pod "8520f022-ba09-48ff-a7e7-1d8f55225a69" (UID: "8520f022-ba09-48ff-a7e7-1d8f55225a69"). InnerVolumeSpecName "kube-api-access-mv9vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:27:47 crc kubenswrapper[4672]: I0217 16:27:47.347570 4672 generic.go:334] "Generic (PLEG): container finished" podID="8520f022-ba09-48ff-a7e7-1d8f55225a69" containerID="58d7610309c352aa467eb7bd43abeaa2b6b8cfd8fd57f238ee68c2a23b7f67eb" exitCode=0 Feb 17 16:27:47 crc kubenswrapper[4672]: I0217 16:27:47.347648 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64c8b5dcc-w87br" event={"ID":"8520f022-ba09-48ff-a7e7-1d8f55225a69","Type":"ContainerDied","Data":"58d7610309c352aa467eb7bd43abeaa2b6b8cfd8fd57f238ee68c2a23b7f67eb"} Feb 17 16:27:47 crc kubenswrapper[4672]: I0217 16:27:47.347677 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64c8b5dcc-w87br" event={"ID":"8520f022-ba09-48ff-a7e7-1d8f55225a69","Type":"ContainerDied","Data":"fdd58ec5ec2fd730b7d0e96c83c01fd3f08f68b5f2c86e1d72a961028b2a4fc3"} Feb 17 16:27:47 crc kubenswrapper[4672]: I0217 16:27:47.347662 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64c8b5dcc-w87br" Feb 17 16:27:47 crc kubenswrapper[4672]: I0217 16:27:47.347694 4672 scope.go:117] "RemoveContainer" containerID="58d7610309c352aa467eb7bd43abeaa2b6b8cfd8fd57f238ee68c2a23b7f67eb" Feb 17 16:27:47 crc kubenswrapper[4672]: I0217 16:27:47.356065 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8520f022-ba09-48ff-a7e7-1d8f55225a69-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8520f022-ba09-48ff-a7e7-1d8f55225a69" (UID: "8520f022-ba09-48ff-a7e7-1d8f55225a69"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:27:47 crc kubenswrapper[4672]: I0217 16:27:47.366822 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8520f022-ba09-48ff-a7e7-1d8f55225a69-config" (OuterVolumeSpecName: "config") pod "8520f022-ba09-48ff-a7e7-1d8f55225a69" (UID: "8520f022-ba09-48ff-a7e7-1d8f55225a69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:27:47 crc kubenswrapper[4672]: I0217 16:27:47.380630 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8520f022-ba09-48ff-a7e7-1d8f55225a69-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8520f022-ba09-48ff-a7e7-1d8f55225a69" (UID: "8520f022-ba09-48ff-a7e7-1d8f55225a69"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:27:47 crc kubenswrapper[4672]: I0217 16:27:47.386006 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8520f022-ba09-48ff-a7e7-1d8f55225a69-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8520f022-ba09-48ff-a7e7-1d8f55225a69" (UID: "8520f022-ba09-48ff-a7e7-1d8f55225a69"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:27:47 crc kubenswrapper[4672]: I0217 16:27:47.386809 4672 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8520f022-ba09-48ff-a7e7-1d8f55225a69-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:47 crc kubenswrapper[4672]: I0217 16:27:47.386839 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8520f022-ba09-48ff-a7e7-1d8f55225a69-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:47 crc kubenswrapper[4672]: I0217 16:27:47.386848 4672 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8520f022-ba09-48ff-a7e7-1d8f55225a69-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:47 crc kubenswrapper[4672]: I0217 16:27:47.386859 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mv9vf\" (UniqueName: \"kubernetes.io/projected/8520f022-ba09-48ff-a7e7-1d8f55225a69-kube-api-access-mv9vf\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:47 crc kubenswrapper[4672]: I0217 16:27:47.386870 4672 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8520f022-ba09-48ff-a7e7-1d8f55225a69-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:47 crc kubenswrapper[4672]: I0217 16:27:47.394689 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8520f022-ba09-48ff-a7e7-1d8f55225a69-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8520f022-ba09-48ff-a7e7-1d8f55225a69" (UID: "8520f022-ba09-48ff-a7e7-1d8f55225a69"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:27:47 crc kubenswrapper[4672]: I0217 16:27:47.395010 4672 scope.go:117] "RemoveContainer" containerID="9d2cd1a9d544ccf31401dca8e1f81b63310c7e5351247a6afd9010822a403768" Feb 17 16:27:47 crc kubenswrapper[4672]: I0217 16:27:47.467737 4672 scope.go:117] "RemoveContainer" containerID="58d7610309c352aa467eb7bd43abeaa2b6b8cfd8fd57f238ee68c2a23b7f67eb" Feb 17 16:27:47 crc kubenswrapper[4672]: E0217 16:27:47.469137 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58d7610309c352aa467eb7bd43abeaa2b6b8cfd8fd57f238ee68c2a23b7f67eb\": container with ID starting with 58d7610309c352aa467eb7bd43abeaa2b6b8cfd8fd57f238ee68c2a23b7f67eb not found: ID does not exist" containerID="58d7610309c352aa467eb7bd43abeaa2b6b8cfd8fd57f238ee68c2a23b7f67eb" Feb 17 16:27:47 crc kubenswrapper[4672]: I0217 16:27:47.469183 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58d7610309c352aa467eb7bd43abeaa2b6b8cfd8fd57f238ee68c2a23b7f67eb"} err="failed to get container status \"58d7610309c352aa467eb7bd43abeaa2b6b8cfd8fd57f238ee68c2a23b7f67eb\": rpc error: code = NotFound desc = could not find container \"58d7610309c352aa467eb7bd43abeaa2b6b8cfd8fd57f238ee68c2a23b7f67eb\": container with ID starting with 58d7610309c352aa467eb7bd43abeaa2b6b8cfd8fd57f238ee68c2a23b7f67eb not found: ID does not exist" Feb 17 16:27:47 crc kubenswrapper[4672]: I0217 16:27:47.469212 4672 scope.go:117] "RemoveContainer" containerID="9d2cd1a9d544ccf31401dca8e1f81b63310c7e5351247a6afd9010822a403768" Feb 17 16:27:47 crc kubenswrapper[4672]: E0217 16:27:47.470450 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d2cd1a9d544ccf31401dca8e1f81b63310c7e5351247a6afd9010822a403768\": container with ID starting with 
9d2cd1a9d544ccf31401dca8e1f81b63310c7e5351247a6afd9010822a403768 not found: ID does not exist" containerID="9d2cd1a9d544ccf31401dca8e1f81b63310c7e5351247a6afd9010822a403768" Feb 17 16:27:47 crc kubenswrapper[4672]: I0217 16:27:47.470562 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d2cd1a9d544ccf31401dca8e1f81b63310c7e5351247a6afd9010822a403768"} err="failed to get container status \"9d2cd1a9d544ccf31401dca8e1f81b63310c7e5351247a6afd9010822a403768\": rpc error: code = NotFound desc = could not find container \"9d2cd1a9d544ccf31401dca8e1f81b63310c7e5351247a6afd9010822a403768\": container with ID starting with 9d2cd1a9d544ccf31401dca8e1f81b63310c7e5351247a6afd9010822a403768 not found: ID does not exist" Feb 17 16:27:47 crc kubenswrapper[4672]: I0217 16:27:47.490194 4672 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8520f022-ba09-48ff-a7e7-1d8f55225a69-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:47 crc kubenswrapper[4672]: I0217 16:27:47.504617 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bb494c7f-zl6qj"] Feb 17 16:27:47 crc kubenswrapper[4672]: I0217 16:27:47.731006 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64c8b5dcc-w87br"] Feb 17 16:27:47 crc kubenswrapper[4672]: I0217 16:27:47.744371 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-64c8b5dcc-w87br"] Feb 17 16:27:47 crc kubenswrapper[4672]: E0217 16:27:47.946847 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:27:47 crc kubenswrapper[4672]: I0217 16:27:47.957810 4672 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8520f022-ba09-48ff-a7e7-1d8f55225a69" path="/var/lib/kubelet/pods/8520f022-ba09-48ff-a7e7-1d8f55225a69/volumes" Feb 17 16:27:48 crc kubenswrapper[4672]: I0217 16:27:48.361153 4672 generic.go:334] "Generic (PLEG): container finished" podID="d859d437-24f3-497a-96b0-6ccd5e0381b7" containerID="52864ff961db8eb27e6b6f4ae2e359172b412242037fedc34eaab74b2c43bede" exitCode=0 Feb 17 16:27:48 crc kubenswrapper[4672]: I0217 16:27:48.361212 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bb494c7f-zl6qj" event={"ID":"d859d437-24f3-497a-96b0-6ccd5e0381b7","Type":"ContainerDied","Data":"52864ff961db8eb27e6b6f4ae2e359172b412242037fedc34eaab74b2c43bede"} Feb 17 16:27:48 crc kubenswrapper[4672]: I0217 16:27:48.361236 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bb494c7f-zl6qj" event={"ID":"d859d437-24f3-497a-96b0-6ccd5e0381b7","Type":"ContainerStarted","Data":"f56e130a1cf83b10095f307281afa421d3276c1721bb37889ad6576591e0b394"} Feb 17 16:27:49 crc kubenswrapper[4672]: I0217 16:27:49.379635 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bb494c7f-zl6qj" event={"ID":"d859d437-24f3-497a-96b0-6ccd5e0381b7","Type":"ContainerStarted","Data":"15b3ac9f5d25c1e5683c2f7f53de16037b0d652769d357004e95fa6d7ff187f5"} Feb 17 16:27:49 crc kubenswrapper[4672]: I0217 16:27:49.379997 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7bb494c7f-zl6qj" Feb 17 16:27:49 crc kubenswrapper[4672]: I0217 16:27:49.421705 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7bb494c7f-zl6qj" podStartSLOduration=3.421683484 podStartE2EDuration="3.421683484s" podCreationTimestamp="2026-02-17 16:27:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 
16:27:49.416273452 +0000 UTC m=+1478.170362254" watchObservedRunningTime="2026-02-17 16:27:49.421683484 +0000 UTC m=+1478.175772226" Feb 17 16:27:56 crc kubenswrapper[4672]: I0217 16:27:56.996837 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7bb494c7f-zl6qj" Feb 17 16:27:57 crc kubenswrapper[4672]: I0217 16:27:57.087277 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74554f47dc-zc49w"] Feb 17 16:27:57 crc kubenswrapper[4672]: I0217 16:27:57.092034 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74554f47dc-zc49w" podUID="12bb0440-d6f0-427e-86f6-3d043e8d36ff" containerName="dnsmasq-dns" containerID="cri-o://c0fe7346be88e146ecd51e2fe609d041f8d01f8c2028fab47d6c927d9aabaeb1" gracePeriod=10 Feb 17 16:27:57 crc kubenswrapper[4672]: I0217 16:27:57.470808 4672 generic.go:334] "Generic (PLEG): container finished" podID="12bb0440-d6f0-427e-86f6-3d043e8d36ff" containerID="c0fe7346be88e146ecd51e2fe609d041f8d01f8c2028fab47d6c927d9aabaeb1" exitCode=0 Feb 17 16:27:57 crc kubenswrapper[4672]: I0217 16:27:57.470894 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74554f47dc-zc49w" event={"ID":"12bb0440-d6f0-427e-86f6-3d043e8d36ff","Type":"ContainerDied","Data":"c0fe7346be88e146ecd51e2fe609d041f8d01f8c2028fab47d6c927d9aabaeb1"} Feb 17 16:27:57 crc kubenswrapper[4672]: I0217 16:27:57.685222 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74554f47dc-zc49w" Feb 17 16:27:57 crc kubenswrapper[4672]: I0217 16:27:57.818083 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12bb0440-d6f0-427e-86f6-3d043e8d36ff-ovsdbserver-sb\") pod \"12bb0440-d6f0-427e-86f6-3d043e8d36ff\" (UID: \"12bb0440-d6f0-427e-86f6-3d043e8d36ff\") " Feb 17 16:27:57 crc kubenswrapper[4672]: I0217 16:27:57.818147 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwm9l\" (UniqueName: \"kubernetes.io/projected/12bb0440-d6f0-427e-86f6-3d043e8d36ff-kube-api-access-mwm9l\") pod \"12bb0440-d6f0-427e-86f6-3d043e8d36ff\" (UID: \"12bb0440-d6f0-427e-86f6-3d043e8d36ff\") " Feb 17 16:27:57 crc kubenswrapper[4672]: I0217 16:27:57.818181 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/12bb0440-d6f0-427e-86f6-3d043e8d36ff-dns-swift-storage-0\") pod \"12bb0440-d6f0-427e-86f6-3d043e8d36ff\" (UID: \"12bb0440-d6f0-427e-86f6-3d043e8d36ff\") " Feb 17 16:27:57 crc kubenswrapper[4672]: I0217 16:27:57.818245 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12bb0440-d6f0-427e-86f6-3d043e8d36ff-ovsdbserver-nb\") pod \"12bb0440-d6f0-427e-86f6-3d043e8d36ff\" (UID: \"12bb0440-d6f0-427e-86f6-3d043e8d36ff\") " Feb 17 16:27:57 crc kubenswrapper[4672]: I0217 16:27:57.818266 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/12bb0440-d6f0-427e-86f6-3d043e8d36ff-openstack-edpm-ipam\") pod \"12bb0440-d6f0-427e-86f6-3d043e8d36ff\" (UID: \"12bb0440-d6f0-427e-86f6-3d043e8d36ff\") " Feb 17 16:27:57 crc kubenswrapper[4672]: I0217 16:27:57.818303 4672 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12bb0440-d6f0-427e-86f6-3d043e8d36ff-dns-svc\") pod \"12bb0440-d6f0-427e-86f6-3d043e8d36ff\" (UID: \"12bb0440-d6f0-427e-86f6-3d043e8d36ff\") " Feb 17 16:27:57 crc kubenswrapper[4672]: I0217 16:27:57.818460 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12bb0440-d6f0-427e-86f6-3d043e8d36ff-config\") pod \"12bb0440-d6f0-427e-86f6-3d043e8d36ff\" (UID: \"12bb0440-d6f0-427e-86f6-3d043e8d36ff\") " Feb 17 16:27:57 crc kubenswrapper[4672]: I0217 16:27:57.823657 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12bb0440-d6f0-427e-86f6-3d043e8d36ff-kube-api-access-mwm9l" (OuterVolumeSpecName: "kube-api-access-mwm9l") pod "12bb0440-d6f0-427e-86f6-3d043e8d36ff" (UID: "12bb0440-d6f0-427e-86f6-3d043e8d36ff"). InnerVolumeSpecName "kube-api-access-mwm9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:27:57 crc kubenswrapper[4672]: I0217 16:27:57.878043 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12bb0440-d6f0-427e-86f6-3d043e8d36ff-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "12bb0440-d6f0-427e-86f6-3d043e8d36ff" (UID: "12bb0440-d6f0-427e-86f6-3d043e8d36ff"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:27:57 crc kubenswrapper[4672]: I0217 16:27:57.900163 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12bb0440-d6f0-427e-86f6-3d043e8d36ff-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "12bb0440-d6f0-427e-86f6-3d043e8d36ff" (UID: "12bb0440-d6f0-427e-86f6-3d043e8d36ff"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:27:57 crc kubenswrapper[4672]: I0217 16:27:57.902185 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12bb0440-d6f0-427e-86f6-3d043e8d36ff-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "12bb0440-d6f0-427e-86f6-3d043e8d36ff" (UID: "12bb0440-d6f0-427e-86f6-3d043e8d36ff"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:27:57 crc kubenswrapper[4672]: I0217 16:27:57.902192 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12bb0440-d6f0-427e-86f6-3d043e8d36ff-config" (OuterVolumeSpecName: "config") pod "12bb0440-d6f0-427e-86f6-3d043e8d36ff" (UID: "12bb0440-d6f0-427e-86f6-3d043e8d36ff"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:27:57 crc kubenswrapper[4672]: I0217 16:27:57.910650 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12bb0440-d6f0-427e-86f6-3d043e8d36ff-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "12bb0440-d6f0-427e-86f6-3d043e8d36ff" (UID: "12bb0440-d6f0-427e-86f6-3d043e8d36ff"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:27:57 crc kubenswrapper[4672]: I0217 16:27:57.914846 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12bb0440-d6f0-427e-86f6-3d043e8d36ff-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "12bb0440-d6f0-427e-86f6-3d043e8d36ff" (UID: "12bb0440-d6f0-427e-86f6-3d043e8d36ff"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:27:57 crc kubenswrapper[4672]: I0217 16:27:57.920778 4672 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12bb0440-d6f0-427e-86f6-3d043e8d36ff-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:57 crc kubenswrapper[4672]: I0217 16:27:57.920800 4672 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12bb0440-d6f0-427e-86f6-3d043e8d36ff-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:57 crc kubenswrapper[4672]: I0217 16:27:57.920810 4672 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12bb0440-d6f0-427e-86f6-3d043e8d36ff-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:57 crc kubenswrapper[4672]: I0217 16:27:57.920820 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwm9l\" (UniqueName: \"kubernetes.io/projected/12bb0440-d6f0-427e-86f6-3d043e8d36ff-kube-api-access-mwm9l\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:57 crc kubenswrapper[4672]: I0217 16:27:57.920829 4672 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/12bb0440-d6f0-427e-86f6-3d043e8d36ff-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:57 crc kubenswrapper[4672]: I0217 16:27:57.920837 4672 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12bb0440-d6f0-427e-86f6-3d043e8d36ff-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:57 crc kubenswrapper[4672]: I0217 16:27:57.920845 4672 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/12bb0440-d6f0-427e-86f6-3d043e8d36ff-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 16:27:58 crc kubenswrapper[4672]: I0217 16:27:58.481747 
4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74554f47dc-zc49w" event={"ID":"12bb0440-d6f0-427e-86f6-3d043e8d36ff","Type":"ContainerDied","Data":"2121fe7d2de35117974527f84f2308ac410cda6ac8a02bb17e02e6e68520f0d7"} Feb 17 16:27:58 crc kubenswrapper[4672]: I0217 16:27:58.482096 4672 scope.go:117] "RemoveContainer" containerID="c0fe7346be88e146ecd51e2fe609d041f8d01f8c2028fab47d6c927d9aabaeb1" Feb 17 16:27:58 crc kubenswrapper[4672]: I0217 16:27:58.481804 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74554f47dc-zc49w" Feb 17 16:27:58 crc kubenswrapper[4672]: I0217 16:27:58.509092 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74554f47dc-zc49w"] Feb 17 16:27:58 crc kubenswrapper[4672]: I0217 16:27:58.521684 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74554f47dc-zc49w"] Feb 17 16:27:58 crc kubenswrapper[4672]: I0217 16:27:58.526889 4672 scope.go:117] "RemoveContainer" containerID="a520bf150e914c123552966358df7a533afb90f08437f818f296a7d221d30063" Feb 17 16:27:59 crc kubenswrapper[4672]: E0217 16:27:59.076982 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:27:59 crc kubenswrapper[4672]: E0217 16:27:59.077042 4672 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:27:59 crc kubenswrapper[4672]: E0217 16:27:59.077184 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},Volume
Mount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nq9ps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-qrhj8_openstack(dc5471f5-2491-4841-be45-09c8f14b35c0): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 17 16:27:59 crc kubenswrapper[4672]: E0217 16:27:59.078411 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:27:59 crc kubenswrapper[4672]: I0217 16:27:59.957353 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12bb0440-d6f0-427e-86f6-3d043e8d36ff" path="/var/lib/kubelet/pods/12bb0440-d6f0-427e-86f6-3d043e8d36ff/volumes" Feb 17 16:28:00 crc kubenswrapper[4672]: E0217 16:28:00.947053 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:28:01 crc kubenswrapper[4672]: I0217 16:28:01.341223 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mz2mt"] Feb 17 16:28:01 crc kubenswrapper[4672]: E0217 16:28:01.342306 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12bb0440-d6f0-427e-86f6-3d043e8d36ff" containerName="init" Feb 17 16:28:01 crc kubenswrapper[4672]: I0217 16:28:01.342370 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="12bb0440-d6f0-427e-86f6-3d043e8d36ff" containerName="init" Feb 17 16:28:01 crc kubenswrapper[4672]: E0217 16:28:01.342399 4672 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8520f022-ba09-48ff-a7e7-1d8f55225a69" containerName="dnsmasq-dns" Feb 17 16:28:01 crc kubenswrapper[4672]: I0217 16:28:01.342412 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="8520f022-ba09-48ff-a7e7-1d8f55225a69" containerName="dnsmasq-dns" Feb 17 16:28:01 crc kubenswrapper[4672]: E0217 16:28:01.342489 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12bb0440-d6f0-427e-86f6-3d043e8d36ff" containerName="dnsmasq-dns" Feb 17 16:28:01 crc kubenswrapper[4672]: I0217 16:28:01.342503 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="12bb0440-d6f0-427e-86f6-3d043e8d36ff" containerName="dnsmasq-dns" Feb 17 16:28:01 crc kubenswrapper[4672]: E0217 16:28:01.342568 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8520f022-ba09-48ff-a7e7-1d8f55225a69" containerName="init" Feb 17 16:28:01 crc kubenswrapper[4672]: I0217 16:28:01.342581 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="8520f022-ba09-48ff-a7e7-1d8f55225a69" containerName="init" Feb 17 16:28:01 crc kubenswrapper[4672]: I0217 16:28:01.343128 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="8520f022-ba09-48ff-a7e7-1d8f55225a69" containerName="dnsmasq-dns" Feb 17 16:28:01 crc kubenswrapper[4672]: I0217 16:28:01.343178 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="12bb0440-d6f0-427e-86f6-3d043e8d36ff" containerName="dnsmasq-dns" Feb 17 16:28:01 crc kubenswrapper[4672]: I0217 16:28:01.346054 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mz2mt" Feb 17 16:28:01 crc kubenswrapper[4672]: I0217 16:28:01.354873 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mz2mt"] Feb 17 16:28:01 crc kubenswrapper[4672]: I0217 16:28:01.503133 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbthh\" (UniqueName: \"kubernetes.io/projected/e335a878-a3d7-4447-b8db-6e4236f184b8-kube-api-access-gbthh\") pod \"community-operators-mz2mt\" (UID: \"e335a878-a3d7-4447-b8db-6e4236f184b8\") " pod="openshift-marketplace/community-operators-mz2mt" Feb 17 16:28:01 crc kubenswrapper[4672]: I0217 16:28:01.503528 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e335a878-a3d7-4447-b8db-6e4236f184b8-utilities\") pod \"community-operators-mz2mt\" (UID: \"e335a878-a3d7-4447-b8db-6e4236f184b8\") " pod="openshift-marketplace/community-operators-mz2mt" Feb 17 16:28:01 crc kubenswrapper[4672]: I0217 16:28:01.503599 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e335a878-a3d7-4447-b8db-6e4236f184b8-catalog-content\") pod \"community-operators-mz2mt\" (UID: \"e335a878-a3d7-4447-b8db-6e4236f184b8\") " pod="openshift-marketplace/community-operators-mz2mt" Feb 17 16:28:01 crc kubenswrapper[4672]: I0217 16:28:01.605561 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e335a878-a3d7-4447-b8db-6e4236f184b8-utilities\") pod \"community-operators-mz2mt\" (UID: \"e335a878-a3d7-4447-b8db-6e4236f184b8\") " pod="openshift-marketplace/community-operators-mz2mt" Feb 17 16:28:01 crc kubenswrapper[4672]: I0217 16:28:01.605663 4672 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e335a878-a3d7-4447-b8db-6e4236f184b8-catalog-content\") pod \"community-operators-mz2mt\" (UID: \"e335a878-a3d7-4447-b8db-6e4236f184b8\") " pod="openshift-marketplace/community-operators-mz2mt" Feb 17 16:28:01 crc kubenswrapper[4672]: I0217 16:28:01.605868 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbthh\" (UniqueName: \"kubernetes.io/projected/e335a878-a3d7-4447-b8db-6e4236f184b8-kube-api-access-gbthh\") pod \"community-operators-mz2mt\" (UID: \"e335a878-a3d7-4447-b8db-6e4236f184b8\") " pod="openshift-marketplace/community-operators-mz2mt" Feb 17 16:28:01 crc kubenswrapper[4672]: I0217 16:28:01.606222 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e335a878-a3d7-4447-b8db-6e4236f184b8-utilities\") pod \"community-operators-mz2mt\" (UID: \"e335a878-a3d7-4447-b8db-6e4236f184b8\") " pod="openshift-marketplace/community-operators-mz2mt" Feb 17 16:28:01 crc kubenswrapper[4672]: I0217 16:28:01.606310 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e335a878-a3d7-4447-b8db-6e4236f184b8-catalog-content\") pod \"community-operators-mz2mt\" (UID: \"e335a878-a3d7-4447-b8db-6e4236f184b8\") " pod="openshift-marketplace/community-operators-mz2mt" Feb 17 16:28:01 crc kubenswrapper[4672]: I0217 16:28:01.638985 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbthh\" (UniqueName: \"kubernetes.io/projected/e335a878-a3d7-4447-b8db-6e4236f184b8-kube-api-access-gbthh\") pod \"community-operators-mz2mt\" (UID: \"e335a878-a3d7-4447-b8db-6e4236f184b8\") " pod="openshift-marketplace/community-operators-mz2mt" Feb 17 16:28:01 crc kubenswrapper[4672]: I0217 16:28:01.681304 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mz2mt" Feb 17 16:28:02 crc kubenswrapper[4672]: I0217 16:28:02.311631 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mz2mt"] Feb 17 16:28:02 crc kubenswrapper[4672]: W0217 16:28:02.312328 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode335a878_a3d7_4447_b8db_6e4236f184b8.slice/crio-d547c0b8137d9026dc69c862ad4c8af79bcd61b1d9f0c819ad3f94f019271f3a WatchSource:0}: Error finding container d547c0b8137d9026dc69c862ad4c8af79bcd61b1d9f0c819ad3f94f019271f3a: Status 404 returned error can't find the container with id d547c0b8137d9026dc69c862ad4c8af79bcd61b1d9f0c819ad3f94f019271f3a Feb 17 16:28:02 crc kubenswrapper[4672]: I0217 16:28:02.528780 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mz2mt" event={"ID":"e335a878-a3d7-4447-b8db-6e4236f184b8","Type":"ContainerStarted","Data":"d547c0b8137d9026dc69c862ad4c8af79bcd61b1d9f0c819ad3f94f019271f3a"} Feb 17 16:28:03 crc kubenswrapper[4672]: I0217 16:28:03.541671 4672 generic.go:334] "Generic (PLEG): container finished" podID="e335a878-a3d7-4447-b8db-6e4236f184b8" containerID="8f6f149566f68a6fe2459d0d664ed488b936849902be64317606655d473ffce6" exitCode=0 Feb 17 16:28:03 crc kubenswrapper[4672]: I0217 16:28:03.542254 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mz2mt" event={"ID":"e335a878-a3d7-4447-b8db-6e4236f184b8","Type":"ContainerDied","Data":"8f6f149566f68a6fe2459d0d664ed488b936849902be64317606655d473ffce6"} Feb 17 16:28:04 crc kubenswrapper[4672]: I0217 16:28:04.555547 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mz2mt" 
event={"ID":"e335a878-a3d7-4447-b8db-6e4236f184b8","Type":"ContainerStarted","Data":"c91e507fa6c51e45e5b8441208504ae558aa3d0c6af7a5b3ef7475a6eb9bffaa"} Feb 17 16:28:05 crc kubenswrapper[4672]: I0217 16:28:05.567037 4672 generic.go:334] "Generic (PLEG): container finished" podID="e335a878-a3d7-4447-b8db-6e4236f184b8" containerID="c91e507fa6c51e45e5b8441208504ae558aa3d0c6af7a5b3ef7475a6eb9bffaa" exitCode=0 Feb 17 16:28:05 crc kubenswrapper[4672]: I0217 16:28:05.567148 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mz2mt" event={"ID":"e335a878-a3d7-4447-b8db-6e4236f184b8","Type":"ContainerDied","Data":"c91e507fa6c51e45e5b8441208504ae558aa3d0c6af7a5b3ef7475a6eb9bffaa"} Feb 17 16:28:06 crc kubenswrapper[4672]: I0217 16:28:06.580765 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mz2mt" event={"ID":"e335a878-a3d7-4447-b8db-6e4236f184b8","Type":"ContainerStarted","Data":"933a974be0e65b25a48c6ac8c900d44c248e8a6d0bb4671b79e7a23099369c08"} Feb 17 16:28:06 crc kubenswrapper[4672]: I0217 16:28:06.650692 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mz2mt" podStartSLOduration=3.149406244 podStartE2EDuration="5.650670698s" podCreationTimestamp="2026-02-17 16:28:01 +0000 UTC" firstStartedPulling="2026-02-17 16:28:03.544273554 +0000 UTC m=+1492.298362306" lastFinishedPulling="2026-02-17 16:28:06.045537988 +0000 UTC m=+1494.799626760" observedRunningTime="2026-02-17 16:28:06.606150805 +0000 UTC m=+1495.360239557" watchObservedRunningTime="2026-02-17 16:28:06.650670698 +0000 UTC m=+1495.404759430" Feb 17 16:28:08 crc kubenswrapper[4672]: I0217 16:28:08.604704 4672 generic.go:334] "Generic (PLEG): container finished" podID="2da88232-8248-48fa-98e2-3220a17cc432" containerID="403ed9cfe8251aab21bcb53b8cd45c10067c8d563a098641b7571d23ba14aa9d" exitCode=0 Feb 17 16:28:08 crc kubenswrapper[4672]: I0217 
16:28:08.604792 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2da88232-8248-48fa-98e2-3220a17cc432","Type":"ContainerDied","Data":"403ed9cfe8251aab21bcb53b8cd45c10067c8d563a098641b7571d23ba14aa9d"} Feb 17 16:28:09 crc kubenswrapper[4672]: I0217 16:28:09.631551 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2da88232-8248-48fa-98e2-3220a17cc432","Type":"ContainerStarted","Data":"2c08bace1b3aaf5f655cedf339b8aaa69b35f16391ebe04681862fdbda9c432c"} Feb 17 16:28:09 crc kubenswrapper[4672]: I0217 16:28:09.633661 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 17 16:28:09 crc kubenswrapper[4672]: I0217 16:28:09.637188 4672 generic.go:334] "Generic (PLEG): container finished" podID="9a73e2db-d320-4e3c-9412-02555a0a17eb" containerID="9c2c432e5abea3cf3420594a691c7254308dd10e1e350c342c30cc571fd96d96" exitCode=0 Feb 17 16:28:09 crc kubenswrapper[4672]: I0217 16:28:09.637225 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9a73e2db-d320-4e3c-9412-02555a0a17eb","Type":"ContainerDied","Data":"9c2c432e5abea3cf3420594a691c7254308dd10e1e350c342c30cc571fd96d96"} Feb 17 16:28:09 crc kubenswrapper[4672]: I0217 16:28:09.676429 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.676415558 podStartE2EDuration="36.676415558s" podCreationTimestamp="2026-02-17 16:27:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:28:09.668376556 +0000 UTC m=+1498.422465308" watchObservedRunningTime="2026-02-17 16:28:09.676415558 +0000 UTC m=+1498.430504300" Feb 17 16:28:10 crc kubenswrapper[4672]: I0217 16:28:10.457123 4672 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-frdt6"] Feb 17 16:28:10 crc kubenswrapper[4672]: I0217 16:28:10.458729 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-frdt6" Feb 17 16:28:10 crc kubenswrapper[4672]: I0217 16:28:10.463049 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 16:28:10 crc kubenswrapper[4672]: I0217 16:28:10.463263 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z6sng" Feb 17 16:28:10 crc kubenswrapper[4672]: I0217 16:28:10.463441 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 16:28:10 crc kubenswrapper[4672]: I0217 16:28:10.465832 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 16:28:10 crc kubenswrapper[4672]: I0217 16:28:10.465879 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-frdt6"] Feb 17 16:28:10 crc kubenswrapper[4672]: I0217 16:28:10.534025 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5km4\" (UniqueName: \"kubernetes.io/projected/dd8e4614-fb4d-4444-827f-659cffc613ea-kube-api-access-n5km4\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-frdt6\" (UID: \"dd8e4614-fb4d-4444-827f-659cffc613ea\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-frdt6" Feb 17 16:28:10 crc kubenswrapper[4672]: I0217 16:28:10.534196 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd8e4614-fb4d-4444-827f-659cffc613ea-repo-setup-combined-ca-bundle\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-frdt6\" (UID: \"dd8e4614-fb4d-4444-827f-659cffc613ea\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-frdt6" Feb 17 16:28:10 crc kubenswrapper[4672]: I0217 16:28:10.534408 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd8e4614-fb4d-4444-827f-659cffc613ea-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-frdt6\" (UID: \"dd8e4614-fb4d-4444-827f-659cffc613ea\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-frdt6" Feb 17 16:28:10 crc kubenswrapper[4672]: I0217 16:28:10.535171 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd8e4614-fb4d-4444-827f-659cffc613ea-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-frdt6\" (UID: \"dd8e4614-fb4d-4444-827f-659cffc613ea\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-frdt6" Feb 17 16:28:10 crc kubenswrapper[4672]: I0217 16:28:10.636745 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd8e4614-fb4d-4444-827f-659cffc613ea-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-frdt6\" (UID: \"dd8e4614-fb4d-4444-827f-659cffc613ea\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-frdt6" Feb 17 16:28:10 crc kubenswrapper[4672]: I0217 16:28:10.636855 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd8e4614-fb4d-4444-827f-659cffc613ea-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-frdt6\" (UID: \"dd8e4614-fb4d-4444-827f-659cffc613ea\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-frdt6" Feb 17 16:28:10 
crc kubenswrapper[4672]: I0217 16:28:10.636893 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5km4\" (UniqueName: \"kubernetes.io/projected/dd8e4614-fb4d-4444-827f-659cffc613ea-kube-api-access-n5km4\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-frdt6\" (UID: \"dd8e4614-fb4d-4444-827f-659cffc613ea\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-frdt6" Feb 17 16:28:10 crc kubenswrapper[4672]: I0217 16:28:10.636939 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd8e4614-fb4d-4444-827f-659cffc613ea-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-frdt6\" (UID: \"dd8e4614-fb4d-4444-827f-659cffc613ea\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-frdt6" Feb 17 16:28:10 crc kubenswrapper[4672]: I0217 16:28:10.643652 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd8e4614-fb4d-4444-827f-659cffc613ea-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-frdt6\" (UID: \"dd8e4614-fb4d-4444-827f-659cffc613ea\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-frdt6" Feb 17 16:28:10 crc kubenswrapper[4672]: I0217 16:28:10.643991 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd8e4614-fb4d-4444-827f-659cffc613ea-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-frdt6\" (UID: \"dd8e4614-fb4d-4444-827f-659cffc613ea\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-frdt6" Feb 17 16:28:10 crc kubenswrapper[4672]: I0217 16:28:10.648992 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"9a73e2db-d320-4e3c-9412-02555a0a17eb","Type":"ContainerStarted","Data":"26d91f6d525f101d07fc3307be86fec4ddee6716fb40d14a32e479bc1a2d9fdf"} Feb 17 16:28:10 crc kubenswrapper[4672]: I0217 16:28:10.649369 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:28:10 crc kubenswrapper[4672]: I0217 16:28:10.654639 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd8e4614-fb4d-4444-827f-659cffc613ea-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-frdt6\" (UID: \"dd8e4614-fb4d-4444-827f-659cffc613ea\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-frdt6" Feb 17 16:28:10 crc kubenswrapper[4672]: I0217 16:28:10.674478 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5km4\" (UniqueName: \"kubernetes.io/projected/dd8e4614-fb4d-4444-827f-659cffc613ea-kube-api-access-n5km4\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-frdt6\" (UID: \"dd8e4614-fb4d-4444-827f-659cffc613ea\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-frdt6" Feb 17 16:28:10 crc kubenswrapper[4672]: I0217 16:28:10.677192 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.677175517 podStartE2EDuration="36.677175517s" podCreationTimestamp="2026-02-17 16:27:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:28:10.675456542 +0000 UTC m=+1499.429545284" watchObservedRunningTime="2026-02-17 16:28:10.677175517 +0000 UTC m=+1499.431264249" Feb 17 16:28:10 crc kubenswrapper[4672]: I0217 16:28:10.779130 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-frdt6" Feb 17 16:28:11 crc kubenswrapper[4672]: I0217 16:28:11.497073 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-frdt6"] Feb 17 16:28:11 crc kubenswrapper[4672]: W0217 16:28:11.507619 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd8e4614_fb4d_4444_827f_659cffc613ea.slice/crio-dcf46046b24582d98d8890f1dfc83775a0a5ed6d8ae0e0c955a5ddc1fc2423f9 WatchSource:0}: Error finding container dcf46046b24582d98d8890f1dfc83775a0a5ed6d8ae0e0c955a5ddc1fc2423f9: Status 404 returned error can't find the container with id dcf46046b24582d98d8890f1dfc83775a0a5ed6d8ae0e0c955a5ddc1fc2423f9 Feb 17 16:28:11 crc kubenswrapper[4672]: I0217 16:28:11.661210 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-frdt6" event={"ID":"dd8e4614-fb4d-4444-827f-659cffc613ea","Type":"ContainerStarted","Data":"dcf46046b24582d98d8890f1dfc83775a0a5ed6d8ae0e0c955a5ddc1fc2423f9"} Feb 17 16:28:11 crc kubenswrapper[4672]: I0217 16:28:11.682502 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mz2mt" Feb 17 16:28:11 crc kubenswrapper[4672]: I0217 16:28:11.682576 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mz2mt" Feb 17 16:28:11 crc kubenswrapper[4672]: I0217 16:28:11.734533 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mz2mt" Feb 17 16:28:11 crc kubenswrapper[4672]: E0217 16:28:11.965231 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:28:12 crc kubenswrapper[4672]: I0217 16:28:12.773328 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mz2mt" Feb 17 16:28:12 crc kubenswrapper[4672]: I0217 16:28:12.873465 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mz2mt"] Feb 17 16:28:14 crc kubenswrapper[4672]: I0217 16:28:14.697301 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mz2mt" podUID="e335a878-a3d7-4447-b8db-6e4236f184b8" containerName="registry-server" containerID="cri-o://933a974be0e65b25a48c6ac8c900d44c248e8a6d0bb4671b79e7a23099369c08" gracePeriod=2 Feb 17 16:28:15 crc kubenswrapper[4672]: E0217 16:28:15.058276 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:28:15 crc kubenswrapper[4672]: E0217 16:28:15.059242 4672 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:28:15 crc kubenswrapper[4672]: E0217 16:28:15.059348 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66h7h644h64ch5f8h565hfch5dh56chfdh8hfdh5b5h567h6dh665h557h74h665hcbh96h659h554h589h57fh5d9h55h564hcfh5dhffhfdq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tx4bs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(9e58ce9b-ddd5-42bb-8e07-08a22c8871a5): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:28:15 crc kubenswrapper[4672]: E0217 16:28:15.060556 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:28:15 crc kubenswrapper[4672]: I0217 16:28:15.709414 4672 generic.go:334] "Generic (PLEG): container finished" podID="e335a878-a3d7-4447-b8db-6e4236f184b8" containerID="933a974be0e65b25a48c6ac8c900d44c248e8a6d0bb4671b79e7a23099369c08" exitCode=0 Feb 17 16:28:15 crc kubenswrapper[4672]: I0217 16:28:15.709460 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mz2mt" event={"ID":"e335a878-a3d7-4447-b8db-6e4236f184b8","Type":"ContainerDied","Data":"933a974be0e65b25a48c6ac8c900d44c248e8a6d0bb4671b79e7a23099369c08"} Feb 17 16:28:21 crc kubenswrapper[4672]: I0217 16:28:21.274310 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 16:28:21 crc kubenswrapper[4672]: I0217 16:28:21.449862 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mz2mt" Feb 17 16:28:21 crc kubenswrapper[4672]: I0217 16:28:21.611739 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbthh\" (UniqueName: \"kubernetes.io/projected/e335a878-a3d7-4447-b8db-6e4236f184b8-kube-api-access-gbthh\") pod \"e335a878-a3d7-4447-b8db-6e4236f184b8\" (UID: \"e335a878-a3d7-4447-b8db-6e4236f184b8\") " Feb 17 16:28:21 crc kubenswrapper[4672]: I0217 16:28:21.612174 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e335a878-a3d7-4447-b8db-6e4236f184b8-utilities\") pod \"e335a878-a3d7-4447-b8db-6e4236f184b8\" (UID: \"e335a878-a3d7-4447-b8db-6e4236f184b8\") " Feb 17 16:28:21 crc kubenswrapper[4672]: I0217 16:28:21.612233 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e335a878-a3d7-4447-b8db-6e4236f184b8-catalog-content\") pod \"e335a878-a3d7-4447-b8db-6e4236f184b8\" (UID: \"e335a878-a3d7-4447-b8db-6e4236f184b8\") " Feb 17 16:28:21 crc kubenswrapper[4672]: I0217 16:28:21.615233 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e335a878-a3d7-4447-b8db-6e4236f184b8-utilities" (OuterVolumeSpecName: "utilities") pod "e335a878-a3d7-4447-b8db-6e4236f184b8" (UID: "e335a878-a3d7-4447-b8db-6e4236f184b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:28:21 crc kubenswrapper[4672]: I0217 16:28:21.615623 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e335a878-a3d7-4447-b8db-6e4236f184b8-kube-api-access-gbthh" (OuterVolumeSpecName: "kube-api-access-gbthh") pod "e335a878-a3d7-4447-b8db-6e4236f184b8" (UID: "e335a878-a3d7-4447-b8db-6e4236f184b8"). InnerVolumeSpecName "kube-api-access-gbthh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:28:21 crc kubenswrapper[4672]: I0217 16:28:21.650984 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e335a878-a3d7-4447-b8db-6e4236f184b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e335a878-a3d7-4447-b8db-6e4236f184b8" (UID: "e335a878-a3d7-4447-b8db-6e4236f184b8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:28:21 crc kubenswrapper[4672]: I0217 16:28:21.714737 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbthh\" (UniqueName: \"kubernetes.io/projected/e335a878-a3d7-4447-b8db-6e4236f184b8-kube-api-access-gbthh\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:21 crc kubenswrapper[4672]: I0217 16:28:21.714767 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e335a878-a3d7-4447-b8db-6e4236f184b8-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:21 crc kubenswrapper[4672]: I0217 16:28:21.714777 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e335a878-a3d7-4447-b8db-6e4236f184b8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:21 crc kubenswrapper[4672]: I0217 16:28:21.766438 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mz2mt" event={"ID":"e335a878-a3d7-4447-b8db-6e4236f184b8","Type":"ContainerDied","Data":"d547c0b8137d9026dc69c862ad4c8af79bcd61b1d9f0c819ad3f94f019271f3a"} Feb 17 16:28:21 crc kubenswrapper[4672]: I0217 16:28:21.766446 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mz2mt" Feb 17 16:28:21 crc kubenswrapper[4672]: I0217 16:28:21.766489 4672 scope.go:117] "RemoveContainer" containerID="933a974be0e65b25a48c6ac8c900d44c248e8a6d0bb4671b79e7a23099369c08" Feb 17 16:28:21 crc kubenswrapper[4672]: I0217 16:28:21.772227 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-frdt6" event={"ID":"dd8e4614-fb4d-4444-827f-659cffc613ea","Type":"ContainerStarted","Data":"b3f75647b2dae24fbee6ed5b04c9e7aa4b4630a601f2e6512b7ac0fea81bfbe2"} Feb 17 16:28:21 crc kubenswrapper[4672]: I0217 16:28:21.821233 4672 scope.go:117] "RemoveContainer" containerID="c91e507fa6c51e45e5b8441208504ae558aa3d0c6af7a5b3ef7475a6eb9bffaa" Feb 17 16:28:21 crc kubenswrapper[4672]: I0217 16:28:21.822660 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-frdt6" podStartSLOduration=2.062417733 podStartE2EDuration="11.822648203s" podCreationTimestamp="2026-02-17 16:28:10 +0000 UTC" firstStartedPulling="2026-02-17 16:28:11.510680479 +0000 UTC m=+1500.264769211" lastFinishedPulling="2026-02-17 16:28:21.270910939 +0000 UTC m=+1510.024999681" observedRunningTime="2026-02-17 16:28:21.804898795 +0000 UTC m=+1510.558987527" watchObservedRunningTime="2026-02-17 16:28:21.822648203 +0000 UTC m=+1510.576736935" Feb 17 16:28:21 crc kubenswrapper[4672]: I0217 16:28:21.827339 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mz2mt"] Feb 17 16:28:21 crc kubenswrapper[4672]: I0217 16:28:21.840923 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mz2mt"] Feb 17 16:28:21 crc kubenswrapper[4672]: I0217 16:28:21.846564 4672 scope.go:117] "RemoveContainer" containerID="8f6f149566f68a6fe2459d0d664ed488b936849902be64317606655d473ffce6" Feb 17 16:28:21 crc kubenswrapper[4672]: I0217 
16:28:21.961142 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e335a878-a3d7-4447-b8db-6e4236f184b8" path="/var/lib/kubelet/pods/e335a878-a3d7-4447-b8db-6e4236f184b8/volumes" Feb 17 16:28:23 crc kubenswrapper[4672]: I0217 16:28:23.985445 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 17 16:28:24 crc kubenswrapper[4672]: I0217 16:28:24.627962 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:28:24 crc kubenswrapper[4672]: E0217 16:28:24.947506 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:28:27 crc kubenswrapper[4672]: I0217 16:28:27.565692 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:28:27 crc kubenswrapper[4672]: I0217 16:28:27.566855 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:28:28 crc kubenswrapper[4672]: E0217 16:28:28.947317 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:28:32 crc kubenswrapper[4672]: I0217 16:28:32.905225 4672 generic.go:334] "Generic (PLEG): container finished" podID="dd8e4614-fb4d-4444-827f-659cffc613ea" containerID="b3f75647b2dae24fbee6ed5b04c9e7aa4b4630a601f2e6512b7ac0fea81bfbe2" exitCode=0 Feb 17 16:28:32 crc kubenswrapper[4672]: I0217 16:28:32.905317 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-frdt6" event={"ID":"dd8e4614-fb4d-4444-827f-659cffc613ea","Type":"ContainerDied","Data":"b3f75647b2dae24fbee6ed5b04c9e7aa4b4630a601f2e6512b7ac0fea81bfbe2"} Feb 17 16:28:34 crc kubenswrapper[4672]: I0217 16:28:34.593363 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-frdt6" Feb 17 16:28:34 crc kubenswrapper[4672]: I0217 16:28:34.632265 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5km4\" (UniqueName: \"kubernetes.io/projected/dd8e4614-fb4d-4444-827f-659cffc613ea-kube-api-access-n5km4\") pod \"dd8e4614-fb4d-4444-827f-659cffc613ea\" (UID: \"dd8e4614-fb4d-4444-827f-659cffc613ea\") " Feb 17 16:28:34 crc kubenswrapper[4672]: I0217 16:28:34.632543 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd8e4614-fb4d-4444-827f-659cffc613ea-inventory\") pod \"dd8e4614-fb4d-4444-827f-659cffc613ea\" (UID: \"dd8e4614-fb4d-4444-827f-659cffc613ea\") " Feb 17 16:28:34 crc kubenswrapper[4672]: I0217 16:28:34.632629 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd8e4614-fb4d-4444-827f-659cffc613ea-repo-setup-combined-ca-bundle\") pod 
\"dd8e4614-fb4d-4444-827f-659cffc613ea\" (UID: \"dd8e4614-fb4d-4444-827f-659cffc613ea\") " Feb 17 16:28:34 crc kubenswrapper[4672]: I0217 16:28:34.632679 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd8e4614-fb4d-4444-827f-659cffc613ea-ssh-key-openstack-edpm-ipam\") pod \"dd8e4614-fb4d-4444-827f-659cffc613ea\" (UID: \"dd8e4614-fb4d-4444-827f-659cffc613ea\") " Feb 17 16:28:34 crc kubenswrapper[4672]: I0217 16:28:34.639559 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd8e4614-fb4d-4444-827f-659cffc613ea-kube-api-access-n5km4" (OuterVolumeSpecName: "kube-api-access-n5km4") pod "dd8e4614-fb4d-4444-827f-659cffc613ea" (UID: "dd8e4614-fb4d-4444-827f-659cffc613ea"). InnerVolumeSpecName "kube-api-access-n5km4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:28:34 crc kubenswrapper[4672]: I0217 16:28:34.652154 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd8e4614-fb4d-4444-827f-659cffc613ea-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "dd8e4614-fb4d-4444-827f-659cffc613ea" (UID: "dd8e4614-fb4d-4444-827f-659cffc613ea"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:28:34 crc kubenswrapper[4672]: I0217 16:28:34.679439 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd8e4614-fb4d-4444-827f-659cffc613ea-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dd8e4614-fb4d-4444-827f-659cffc613ea" (UID: "dd8e4614-fb4d-4444-827f-659cffc613ea"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:28:34 crc kubenswrapper[4672]: I0217 16:28:34.722939 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd8e4614-fb4d-4444-827f-659cffc613ea-inventory" (OuterVolumeSpecName: "inventory") pod "dd8e4614-fb4d-4444-827f-659cffc613ea" (UID: "dd8e4614-fb4d-4444-827f-659cffc613ea"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:28:34 crc kubenswrapper[4672]: I0217 16:28:34.735596 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5km4\" (UniqueName: \"kubernetes.io/projected/dd8e4614-fb4d-4444-827f-659cffc613ea-kube-api-access-n5km4\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:34 crc kubenswrapper[4672]: I0217 16:28:34.735628 4672 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd8e4614-fb4d-4444-827f-659cffc613ea-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:34 crc kubenswrapper[4672]: I0217 16:28:34.735638 4672 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd8e4614-fb4d-4444-827f-659cffc613ea-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:34 crc kubenswrapper[4672]: I0217 16:28:34.735647 4672 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd8e4614-fb4d-4444-827f-659cffc613ea-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:34 crc kubenswrapper[4672]: I0217 16:28:34.932924 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-frdt6" event={"ID":"dd8e4614-fb4d-4444-827f-659cffc613ea","Type":"ContainerDied","Data":"dcf46046b24582d98d8890f1dfc83775a0a5ed6d8ae0e0c955a5ddc1fc2423f9"} Feb 17 16:28:34 crc kubenswrapper[4672]: I0217 16:28:34.932967 
4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcf46046b24582d98d8890f1dfc83775a0a5ed6d8ae0e0c955a5ddc1fc2423f9" Feb 17 16:28:34 crc kubenswrapper[4672]: I0217 16:28:34.933053 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-frdt6" Feb 17 16:28:35 crc kubenswrapper[4672]: I0217 16:28:35.025765 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-4dj4d"] Feb 17 16:28:35 crc kubenswrapper[4672]: E0217 16:28:35.026258 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd8e4614-fb4d-4444-827f-659cffc613ea" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 17 16:28:35 crc kubenswrapper[4672]: I0217 16:28:35.026281 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd8e4614-fb4d-4444-827f-659cffc613ea" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 17 16:28:35 crc kubenswrapper[4672]: E0217 16:28:35.026296 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e335a878-a3d7-4447-b8db-6e4236f184b8" containerName="registry-server" Feb 17 16:28:35 crc kubenswrapper[4672]: I0217 16:28:35.026306 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="e335a878-a3d7-4447-b8db-6e4236f184b8" containerName="registry-server" Feb 17 16:28:35 crc kubenswrapper[4672]: E0217 16:28:35.026335 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e335a878-a3d7-4447-b8db-6e4236f184b8" containerName="extract-content" Feb 17 16:28:35 crc kubenswrapper[4672]: I0217 16:28:35.026342 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="e335a878-a3d7-4447-b8db-6e4236f184b8" containerName="extract-content" Feb 17 16:28:35 crc kubenswrapper[4672]: E0217 16:28:35.026372 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e335a878-a3d7-4447-b8db-6e4236f184b8" 
containerName="extract-utilities" Feb 17 16:28:35 crc kubenswrapper[4672]: I0217 16:28:35.026380 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="e335a878-a3d7-4447-b8db-6e4236f184b8" containerName="extract-utilities" Feb 17 16:28:35 crc kubenswrapper[4672]: I0217 16:28:35.026638 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="e335a878-a3d7-4447-b8db-6e4236f184b8" containerName="registry-server" Feb 17 16:28:35 crc kubenswrapper[4672]: I0217 16:28:35.026661 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd8e4614-fb4d-4444-827f-659cffc613ea" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 17 16:28:35 crc kubenswrapper[4672]: I0217 16:28:35.027505 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4dj4d" Feb 17 16:28:35 crc kubenswrapper[4672]: I0217 16:28:35.033052 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 16:28:35 crc kubenswrapper[4672]: I0217 16:28:35.033433 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 16:28:35 crc kubenswrapper[4672]: I0217 16:28:35.033797 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 16:28:35 crc kubenswrapper[4672]: I0217 16:28:35.035625 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z6sng" Feb 17 16:28:35 crc kubenswrapper[4672]: I0217 16:28:35.042467 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3efee2c1-0f1f-4611-ada4-055dac7d9bc5-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4dj4d\" (UID: \"3efee2c1-0f1f-4611-ada4-055dac7d9bc5\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4dj4d" Feb 17 16:28:35 crc kubenswrapper[4672]: I0217 16:28:35.042588 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndr7k\" (UniqueName: \"kubernetes.io/projected/3efee2c1-0f1f-4611-ada4-055dac7d9bc5-kube-api-access-ndr7k\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4dj4d\" (UID: \"3efee2c1-0f1f-4611-ada4-055dac7d9bc5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4dj4d" Feb 17 16:28:35 crc kubenswrapper[4672]: I0217 16:28:35.042804 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3efee2c1-0f1f-4611-ada4-055dac7d9bc5-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4dj4d\" (UID: \"3efee2c1-0f1f-4611-ada4-055dac7d9bc5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4dj4d" Feb 17 16:28:35 crc kubenswrapper[4672]: I0217 16:28:35.060826 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-4dj4d"] Feb 17 16:28:35 crc kubenswrapper[4672]: I0217 16:28:35.145074 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3efee2c1-0f1f-4611-ada4-055dac7d9bc5-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4dj4d\" (UID: \"3efee2c1-0f1f-4611-ada4-055dac7d9bc5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4dj4d" Feb 17 16:28:35 crc kubenswrapper[4672]: I0217 16:28:35.145160 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3efee2c1-0f1f-4611-ada4-055dac7d9bc5-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4dj4d\" (UID: \"3efee2c1-0f1f-4611-ada4-055dac7d9bc5\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4dj4d" Feb 17 16:28:35 crc kubenswrapper[4672]: I0217 16:28:35.145197 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndr7k\" (UniqueName: \"kubernetes.io/projected/3efee2c1-0f1f-4611-ada4-055dac7d9bc5-kube-api-access-ndr7k\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4dj4d\" (UID: \"3efee2c1-0f1f-4611-ada4-055dac7d9bc5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4dj4d" Feb 17 16:28:35 crc kubenswrapper[4672]: I0217 16:28:35.150552 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3efee2c1-0f1f-4611-ada4-055dac7d9bc5-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4dj4d\" (UID: \"3efee2c1-0f1f-4611-ada4-055dac7d9bc5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4dj4d" Feb 17 16:28:35 crc kubenswrapper[4672]: I0217 16:28:35.150789 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3efee2c1-0f1f-4611-ada4-055dac7d9bc5-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4dj4d\" (UID: \"3efee2c1-0f1f-4611-ada4-055dac7d9bc5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4dj4d" Feb 17 16:28:35 crc kubenswrapper[4672]: I0217 16:28:35.161238 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndr7k\" (UniqueName: \"kubernetes.io/projected/3efee2c1-0f1f-4611-ada4-055dac7d9bc5-kube-api-access-ndr7k\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4dj4d\" (UID: \"3efee2c1-0f1f-4611-ada4-055dac7d9bc5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4dj4d" Feb 17 16:28:35 crc kubenswrapper[4672]: I0217 16:28:35.356014 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4dj4d" Feb 17 16:28:35 crc kubenswrapper[4672]: I0217 16:28:35.888139 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-4dj4d"] Feb 17 16:28:35 crc kubenswrapper[4672]: I0217 16:28:35.942427 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4dj4d" event={"ID":"3efee2c1-0f1f-4611-ada4-055dac7d9bc5","Type":"ContainerStarted","Data":"91146deb47e6860608fb31dcde6cd6b0ee4c1cac79f4d455dbb9e7d8bce00add"} Feb 17 16:28:36 crc kubenswrapper[4672]: I0217 16:28:36.954637 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4dj4d" event={"ID":"3efee2c1-0f1f-4611-ada4-055dac7d9bc5","Type":"ContainerStarted","Data":"5883013b969ab0d555e1e66dade0e196897a7dd3020d82be86eb847dbebef4a0"} Feb 17 16:28:36 crc kubenswrapper[4672]: I0217 16:28:36.991453 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4dj4d" podStartSLOduration=2.420150925 podStartE2EDuration="2.991426584s" podCreationTimestamp="2026-02-17 16:28:34 +0000 UTC" firstStartedPulling="2026-02-17 16:28:35.892982789 +0000 UTC m=+1524.647071521" lastFinishedPulling="2026-02-17 16:28:36.464258448 +0000 UTC m=+1525.218347180" observedRunningTime="2026-02-17 16:28:36.973286006 +0000 UTC m=+1525.727374748" watchObservedRunningTime="2026-02-17 16:28:36.991426584 +0000 UTC m=+1525.745515356" Feb 17 16:28:39 crc kubenswrapper[4672]: I0217 16:28:39.994157 4672 generic.go:334] "Generic (PLEG): container finished" podID="3efee2c1-0f1f-4611-ada4-055dac7d9bc5" containerID="5883013b969ab0d555e1e66dade0e196897a7dd3020d82be86eb847dbebef4a0" exitCode=0 Feb 17 16:28:39 crc kubenswrapper[4672]: I0217 16:28:39.994271 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4dj4d" event={"ID":"3efee2c1-0f1f-4611-ada4-055dac7d9bc5","Type":"ContainerDied","Data":"5883013b969ab0d555e1e66dade0e196897a7dd3020d82be86eb847dbebef4a0"} Feb 17 16:28:40 crc kubenswrapper[4672]: E0217 16:28:40.094269 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:28:40 crc kubenswrapper[4672]: E0217 16:28:40.094790 4672 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:28:40 crc kubenswrapper[4672]: E0217 16:28:40.094980 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nq9ps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:n
il,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-qrhj8_openstack(dc5471f5-2491-4841-be45-09c8f14b35c0): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:28:40 crc kubenswrapper[4672]: E0217 16:28:40.096222 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:28:41 crc kubenswrapper[4672]: I0217 16:28:41.642586 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4dj4d" Feb 17 16:28:41 crc kubenswrapper[4672]: I0217 16:28:41.811318 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3efee2c1-0f1f-4611-ada4-055dac7d9bc5-inventory\") pod \"3efee2c1-0f1f-4611-ada4-055dac7d9bc5\" (UID: \"3efee2c1-0f1f-4611-ada4-055dac7d9bc5\") " Feb 17 16:28:41 crc kubenswrapper[4672]: I0217 16:28:41.811568 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndr7k\" (UniqueName: \"kubernetes.io/projected/3efee2c1-0f1f-4611-ada4-055dac7d9bc5-kube-api-access-ndr7k\") pod \"3efee2c1-0f1f-4611-ada4-055dac7d9bc5\" (UID: \"3efee2c1-0f1f-4611-ada4-055dac7d9bc5\") " Feb 17 16:28:41 crc kubenswrapper[4672]: I0217 16:28:41.811649 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3efee2c1-0f1f-4611-ada4-055dac7d9bc5-ssh-key-openstack-edpm-ipam\") pod \"3efee2c1-0f1f-4611-ada4-055dac7d9bc5\" (UID: \"3efee2c1-0f1f-4611-ada4-055dac7d9bc5\") " Feb 17 16:28:41 crc kubenswrapper[4672]: I0217 16:28:41.818379 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3efee2c1-0f1f-4611-ada4-055dac7d9bc5-kube-api-access-ndr7k" (OuterVolumeSpecName: "kube-api-access-ndr7k") pod "3efee2c1-0f1f-4611-ada4-055dac7d9bc5" (UID: "3efee2c1-0f1f-4611-ada4-055dac7d9bc5"). InnerVolumeSpecName "kube-api-access-ndr7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:28:41 crc kubenswrapper[4672]: I0217 16:28:41.840424 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3efee2c1-0f1f-4611-ada4-055dac7d9bc5-inventory" (OuterVolumeSpecName: "inventory") pod "3efee2c1-0f1f-4611-ada4-055dac7d9bc5" (UID: "3efee2c1-0f1f-4611-ada4-055dac7d9bc5"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:28:41 crc kubenswrapper[4672]: I0217 16:28:41.853856 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3efee2c1-0f1f-4611-ada4-055dac7d9bc5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3efee2c1-0f1f-4611-ada4-055dac7d9bc5" (UID: "3efee2c1-0f1f-4611-ada4-055dac7d9bc5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:28:41 crc kubenswrapper[4672]: I0217 16:28:41.914660 4672 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3efee2c1-0f1f-4611-ada4-055dac7d9bc5-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:41 crc kubenswrapper[4672]: I0217 16:28:41.914696 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndr7k\" (UniqueName: \"kubernetes.io/projected/3efee2c1-0f1f-4611-ada4-055dac7d9bc5-kube-api-access-ndr7k\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:41 crc kubenswrapper[4672]: I0217 16:28:41.914707 4672 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3efee2c1-0f1f-4611-ada4-055dac7d9bc5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:41 crc kubenswrapper[4672]: E0217 16:28:41.957398 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:28:42 crc kubenswrapper[4672]: I0217 16:28:42.018038 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4dj4d" 
event={"ID":"3efee2c1-0f1f-4611-ada4-055dac7d9bc5","Type":"ContainerDied","Data":"91146deb47e6860608fb31dcde6cd6b0ee4c1cac79f4d455dbb9e7d8bce00add"} Feb 17 16:28:42 crc kubenswrapper[4672]: I0217 16:28:42.018090 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91146deb47e6860608fb31dcde6cd6b0ee4c1cac79f4d455dbb9e7d8bce00add" Feb 17 16:28:42 crc kubenswrapper[4672]: I0217 16:28:42.018461 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4dj4d" Feb 17 16:28:42 crc kubenswrapper[4672]: I0217 16:28:42.090377 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tnx6"] Feb 17 16:28:42 crc kubenswrapper[4672]: E0217 16:28:42.090904 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3efee2c1-0f1f-4611-ada4-055dac7d9bc5" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 17 16:28:42 crc kubenswrapper[4672]: I0217 16:28:42.090925 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="3efee2c1-0f1f-4611-ada4-055dac7d9bc5" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 17 16:28:42 crc kubenswrapper[4672]: I0217 16:28:42.091228 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="3efee2c1-0f1f-4611-ada4-055dac7d9bc5" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 17 16:28:42 crc kubenswrapper[4672]: I0217 16:28:42.092382 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tnx6" Feb 17 16:28:42 crc kubenswrapper[4672]: I0217 16:28:42.103827 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 16:28:42 crc kubenswrapper[4672]: I0217 16:28:42.104443 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 16:28:42 crc kubenswrapper[4672]: I0217 16:28:42.104862 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 16:28:42 crc kubenswrapper[4672]: I0217 16:28:42.105222 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z6sng" Feb 17 16:28:42 crc kubenswrapper[4672]: I0217 16:28:42.106745 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tnx6"] Feb 17 16:28:42 crc kubenswrapper[4672]: I0217 16:28:42.140354 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb402cd6-e885-4c1e-958a-cb731cdd4569-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8tnx6\" (UID: \"fb402cd6-e885-4c1e-958a-cb731cdd4569\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tnx6" Feb 17 16:28:42 crc kubenswrapper[4672]: I0217 16:28:42.140645 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwbmf\" (UniqueName: \"kubernetes.io/projected/fb402cd6-e885-4c1e-958a-cb731cdd4569-kube-api-access-cwbmf\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8tnx6\" (UID: \"fb402cd6-e885-4c1e-958a-cb731cdd4569\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tnx6" Feb 17 16:28:42 crc kubenswrapper[4672]: I0217 
16:28:42.140922 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb402cd6-e885-4c1e-958a-cb731cdd4569-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8tnx6\" (UID: \"fb402cd6-e885-4c1e-958a-cb731cdd4569\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tnx6" Feb 17 16:28:42 crc kubenswrapper[4672]: I0217 16:28:42.141012 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fb402cd6-e885-4c1e-958a-cb731cdd4569-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8tnx6\" (UID: \"fb402cd6-e885-4c1e-958a-cb731cdd4569\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tnx6" Feb 17 16:28:42 crc kubenswrapper[4672]: I0217 16:28:42.243807 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb402cd6-e885-4c1e-958a-cb731cdd4569-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8tnx6\" (UID: \"fb402cd6-e885-4c1e-958a-cb731cdd4569\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tnx6" Feb 17 16:28:42 crc kubenswrapper[4672]: I0217 16:28:42.243985 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwbmf\" (UniqueName: \"kubernetes.io/projected/fb402cd6-e885-4c1e-958a-cb731cdd4569-kube-api-access-cwbmf\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8tnx6\" (UID: \"fb402cd6-e885-4c1e-958a-cb731cdd4569\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tnx6" Feb 17 16:28:42 crc kubenswrapper[4672]: I0217 16:28:42.244165 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/fb402cd6-e885-4c1e-958a-cb731cdd4569-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8tnx6\" (UID: \"fb402cd6-e885-4c1e-958a-cb731cdd4569\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tnx6" Feb 17 16:28:42 crc kubenswrapper[4672]: I0217 16:28:42.244873 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fb402cd6-e885-4c1e-958a-cb731cdd4569-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8tnx6\" (UID: \"fb402cd6-e885-4c1e-958a-cb731cdd4569\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tnx6" Feb 17 16:28:42 crc kubenswrapper[4672]: I0217 16:28:42.249540 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb402cd6-e885-4c1e-958a-cb731cdd4569-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8tnx6\" (UID: \"fb402cd6-e885-4c1e-958a-cb731cdd4569\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tnx6" Feb 17 16:28:42 crc kubenswrapper[4672]: I0217 16:28:42.250200 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb402cd6-e885-4c1e-958a-cb731cdd4569-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8tnx6\" (UID: \"fb402cd6-e885-4c1e-958a-cb731cdd4569\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tnx6" Feb 17 16:28:42 crc kubenswrapper[4672]: I0217 16:28:42.250383 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fb402cd6-e885-4c1e-958a-cb731cdd4569-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8tnx6\" (UID: \"fb402cd6-e885-4c1e-958a-cb731cdd4569\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tnx6" Feb 17 16:28:42 crc kubenswrapper[4672]: I0217 16:28:42.268118 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwbmf\" (UniqueName: \"kubernetes.io/projected/fb402cd6-e885-4c1e-958a-cb731cdd4569-kube-api-access-cwbmf\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8tnx6\" (UID: \"fb402cd6-e885-4c1e-958a-cb731cdd4569\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tnx6" Feb 17 16:28:42 crc kubenswrapper[4672]: I0217 16:28:42.452045 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tnx6" Feb 17 16:28:43 crc kubenswrapper[4672]: I0217 16:28:43.009088 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tnx6"] Feb 17 16:28:43 crc kubenswrapper[4672]: I0217 16:28:43.035473 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tnx6" event={"ID":"fb402cd6-e885-4c1e-958a-cb731cdd4569","Type":"ContainerStarted","Data":"e10aff40213c2db79558d261786b31c08ea762b79b23480030d2cb307cd8230f"} Feb 17 16:28:44 crc kubenswrapper[4672]: I0217 16:28:44.053105 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tnx6" event={"ID":"fb402cd6-e885-4c1e-958a-cb731cdd4569","Type":"ContainerStarted","Data":"23cbfd877ddd5378a27c9570201fd876a7517b4c61c8359a9774f5ea7c7d70eb"} Feb 17 16:28:44 crc kubenswrapper[4672]: I0217 16:28:44.085249 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tnx6" podStartSLOduration=1.682140462 podStartE2EDuration="2.085224217s" podCreationTimestamp="2026-02-17 16:28:42 +0000 UTC" firstStartedPulling="2026-02-17 16:28:43.024456735 +0000 UTC m=+1531.778545487" 
lastFinishedPulling="2026-02-17 16:28:43.42754051 +0000 UTC m=+1532.181629242" observedRunningTime="2026-02-17 16:28:44.074158915 +0000 UTC m=+1532.828247687" watchObservedRunningTime="2026-02-17 16:28:44.085224217 +0000 UTC m=+1532.839312969" Feb 17 16:28:50 crc kubenswrapper[4672]: E0217 16:28:50.947875 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:28:54 crc kubenswrapper[4672]: E0217 16:28:54.947932 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:28:57 crc kubenswrapper[4672]: I0217 16:28:57.565838 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:28:57 crc kubenswrapper[4672]: I0217 16:28:57.566246 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:29:01 crc kubenswrapper[4672]: E0217 16:29:01.962570 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:29:07 crc kubenswrapper[4672]: E0217 16:29:07.075481 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:29:07 crc kubenswrapper[4672]: E0217 16:29:07.076088 4672 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:29:07 crc kubenswrapper[4672]: E0217 16:29:07.076237 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66h7h644h64ch5f8h565hfch5dh56chfdh8hfdh5b5h567h6dh665h557h74h665hcbh96h659h554h589h57fh5d9h55h564hcfh5dhffhfdq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tx4bs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(9e58ce9b-ddd5-42bb-8e07-08a22c8871a5): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:29:07 crc kubenswrapper[4672]: E0217 16:29:07.077587 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:29:13 crc kubenswrapper[4672]: E0217 16:29:13.948059 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:29:14 crc kubenswrapper[4672]: I0217 16:29:14.888173 4672 scope.go:117] "RemoveContainer" containerID="f311e93ba52e53e04bc016546e66c38491b82ef7abef5d50d6c540b5885e99a7" Feb 17 16:29:14 crc kubenswrapper[4672]: I0217 16:29:14.935456 4672 scope.go:117] "RemoveContainer" containerID="4e239253615386b74a32a02d370df1d52edd468fd4cc3937b61a87ae1b60e2fa" Feb 17 16:29:15 crc kubenswrapper[4672]: I0217 16:29:15.024824 4672 scope.go:117] "RemoveContainer" containerID="9ec3b47d69ecbc02bf5535cd18c1587b2c5efea38a7090c4b5037148d6f43f52" Feb 17 16:29:15 crc kubenswrapper[4672]: I0217 16:29:15.073145 4672 scope.go:117] "RemoveContainer" containerID="0426f5faa9be35ab2713513540515886f13b5bd7bc01b26accedadb2ccde784f" Feb 17 16:29:21 crc kubenswrapper[4672]: E0217 16:29:21.957392 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:29:27 crc kubenswrapper[4672]: I0217 16:29:27.566686 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 17 16:29:27 crc kubenswrapper[4672]: I0217 16:29:27.567362 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:29:27 crc kubenswrapper[4672]: I0217 16:29:27.567427 4672 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" Feb 17 16:29:27 crc kubenswrapper[4672]: I0217 16:29:27.568645 4672 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"788d6fae0de977927563b863088aef42316f3581ec13b8d2264de7cde8aac261"} pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:29:27 crc kubenswrapper[4672]: I0217 16:29:27.568750 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" containerID="cri-o://788d6fae0de977927563b863088aef42316f3581ec13b8d2264de7cde8aac261" gracePeriod=600 Feb 17 16:29:27 crc kubenswrapper[4672]: E0217 16:29:27.700019 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:29:27 crc kubenswrapper[4672]: E0217 16:29:27.947208 
4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:29:28 crc kubenswrapper[4672]: I0217 16:29:28.138082 4672 generic.go:334] "Generic (PLEG): container finished" podID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerID="788d6fae0de977927563b863088aef42316f3581ec13b8d2264de7cde8aac261" exitCode=0 Feb 17 16:29:28 crc kubenswrapper[4672]: I0217 16:29:28.138139 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerDied","Data":"788d6fae0de977927563b863088aef42316f3581ec13b8d2264de7cde8aac261"} Feb 17 16:29:28 crc kubenswrapper[4672]: I0217 16:29:28.138207 4672 scope.go:117] "RemoveContainer" containerID="1722f428334a1de321c821e299e3526dfaf27650f5a791aad97e83a2cd3ceac4" Feb 17 16:29:28 crc kubenswrapper[4672]: I0217 16:29:28.139182 4672 scope.go:117] "RemoveContainer" containerID="788d6fae0de977927563b863088aef42316f3581ec13b8d2264de7cde8aac261" Feb 17 16:29:28 crc kubenswrapper[4672]: E0217 16:29:28.139687 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:29:32 crc kubenswrapper[4672]: E0217 16:29:32.949880 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:29:41 crc kubenswrapper[4672]: E0217 16:29:41.957691 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:29:42 crc kubenswrapper[4672]: I0217 16:29:42.945189 4672 scope.go:117] "RemoveContainer" containerID="788d6fae0de977927563b863088aef42316f3581ec13b8d2264de7cde8aac261" Feb 17 16:29:42 crc kubenswrapper[4672]: E0217 16:29:42.945742 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:29:43 crc kubenswrapper[4672]: E0217 16:29:43.948451 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:29:54 crc kubenswrapper[4672]: E0217 16:29:54.948328 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:29:55 crc kubenswrapper[4672]: E0217 16:29:55.948157 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:29:56 crc kubenswrapper[4672]: I0217 16:29:56.945920 4672 scope.go:117] "RemoveContainer" containerID="788d6fae0de977927563b863088aef42316f3581ec13b8d2264de7cde8aac261" Feb 17 16:29:56 crc kubenswrapper[4672]: E0217 16:29:56.946305 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:30:00 crc kubenswrapper[4672]: I0217 16:30:00.167686 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522430-mvmlp"] Feb 17 16:30:00 crc kubenswrapper[4672]: I0217 16:30:00.170439 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-mvmlp" Feb 17 16:30:00 crc kubenswrapper[4672]: I0217 16:30:00.173582 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 16:30:00 crc kubenswrapper[4672]: I0217 16:30:00.173746 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 16:30:00 crc kubenswrapper[4672]: I0217 16:30:00.185656 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522430-mvmlp"] Feb 17 16:30:00 crc kubenswrapper[4672]: I0217 16:30:00.335605 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a7c33111-5ce4-4e4d-b36c-58896f808426-secret-volume\") pod \"collect-profiles-29522430-mvmlp\" (UID: \"a7c33111-5ce4-4e4d-b36c-58896f808426\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-mvmlp" Feb 17 16:30:00 crc kubenswrapper[4672]: I0217 16:30:00.337701 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdrx4\" (UniqueName: \"kubernetes.io/projected/a7c33111-5ce4-4e4d-b36c-58896f808426-kube-api-access-pdrx4\") pod \"collect-profiles-29522430-mvmlp\" (UID: \"a7c33111-5ce4-4e4d-b36c-58896f808426\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-mvmlp" Feb 17 16:30:00 crc kubenswrapper[4672]: I0217 16:30:00.337868 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a7c33111-5ce4-4e4d-b36c-58896f808426-config-volume\") pod \"collect-profiles-29522430-mvmlp\" (UID: \"a7c33111-5ce4-4e4d-b36c-58896f808426\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-mvmlp" Feb 17 16:30:00 crc kubenswrapper[4672]: I0217 16:30:00.440787 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a7c33111-5ce4-4e4d-b36c-58896f808426-secret-volume\") pod \"collect-profiles-29522430-mvmlp\" (UID: \"a7c33111-5ce4-4e4d-b36c-58896f808426\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-mvmlp" Feb 17 16:30:00 crc kubenswrapper[4672]: I0217 16:30:00.440902 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdrx4\" (UniqueName: \"kubernetes.io/projected/a7c33111-5ce4-4e4d-b36c-58896f808426-kube-api-access-pdrx4\") pod \"collect-profiles-29522430-mvmlp\" (UID: \"a7c33111-5ce4-4e4d-b36c-58896f808426\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-mvmlp" Feb 17 16:30:00 crc kubenswrapper[4672]: I0217 16:30:00.440973 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a7c33111-5ce4-4e4d-b36c-58896f808426-config-volume\") pod \"collect-profiles-29522430-mvmlp\" (UID: \"a7c33111-5ce4-4e4d-b36c-58896f808426\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-mvmlp" Feb 17 16:30:00 crc kubenswrapper[4672]: I0217 16:30:00.442334 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a7c33111-5ce4-4e4d-b36c-58896f808426-config-volume\") pod \"collect-profiles-29522430-mvmlp\" (UID: \"a7c33111-5ce4-4e4d-b36c-58896f808426\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-mvmlp" Feb 17 16:30:00 crc kubenswrapper[4672]: I0217 16:30:00.448577 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/a7c33111-5ce4-4e4d-b36c-58896f808426-secret-volume\") pod \"collect-profiles-29522430-mvmlp\" (UID: \"a7c33111-5ce4-4e4d-b36c-58896f808426\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-mvmlp" Feb 17 16:30:00 crc kubenswrapper[4672]: I0217 16:30:00.461269 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdrx4\" (UniqueName: \"kubernetes.io/projected/a7c33111-5ce4-4e4d-b36c-58896f808426-kube-api-access-pdrx4\") pod \"collect-profiles-29522430-mvmlp\" (UID: \"a7c33111-5ce4-4e4d-b36c-58896f808426\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-mvmlp" Feb 17 16:30:00 crc kubenswrapper[4672]: I0217 16:30:00.534847 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-mvmlp" Feb 17 16:30:01 crc kubenswrapper[4672]: I0217 16:30:01.043998 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522430-mvmlp"] Feb 17 16:30:01 crc kubenswrapper[4672]: I0217 16:30:01.573666 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-mvmlp" event={"ID":"a7c33111-5ce4-4e4d-b36c-58896f808426","Type":"ContainerStarted","Data":"c83dce869c5f306dabb7d2a96a97af980ba8258cb2d22b0fd1cd077022c17de5"} Feb 17 16:30:01 crc kubenswrapper[4672]: I0217 16:30:01.574156 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-mvmlp" event={"ID":"a7c33111-5ce4-4e4d-b36c-58896f808426","Type":"ContainerStarted","Data":"fc20c7f93f2b8391fad982f39f77aa9e82135791554be250f3255e4e0f0a2bba"} Feb 17 16:30:01 crc kubenswrapper[4672]: I0217 16:30:01.594180 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-mvmlp" 
podStartSLOduration=1.594165608 podStartE2EDuration="1.594165608s" podCreationTimestamp="2026-02-17 16:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:30:01.590167133 +0000 UTC m=+1610.344255925" watchObservedRunningTime="2026-02-17 16:30:01.594165608 +0000 UTC m=+1610.348254340" Feb 17 16:30:02 crc kubenswrapper[4672]: I0217 16:30:02.582336 4672 generic.go:334] "Generic (PLEG): container finished" podID="a7c33111-5ce4-4e4d-b36c-58896f808426" containerID="c83dce869c5f306dabb7d2a96a97af980ba8258cb2d22b0fd1cd077022c17de5" exitCode=0 Feb 17 16:30:02 crc kubenswrapper[4672]: I0217 16:30:02.582598 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-mvmlp" event={"ID":"a7c33111-5ce4-4e4d-b36c-58896f808426","Type":"ContainerDied","Data":"c83dce869c5f306dabb7d2a96a97af980ba8258cb2d22b0fd1cd077022c17de5"} Feb 17 16:30:04 crc kubenswrapper[4672]: I0217 16:30:04.068851 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-mvmlp" Feb 17 16:30:04 crc kubenswrapper[4672]: I0217 16:30:04.125766 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a7c33111-5ce4-4e4d-b36c-58896f808426-secret-volume\") pod \"a7c33111-5ce4-4e4d-b36c-58896f808426\" (UID: \"a7c33111-5ce4-4e4d-b36c-58896f808426\") " Feb 17 16:30:04 crc kubenswrapper[4672]: I0217 16:30:04.125895 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a7c33111-5ce4-4e4d-b36c-58896f808426-config-volume\") pod \"a7c33111-5ce4-4e4d-b36c-58896f808426\" (UID: \"a7c33111-5ce4-4e4d-b36c-58896f808426\") " Feb 17 16:30:04 crc kubenswrapper[4672]: I0217 16:30:04.125951 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdrx4\" (UniqueName: \"kubernetes.io/projected/a7c33111-5ce4-4e4d-b36c-58896f808426-kube-api-access-pdrx4\") pod \"a7c33111-5ce4-4e4d-b36c-58896f808426\" (UID: \"a7c33111-5ce4-4e4d-b36c-58896f808426\") " Feb 17 16:30:04 crc kubenswrapper[4672]: I0217 16:30:04.128025 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7c33111-5ce4-4e4d-b36c-58896f808426-config-volume" (OuterVolumeSpecName: "config-volume") pod "a7c33111-5ce4-4e4d-b36c-58896f808426" (UID: "a7c33111-5ce4-4e4d-b36c-58896f808426"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:30:04 crc kubenswrapper[4672]: I0217 16:30:04.164175 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7c33111-5ce4-4e4d-b36c-58896f808426-kube-api-access-pdrx4" (OuterVolumeSpecName: "kube-api-access-pdrx4") pod "a7c33111-5ce4-4e4d-b36c-58896f808426" (UID: "a7c33111-5ce4-4e4d-b36c-58896f808426"). 
InnerVolumeSpecName "kube-api-access-pdrx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:30:04 crc kubenswrapper[4672]: I0217 16:30:04.164670 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7c33111-5ce4-4e4d-b36c-58896f808426-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a7c33111-5ce4-4e4d-b36c-58896f808426" (UID: "a7c33111-5ce4-4e4d-b36c-58896f808426"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:30:04 crc kubenswrapper[4672]: I0217 16:30:04.233157 4672 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a7c33111-5ce4-4e4d-b36c-58896f808426-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:30:04 crc kubenswrapper[4672]: I0217 16:30:04.233190 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdrx4\" (UniqueName: \"kubernetes.io/projected/a7c33111-5ce4-4e4d-b36c-58896f808426-kube-api-access-pdrx4\") on node \"crc\" DevicePath \"\"" Feb 17 16:30:04 crc kubenswrapper[4672]: I0217 16:30:04.233201 4672 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a7c33111-5ce4-4e4d-b36c-58896f808426-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:30:04 crc kubenswrapper[4672]: I0217 16:30:04.607659 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-mvmlp" event={"ID":"a7c33111-5ce4-4e4d-b36c-58896f808426","Type":"ContainerDied","Data":"fc20c7f93f2b8391fad982f39f77aa9e82135791554be250f3255e4e0f0a2bba"} Feb 17 16:30:04 crc kubenswrapper[4672]: I0217 16:30:04.607710 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc20c7f93f2b8391fad982f39f77aa9e82135791554be250f3255e4e0f0a2bba" Feb 17 16:30:04 crc kubenswrapper[4672]: I0217 16:30:04.607727 4672 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-mvmlp" Feb 17 16:30:05 crc kubenswrapper[4672]: E0217 16:30:05.947147 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:30:06 crc kubenswrapper[4672]: I0217 16:30:06.946905 4672 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:30:07 crc kubenswrapper[4672]: E0217 16:30:07.083944 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:30:07 crc kubenswrapper[4672]: E0217 16:30:07.084009 4672 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested"
Feb 17 16:30:07 crc kubenswrapper[4672]: E0217 16:30:07.084153 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nq9ps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-qrhj8_openstack(dc5471f5-2491-4841-be45-09c8f14b35c0): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 17 16:30:07 crc kubenswrapper[4672]: E0217 16:30:07.085353 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:30:07 crc kubenswrapper[4672]: I0217 16:30:07.944914 4672 scope.go:117] "RemoveContainer" containerID="788d6fae0de977927563b863088aef42316f3581ec13b8d2264de7cde8aac261"
Feb 17 16:30:07 crc kubenswrapper[4672]: E0217 16:30:07.946834 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 16:30:15 crc kubenswrapper[4672]: I0217 16:30:15.229189 4672 scope.go:117] "RemoveContainer" containerID="84da35121780a7f67dc2d7e9383f9689bebfd9af7cb3001dba721f23323ad680"
Feb 17 16:30:15 crc kubenswrapper[4672]: I0217 16:30:15.279624 4672 scope.go:117] "RemoveContainer" containerID="5de0f4fbb7d27105885f6d589fd071b134695174d91ec2848d522fa7b7395b1c"
Feb 17 16:30:15 crc kubenswrapper[4672]: I0217 16:30:15.326671 4672 scope.go:117] "RemoveContainer" containerID="b4eeadfb9ece5de10f49b2da19997621f6375e0b1e4f58923e6410083e99843a"
Feb 17 16:30:17 crc kubenswrapper[4672]: E0217 16:30:17.946432 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:30:22 crc kubenswrapper[4672]: I0217 16:30:22.945560 4672 scope.go:117] "RemoveContainer" containerID="788d6fae0de977927563b863088aef42316f3581ec13b8d2264de7cde8aac261"
Feb 17 16:30:22 crc kubenswrapper[4672]: E0217 16:30:22.946565 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 16:30:22 crc kubenswrapper[4672]: E0217 16:30:22.948245 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:30:29 crc kubenswrapper[4672]: E0217 16:30:29.055722 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 17 16:30:29 crc kubenswrapper[4672]: E0217 16:30:29.056307 4672 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 17 16:30:29 crc kubenswrapper[4672]: E0217 16:30:29.056482 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66h7h644h64ch5f8h565hfch5dh56chfdh8hfdh5b5h567h6dh665h557h74h665hcbh96h659h554h589h57fh5d9h55h564hcfh5dhffhfdq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tx4bs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(9e58ce9b-ddd5-42bb-8e07-08a22c8871a5): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 17 16:30:29 crc kubenswrapper[4672]: E0217 16:30:29.057887 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:30:33 crc kubenswrapper[4672]: E0217 16:30:33.948593 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:30:34 crc kubenswrapper[4672]: I0217 16:30:34.945387 4672 scope.go:117] "RemoveContainer" containerID="788d6fae0de977927563b863088aef42316f3581ec13b8d2264de7cde8aac261"
Feb 17 16:30:34 crc kubenswrapper[4672]: E0217 16:30:34.945984 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 16:30:42 crc kubenswrapper[4672]: E0217 16:30:42.947892 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:30:45 crc kubenswrapper[4672]: I0217 16:30:45.946679 4672 scope.go:117] "RemoveContainer" containerID="788d6fae0de977927563b863088aef42316f3581ec13b8d2264de7cde8aac261"
Feb 17 16:30:45 crc kubenswrapper[4672]: E0217 16:30:45.948638 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 16:30:45 crc kubenswrapper[4672]: E0217 16:30:45.951494 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:30:56 crc kubenswrapper[4672]: I0217 16:30:56.945102 4672 scope.go:117] "RemoveContainer" containerID="788d6fae0de977927563b863088aef42316f3581ec13b8d2264de7cde8aac261"
Feb 17 16:30:56 crc kubenswrapper[4672]: E0217 16:30:56.946246 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 16:30:56 crc kubenswrapper[4672]: E0217 16:30:56.948813 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:30:56 crc kubenswrapper[4672]: E0217 16:30:56.948921 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:31:07 crc kubenswrapper[4672]: E0217 16:31:07.950124 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:31:10 crc kubenswrapper[4672]: I0217 16:31:10.408420 4672 scope.go:117] "RemoveContainer" containerID="788d6fae0de977927563b863088aef42316f3581ec13b8d2264de7cde8aac261"
Feb 17 16:31:10 crc kubenswrapper[4672]: E0217 16:31:10.411449 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 16:31:10 crc kubenswrapper[4672]: E0217 16:31:10.432417 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:31:19 crc kubenswrapper[4672]: E0217 16:31:19.947962 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:31:22 crc kubenswrapper[4672]: I0217 16:31:22.946385 4672 scope.go:117] "RemoveContainer" containerID="788d6fae0de977927563b863088aef42316f3581ec13b8d2264de7cde8aac261"
Feb 17 16:31:22 crc kubenswrapper[4672]: E0217 16:31:22.947115 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 16:31:25 crc kubenswrapper[4672]: E0217 16:31:25.947910 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:31:31 crc kubenswrapper[4672]: E0217 16:31:31.954705 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:31:35 crc kubenswrapper[4672]: I0217 16:31:35.945242 4672 scope.go:117] "RemoveContainer" containerID="788d6fae0de977927563b863088aef42316f3581ec13b8d2264de7cde8aac261"
Feb 17 16:31:35 crc kubenswrapper[4672]: E0217 16:31:35.946294 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 16:31:36 crc kubenswrapper[4672]: E0217 16:31:36.947719 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:31:45 crc kubenswrapper[4672]: E0217 16:31:45.947161 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:31:48 crc kubenswrapper[4672]: I0217 16:31:48.945413 4672 scope.go:117] "RemoveContainer" containerID="788d6fae0de977927563b863088aef42316f3581ec13b8d2264de7cde8aac261"
Feb 17 16:31:48 crc kubenswrapper[4672]: E0217 16:31:48.946339 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 16:31:49 crc kubenswrapper[4672]: E0217 16:31:49.948061 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:32:00 crc kubenswrapper[4672]: I0217 16:32:00.945639 4672 scope.go:117] "RemoveContainer" containerID="788d6fae0de977927563b863088aef42316f3581ec13b8d2264de7cde8aac261"
Feb 17 16:32:00 crc kubenswrapper[4672]: E0217 16:32:00.946607 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 16:32:00 crc kubenswrapper[4672]: E0217 16:32:00.946909 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:32:01 crc kubenswrapper[4672]: E0217 16:32:01.967073 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:32:11 crc kubenswrapper[4672]: I0217 16:32:11.959701 4672 scope.go:117] "RemoveContainer" containerID="788d6fae0de977927563b863088aef42316f3581ec13b8d2264de7cde8aac261"
Feb 17 16:32:11 crc kubenswrapper[4672]: E0217 16:32:11.960734 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 16:32:11 crc kubenswrapper[4672]: E0217 16:32:11.961265 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:32:15 crc kubenswrapper[4672]: I0217 16:32:15.502040 4672 scope.go:117] "RemoveContainer" containerID="9f361a03c53d9f50073ff05525e6ea98f10745f0a26ea97652a9f9aee183f86a"
Feb 17 16:32:15 crc kubenswrapper[4672]: I0217 16:32:15.542233 4672 scope.go:117] "RemoveContainer" containerID="f2b2451a4a4376e9dd0ad26d598659806b0503b00f24bbb839ca9bede738e149"
Feb 17 16:32:16 crc kubenswrapper[4672]: E0217 16:32:16.947962 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:32:19 crc kubenswrapper[4672]: I0217 16:32:19.196810 4672 generic.go:334] "Generic (PLEG): container finished" podID="fb402cd6-e885-4c1e-958a-cb731cdd4569" containerID="23cbfd877ddd5378a27c9570201fd876a7517b4c61c8359a9774f5ea7c7d70eb" exitCode=0
Feb 17 16:32:19 crc kubenswrapper[4672]: I0217 16:32:19.197104 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tnx6" event={"ID":"fb402cd6-e885-4c1e-958a-cb731cdd4569","Type":"ContainerDied","Data":"23cbfd877ddd5378a27c9570201fd876a7517b4c61c8359a9774f5ea7c7d70eb"}
Feb 17 16:32:20 crc kubenswrapper[4672]: I0217 16:32:20.740277 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tnx6"
Feb 17 16:32:20 crc kubenswrapper[4672]: I0217 16:32:20.869429 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwbmf\" (UniqueName: \"kubernetes.io/projected/fb402cd6-e885-4c1e-958a-cb731cdd4569-kube-api-access-cwbmf\") pod \"fb402cd6-e885-4c1e-958a-cb731cdd4569\" (UID: \"fb402cd6-e885-4c1e-958a-cb731cdd4569\") "
Feb 17 16:32:20 crc kubenswrapper[4672]: I0217 16:32:20.869803 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fb402cd6-e885-4c1e-958a-cb731cdd4569-ssh-key-openstack-edpm-ipam\") pod \"fb402cd6-e885-4c1e-958a-cb731cdd4569\" (UID: \"fb402cd6-e885-4c1e-958a-cb731cdd4569\") "
Feb 17 16:32:20 crc kubenswrapper[4672]: I0217 16:32:20.869847 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb402cd6-e885-4c1e-958a-cb731cdd4569-inventory\") pod \"fb402cd6-e885-4c1e-958a-cb731cdd4569\" (UID: \"fb402cd6-e885-4c1e-958a-cb731cdd4569\") "
Feb 17 16:32:20 crc kubenswrapper[4672]: I0217 16:32:20.869872 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb402cd6-e885-4c1e-958a-cb731cdd4569-bootstrap-combined-ca-bundle\") pod \"fb402cd6-e885-4c1e-958a-cb731cdd4569\" (UID: \"fb402cd6-e885-4c1e-958a-cb731cdd4569\") "
Feb 17 16:32:20 crc kubenswrapper[4672]: I0217 16:32:20.874862 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb402cd6-e885-4c1e-958a-cb731cdd4569-kube-api-access-cwbmf" (OuterVolumeSpecName: "kube-api-access-cwbmf") pod "fb402cd6-e885-4c1e-958a-cb731cdd4569" (UID: "fb402cd6-e885-4c1e-958a-cb731cdd4569"). InnerVolumeSpecName "kube-api-access-cwbmf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:32:20 crc kubenswrapper[4672]: I0217 16:32:20.875429 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb402cd6-e885-4c1e-958a-cb731cdd4569-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "fb402cd6-e885-4c1e-958a-cb731cdd4569" (UID: "fb402cd6-e885-4c1e-958a-cb731cdd4569"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:32:20 crc kubenswrapper[4672]: I0217 16:32:20.898475 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb402cd6-e885-4c1e-958a-cb731cdd4569-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fb402cd6-e885-4c1e-958a-cb731cdd4569" (UID: "fb402cd6-e885-4c1e-958a-cb731cdd4569"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:32:20 crc kubenswrapper[4672]: I0217 16:32:20.902012 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb402cd6-e885-4c1e-958a-cb731cdd4569-inventory" (OuterVolumeSpecName: "inventory") pod "fb402cd6-e885-4c1e-958a-cb731cdd4569" (UID: "fb402cd6-e885-4c1e-958a-cb731cdd4569"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:32:20 crc kubenswrapper[4672]: I0217 16:32:20.972778 4672 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fb402cd6-e885-4c1e-958a-cb731cdd4569-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 17 16:32:20 crc kubenswrapper[4672]: I0217 16:32:20.972944 4672 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb402cd6-e885-4c1e-958a-cb731cdd4569-inventory\") on node \"crc\" DevicePath \"\""
Feb 17 16:32:20 crc kubenswrapper[4672]: I0217 16:32:20.972962 4672 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb402cd6-e885-4c1e-958a-cb731cdd4569-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:32:20 crc kubenswrapper[4672]: I0217 16:32:20.972974 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwbmf\" (UniqueName: \"kubernetes.io/projected/fb402cd6-e885-4c1e-958a-cb731cdd4569-kube-api-access-cwbmf\") on node \"crc\" DevicePath \"\""
Feb 17 16:32:21 crc kubenswrapper[4672]: I0217 16:32:21.234916 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tnx6" event={"ID":"fb402cd6-e885-4c1e-958a-cb731cdd4569","Type":"ContainerDied","Data":"e10aff40213c2db79558d261786b31c08ea762b79b23480030d2cb307cd8230f"}
Feb 17 16:32:21 crc kubenswrapper[4672]: I0217 16:32:21.234973 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e10aff40213c2db79558d261786b31c08ea762b79b23480030d2cb307cd8230f"
Feb 17 16:32:21 crc kubenswrapper[4672]: I0217 16:32:21.235053 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8tnx6"
Feb 17 16:32:21 crc kubenswrapper[4672]: I0217 16:32:21.334480 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nhgw2"]
Feb 17 16:32:21 crc kubenswrapper[4672]: E0217 16:32:21.335065 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb402cd6-e885-4c1e-958a-cb731cdd4569" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 17 16:32:21 crc kubenswrapper[4672]: I0217 16:32:21.335093 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb402cd6-e885-4c1e-958a-cb731cdd4569" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 17 16:32:21 crc kubenswrapper[4672]: E0217 16:32:21.335120 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7c33111-5ce4-4e4d-b36c-58896f808426" containerName="collect-profiles"
Feb 17 16:32:21 crc kubenswrapper[4672]: I0217 16:32:21.335131 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7c33111-5ce4-4e4d-b36c-58896f808426" containerName="collect-profiles"
Feb 17 16:32:21 crc kubenswrapper[4672]: I0217 16:32:21.335404 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7c33111-5ce4-4e4d-b36c-58896f808426" containerName="collect-profiles"
Feb 17 16:32:21 crc kubenswrapper[4672]: I0217 16:32:21.335430 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb402cd6-e885-4c1e-958a-cb731cdd4569" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 17 16:32:21 crc kubenswrapper[4672]: I0217 16:32:21.336476 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nhgw2"
Feb 17 16:32:21 crc kubenswrapper[4672]: I0217 16:32:21.338640 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 17 16:32:21 crc kubenswrapper[4672]: I0217 16:32:21.339175 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 17 16:32:21 crc kubenswrapper[4672]: I0217 16:32:21.339366 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 17 16:32:21 crc kubenswrapper[4672]: I0217 16:32:21.339559 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z6sng"
Feb 17 16:32:21 crc kubenswrapper[4672]: I0217 16:32:21.347740 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nhgw2"]
Feb 17 16:32:21 crc kubenswrapper[4672]: I0217 16:32:21.483845 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/945f70cb-9394-43c9-b44c-c6ef7d021f78-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nhgw2\" (UID: \"945f70cb-9394-43c9-b44c-c6ef7d021f78\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nhgw2"
Feb 17 16:32:21 crc kubenswrapper[4672]: I0217 16:32:21.483917 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/945f70cb-9394-43c9-b44c-c6ef7d021f78-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nhgw2\" (UID: \"945f70cb-9394-43c9-b44c-c6ef7d021f78\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nhgw2"
Feb 17 16:32:21 crc kubenswrapper[4672]: I0217 16:32:21.483972 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wtgn\" (UniqueName: \"kubernetes.io/projected/945f70cb-9394-43c9-b44c-c6ef7d021f78-kube-api-access-9wtgn\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nhgw2\" (UID: \"945f70cb-9394-43c9-b44c-c6ef7d021f78\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nhgw2"
Feb 17 16:32:21 crc kubenswrapper[4672]: I0217 16:32:21.586849 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/945f70cb-9394-43c9-b44c-c6ef7d021f78-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nhgw2\" (UID: \"945f70cb-9394-43c9-b44c-c6ef7d021f78\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nhgw2"
Feb 17 16:32:21 crc kubenswrapper[4672]: I0217 16:32:21.586971 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/945f70cb-9394-43c9-b44c-c6ef7d021f78-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nhgw2\" (UID: \"945f70cb-9394-43c9-b44c-c6ef7d021f78\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nhgw2"
Feb 17 16:32:21 crc kubenswrapper[4672]: I0217 16:32:21.587051 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wtgn\" (UniqueName: \"kubernetes.io/projected/945f70cb-9394-43c9-b44c-c6ef7d021f78-kube-api-access-9wtgn\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nhgw2\" (UID: \"945f70cb-9394-43c9-b44c-c6ef7d021f78\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nhgw2"
Feb 17 16:32:21 crc kubenswrapper[4672]: I0217 16:32:21.591475 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/945f70cb-9394-43c9-b44c-c6ef7d021f78-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nhgw2\" (UID: \"945f70cb-9394-43c9-b44c-c6ef7d021f78\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nhgw2"
Feb 17 16:32:21 crc kubenswrapper[4672]: I0217 16:32:21.592370 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/945f70cb-9394-43c9-b44c-c6ef7d021f78-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nhgw2\" (UID: \"945f70cb-9394-43c9-b44c-c6ef7d021f78\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nhgw2"
Feb 17 16:32:21 crc kubenswrapper[4672]: I0217 16:32:21.612268 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wtgn\" (UniqueName: \"kubernetes.io/projected/945f70cb-9394-43c9-b44c-c6ef7d021f78-kube-api-access-9wtgn\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nhgw2\" (UID: \"945f70cb-9394-43c9-b44c-c6ef7d021f78\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nhgw2"
Feb 17 16:32:21 crc kubenswrapper[4672]: I0217 16:32:21.669993 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nhgw2"
Feb 17 16:32:22 crc kubenswrapper[4672]: I0217 16:32:22.203044 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nhgw2"]
Feb 17 16:32:22 crc kubenswrapper[4672]: I0217 16:32:22.246650 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nhgw2" event={"ID":"945f70cb-9394-43c9-b44c-c6ef7d021f78","Type":"ContainerStarted","Data":"28b0c2116c9048d5da069e8949a5fd6d345fa66e8cd2749952060405fd03ba7e"}
Feb 17 16:32:24 crc kubenswrapper[4672]: I0217 16:32:24.282038 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nhgw2" event={"ID":"945f70cb-9394-43c9-b44c-c6ef7d021f78","Type":"ContainerStarted","Data":"afcadeadafefb4a59bc9a152fcc109de13de1238df4c19268fa5eda8a80f60e7"}
Feb 17 16:32:24 crc kubenswrapper[4672]: I0217 16:32:24.313122 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nhgw2" podStartSLOduration=1.910313803 podStartE2EDuration="3.313100351s" podCreationTimestamp="2026-02-17 16:32:21 +0000 UTC" firstStartedPulling="2026-02-17 16:32:22.209299675 +0000 UTC m=+1750.963388417" lastFinishedPulling="2026-02-17 16:32:23.612086233 +0000 UTC m=+1752.366174965" observedRunningTime="2026-02-17 16:32:24.297369587 +0000 UTC m=+1753.051458319" watchObservedRunningTime="2026-02-17 16:32:24.313100351 +0000 UTC m=+1753.067189103"
Feb 17 16:32:24 crc kubenswrapper[4672]: I0217 16:32:24.945650 4672 scope.go:117] "RemoveContainer" containerID="788d6fae0de977927563b863088aef42316f3581ec13b8d2264de7cde8aac261"
Feb 17 16:32:24 crc kubenswrapper[4672]: E0217 16:32:24.946124 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:32:26 crc kubenswrapper[4672]: E0217 16:32:26.948721 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:32:28 crc kubenswrapper[4672]: E0217 16:32:28.947039 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:32:34 crc kubenswrapper[4672]: I0217 16:32:34.053161 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-1287-account-create-update-x77sb"] Feb 17 16:32:34 crc kubenswrapper[4672]: I0217 16:32:34.067253 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-1287-account-create-update-x77sb"] Feb 17 16:32:35 crc kubenswrapper[4672]: I0217 16:32:35.051396 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-slqs6"] Feb 17 16:32:35 crc kubenswrapper[4672]: I0217 16:32:35.064819 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-688a-account-create-update-x7qhx"] Feb 17 16:32:35 crc kubenswrapper[4672]: I0217 16:32:35.074086 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-slqs6"] Feb 17 16:32:35 crc 
kubenswrapper[4672]: I0217 16:32:35.083746 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-688a-account-create-update-x7qhx"] Feb 17 16:32:35 crc kubenswrapper[4672]: I0217 16:32:35.963766 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52fb83c9-1cc2-42fe-85d3-2cb95ec4d41f" path="/var/lib/kubelet/pods/52fb83c9-1cc2-42fe-85d3-2cb95ec4d41f/volumes" Feb 17 16:32:35 crc kubenswrapper[4672]: I0217 16:32:35.965277 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f74d64d-8fed-4be4-8d98-174760a351c0" path="/var/lib/kubelet/pods/6f74d64d-8fed-4be4-8d98-174760a351c0/volumes" Feb 17 16:32:35 crc kubenswrapper[4672]: I0217 16:32:35.966447 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77264531-415d-45b2-8009-2f1106313532" path="/var/lib/kubelet/pods/77264531-415d-45b2-8009-2f1106313532/volumes" Feb 17 16:32:36 crc kubenswrapper[4672]: I0217 16:32:36.035314 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-tkd5l"] Feb 17 16:32:36 crc kubenswrapper[4672]: I0217 16:32:36.044106 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-ecb0-account-create-update-tcklb"] Feb 17 16:32:36 crc kubenswrapper[4672]: I0217 16:32:36.052692 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-vcfsn"] Feb 17 16:32:36 crc kubenswrapper[4672]: I0217 16:32:36.063070 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-ecb0-account-create-update-tcklb"] Feb 17 16:32:36 crc kubenswrapper[4672]: I0217 16:32:36.072404 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-tkd5l"] Feb 17 16:32:36 crc kubenswrapper[4672]: I0217 16:32:36.085722 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-vcfsn"] Feb 17 16:32:37 crc kubenswrapper[4672]: I0217 16:32:37.959646 4672 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="319add3d-105e-415b-88e8-b42b594b72da" path="/var/lib/kubelet/pods/319add3d-105e-415b-88e8-b42b594b72da/volumes" Feb 17 16:32:37 crc kubenswrapper[4672]: I0217 16:32:37.962337 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66b89c41-8aa4-4151-81ef-39e5c8d17d32" path="/var/lib/kubelet/pods/66b89c41-8aa4-4151-81ef-39e5c8d17d32/volumes" Feb 17 16:32:37 crc kubenswrapper[4672]: I0217 16:32:37.963731 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69254987-f071-4542-8b6f-dca3a9333b96" path="/var/lib/kubelet/pods/69254987-f071-4542-8b6f-dca3a9333b96/volumes" Feb 17 16:32:38 crc kubenswrapper[4672]: I0217 16:32:38.945343 4672 scope.go:117] "RemoveContainer" containerID="788d6fae0de977927563b863088aef42316f3581ec13b8d2264de7cde8aac261" Feb 17 16:32:38 crc kubenswrapper[4672]: E0217 16:32:38.946021 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:32:40 crc kubenswrapper[4672]: E0217 16:32:40.947442 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:32:40 crc kubenswrapper[4672]: E0217 16:32:40.948390 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:32:49 crc kubenswrapper[4672]: I0217 16:32:49.945812 4672 scope.go:117] "RemoveContainer" containerID="788d6fae0de977927563b863088aef42316f3581ec13b8d2264de7cde8aac261" Feb 17 16:32:49 crc kubenswrapper[4672]: E0217 16:32:49.946816 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:32:52 crc kubenswrapper[4672]: E0217 16:32:52.074406 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:32:52 crc kubenswrapper[4672]: E0217 16:32:52.074464 4672 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:32:52 crc kubenswrapper[4672]: E0217 16:32:52.074614 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nq9ps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:n
il,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-qrhj8_openstack(dc5471f5-2491-4841-be45-09c8f14b35c0): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:32:52 crc kubenswrapper[4672]: E0217 16:32:52.075775 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:32:53 crc kubenswrapper[4672]: E0217 16:32:53.947866 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:33:02 crc kubenswrapper[4672]: I0217 16:33:02.944683 4672 scope.go:117] "RemoveContainer" containerID="788d6fae0de977927563b863088aef42316f3581ec13b8d2264de7cde8aac261" Feb 17 16:33:02 crc kubenswrapper[4672]: E0217 16:33:02.945681 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:33:02 crc kubenswrapper[4672]: E0217 16:33:02.946505 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:33:04 crc kubenswrapper[4672]: E0217 16:33:04.949708 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:33:05 crc kubenswrapper[4672]: I0217 16:33:05.043362 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-1ccd-account-create-update-s59r2"] Feb 17 16:33:05 crc kubenswrapper[4672]: I0217 16:33:05.055810 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-8bfa-account-create-update-v499m"] Feb 17 16:33:05 crc kubenswrapper[4672]: I0217 16:33:05.066854 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-xv2td"] Feb 17 16:33:05 crc kubenswrapper[4672]: I0217 16:33:05.079789 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-670a-account-create-update-7268z"] Feb 17 16:33:05 crc kubenswrapper[4672]: I0217 16:33:05.106990 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-dgcpz"] Feb 17 16:33:05 crc kubenswrapper[4672]: I0217 16:33:05.117115 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-8bfa-account-create-update-v499m"] Feb 17 16:33:05 crc kubenswrapper[4672]: I0217 16:33:05.126702 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-create-rxdz9"] Feb 17 16:33:05 crc kubenswrapper[4672]: I0217 16:33:05.135806 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-1ccd-account-create-update-s59r2"] Feb 17 16:33:05 crc kubenswrapper[4672]: I0217 16:33:05.146183 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-bjfpp"] Feb 17 16:33:05 crc kubenswrapper[4672]: I0217 16:33:05.161654 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-ba30-account-create-update-mkpmg"] Feb 17 16:33:05 crc kubenswrapper[4672]: I0217 16:33:05.174938 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-dgcpz"] Feb 17 16:33:05 crc kubenswrapper[4672]: I0217 16:33:05.191240 4672 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/neutron-670a-account-create-update-7268z"] Feb 17 16:33:05 crc kubenswrapper[4672]: I0217 16:33:05.202722 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-xv2td"] Feb 17 16:33:05 crc kubenswrapper[4672]: I0217 16:33:05.214809 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-ba30-account-create-update-mkpmg"] Feb 17 16:33:05 crc kubenswrapper[4672]: I0217 16:33:05.226703 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-create-rxdz9"] Feb 17 16:33:05 crc kubenswrapper[4672]: I0217 16:33:05.243615 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-bjfpp"] Feb 17 16:33:05 crc kubenswrapper[4672]: I0217 16:33:05.247400 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-hrfjv"] Feb 17 16:33:05 crc kubenswrapper[4672]: I0217 16:33:05.261870 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-hrfjv"] Feb 17 16:33:05 crc kubenswrapper[4672]: I0217 16:33:05.968533 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06f2f111-8f19-433e-bb63-b57167c82e19" path="/var/lib/kubelet/pods/06f2f111-8f19-433e-bb63-b57167c82e19/volumes" Feb 17 16:33:05 crc kubenswrapper[4672]: I0217 16:33:05.969817 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23ac1021-6630-413d-8f59-2ee8de8b22f6" path="/var/lib/kubelet/pods/23ac1021-6630-413d-8f59-2ee8de8b22f6/volumes" Feb 17 16:33:05 crc kubenswrapper[4672]: I0217 16:33:05.971795 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7508897a-e56b-444c-87c2-9d1cbc41170f" path="/var/lib/kubelet/pods/7508897a-e56b-444c-87c2-9d1cbc41170f/volumes" Feb 17 16:33:05 crc kubenswrapper[4672]: I0217 16:33:05.972701 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76e79bb4-99d3-4b9b-b496-f50ec996f5d4" 
path="/var/lib/kubelet/pods/76e79bb4-99d3-4b9b-b496-f50ec996f5d4/volumes" Feb 17 16:33:05 crc kubenswrapper[4672]: I0217 16:33:05.974165 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aad1e993-49a2-4984-9bf0-11c2a4190fd3" path="/var/lib/kubelet/pods/aad1e993-49a2-4984-9bf0-11c2a4190fd3/volumes" Feb 17 16:33:05 crc kubenswrapper[4672]: I0217 16:33:05.975215 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaf29db5-1c89-453b-8632-65a429e68374" path="/var/lib/kubelet/pods/aaf29db5-1c89-453b-8632-65a429e68374/volumes" Feb 17 16:33:05 crc kubenswrapper[4672]: I0217 16:33:05.976137 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab366b61-1428-4608-8ac3-2bb8063e88f2" path="/var/lib/kubelet/pods/ab366b61-1428-4608-8ac3-2bb8063e88f2/volumes" Feb 17 16:33:05 crc kubenswrapper[4672]: I0217 16:33:05.977598 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfcd0d26-153d-463d-b38b-35b9fdbe6a53" path="/var/lib/kubelet/pods/dfcd0d26-153d-463d-b38b-35b9fdbe6a53/volumes" Feb 17 16:33:05 crc kubenswrapper[4672]: I0217 16:33:05.978442 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb24c242-f54a-40ba-8c88-f3dbed463abd" path="/var/lib/kubelet/pods/fb24c242-f54a-40ba-8c88-f3dbed463abd/volumes" Feb 17 16:33:07 crc kubenswrapper[4672]: I0217 16:33:07.039780 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-p284t"] Feb 17 16:33:07 crc kubenswrapper[4672]: I0217 16:33:07.061648 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-p284t"] Feb 17 16:33:07 crc kubenswrapper[4672]: I0217 16:33:07.959092 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53f2f5a7-17a6-4145-8b1b-f15d7a5309ac" path="/var/lib/kubelet/pods/53f2f5a7-17a6-4145-8b1b-f15d7a5309ac/volumes" Feb 17 16:33:09 crc kubenswrapper[4672]: I0217 16:33:09.031966 4672 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/keystone-db-sync-7fmd6"] Feb 17 16:33:09 crc kubenswrapper[4672]: I0217 16:33:09.045366 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-7fmd6"] Feb 17 16:33:09 crc kubenswrapper[4672]: I0217 16:33:09.958669 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a856bff-885d-46ef-8ce3-300c89cfae1f" path="/var/lib/kubelet/pods/6a856bff-885d-46ef-8ce3-300c89cfae1f/volumes" Feb 17 16:33:13 crc kubenswrapper[4672]: E0217 16:33:13.949166 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:33:15 crc kubenswrapper[4672]: I0217 16:33:15.621420 4672 scope.go:117] "RemoveContainer" containerID="76e9bd1049165f6bd7eb93befcb324ba1380c8a63d6328b44eecf19d9624fe89" Feb 17 16:33:15 crc kubenswrapper[4672]: I0217 16:33:15.664322 4672 scope.go:117] "RemoveContainer" containerID="7dee2536b419de0589ef67fccc8660203bca4f9361bd99d65d873831aa236b7c" Feb 17 16:33:15 crc kubenswrapper[4672]: I0217 16:33:15.728083 4672 scope.go:117] "RemoveContainer" containerID="6829a2817ffaca776f8c96dbc0fbf6f639a72fe83e59f4e17909f920b30ced24" Feb 17 16:33:15 crc kubenswrapper[4672]: I0217 16:33:15.782561 4672 scope.go:117] "RemoveContainer" containerID="5ff3e15f00da0b56a090b787d113eb457f30c50193e1cb0c76e09b12bba8f327" Feb 17 16:33:15 crc kubenswrapper[4672]: I0217 16:33:15.850288 4672 scope.go:117] "RemoveContainer" containerID="9ab2ef9cd2506f35d0d68c30fb6de002767aae3e73bd37b4b1784ed72a3083d5" Feb 17 16:33:15 crc kubenswrapper[4672]: I0217 16:33:15.900057 4672 scope.go:117] "RemoveContainer" containerID="27c1b61f7898fee06e951aa7fa606d09a4b72ab736d84e8ea4ecd6751612a292" Feb 17 16:33:15 crc kubenswrapper[4672]: I0217 
16:33:15.933738 4672 scope.go:117] "RemoveContainer" containerID="360da191df09ca4264321fc7f5a648d7f7469215a92141e9fa0a06e4322f7eaf" Feb 17 16:33:15 crc kubenswrapper[4672]: I0217 16:33:15.975571 4672 scope.go:117] "RemoveContainer" containerID="593b1f1b31e25ce3dc314d8e597d0e806bd6c00dfdb06dd511cff7c5d02c8c9c" Feb 17 16:33:16 crc kubenswrapper[4672]: I0217 16:33:16.014244 4672 scope.go:117] "RemoveContainer" containerID="4396e8cb80ac538b2ebb15ebf5dcbf3dd7d714fdbfdb5bb35a0ed0b116530cd2" Feb 17 16:33:16 crc kubenswrapper[4672]: I0217 16:33:16.048155 4672 scope.go:117] "RemoveContainer" containerID="65eeb9157bdb3ade0b0de2e7089ead67024776eae335626b4d1b2730358502b3" Feb 17 16:33:16 crc kubenswrapper[4672]: I0217 16:33:16.083836 4672 scope.go:117] "RemoveContainer" containerID="f896f1fefbeeba338e7b53e25c15c071a98a9186350929435ebe360430f88c52" Feb 17 16:33:16 crc kubenswrapper[4672]: I0217 16:33:16.114349 4672 scope.go:117] "RemoveContainer" containerID="a6b68fbc0dea0631fcd85b9189adbf348448827e893476ddaca4d4a50c7fe42a" Feb 17 16:33:16 crc kubenswrapper[4672]: I0217 16:33:16.152569 4672 scope.go:117] "RemoveContainer" containerID="137f065b32b3c823f9f7f3fbe8b833f28a0d48e3bcaa3f659271b98e9cc20b80" Feb 17 16:33:16 crc kubenswrapper[4672]: I0217 16:33:16.185927 4672 scope.go:117] "RemoveContainer" containerID="25ea71516ba9b87acf013fa346695c5e8bb69dd55df4cbc6f877c9ebcd43a9ba" Feb 17 16:33:16 crc kubenswrapper[4672]: I0217 16:33:16.255437 4672 scope.go:117] "RemoveContainer" containerID="86e446ebcf7786cb2860bea5fd7dea3f3b795218dcc2980195f54b2e5d27a3b2" Feb 17 16:33:16 crc kubenswrapper[4672]: I0217 16:33:16.344555 4672 scope.go:117] "RemoveContainer" containerID="539ecff9b25c888a8a0c59ec69ed05b6535c9bfc9f6358a2194a7eed93b07f25" Feb 17 16:33:16 crc kubenswrapper[4672]: I0217 16:33:16.366601 4672 scope.go:117] "RemoveContainer" containerID="c099632ec5c7fc7fc3d1d8ca66146aac9abea92c9d5ece61e05dbc7fae093aca" Feb 17 16:33:16 crc kubenswrapper[4672]: I0217 16:33:16.945541 4672 
scope.go:117] "RemoveContainer" containerID="788d6fae0de977927563b863088aef42316f3581ec13b8d2264de7cde8aac261" Feb 17 16:33:16 crc kubenswrapper[4672]: E0217 16:33:16.945946 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:33:18 crc kubenswrapper[4672]: E0217 16:33:18.075139 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:33:18 crc kubenswrapper[4672]: E0217 16:33:18.075453 4672 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:33:18 crc kubenswrapper[4672]: E0217 16:33:18.075698 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66h7h644h64ch5f8h565hfch5dh56chfdh8hfdh5b5h567h6dh665h557h74h665hcbh96h659h554h589h57fh5d9h55h564hcfh5dhffhfdq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tx4bs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(9e58ce9b-ddd5-42bb-8e07-08a22c8871a5): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:33:18 crc kubenswrapper[4672]: E0217 16:33:18.077496 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:33:28 crc kubenswrapper[4672]: E0217 16:33:28.949496 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:33:28 crc kubenswrapper[4672]: E0217 16:33:28.950197 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:33:30 crc kubenswrapper[4672]: I0217 16:33:30.945615 4672 scope.go:117] "RemoveContainer" containerID="788d6fae0de977927563b863088aef42316f3581ec13b8d2264de7cde8aac261" Feb 17 16:33:30 crc kubenswrapper[4672]: E0217 16:33:30.945905 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:33:39 crc kubenswrapper[4672]: I0217 16:33:39.038212 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-sk2p2"] Feb 17 16:33:39 crc kubenswrapper[4672]: I0217 16:33:39.052636 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-sk2p2"] Feb 17 16:33:39 crc kubenswrapper[4672]: I0217 16:33:39.958208 4672 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af496dd6-1cd8-4f50-b4e0-b96466c6eac4" path="/var/lib/kubelet/pods/af496dd6-1cd8-4f50-b4e0-b96466c6eac4/volumes" Feb 17 16:33:40 crc kubenswrapper[4672]: E0217 16:33:40.948366 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:33:42 crc kubenswrapper[4672]: E0217 16:33:42.946071 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:33:43 crc kubenswrapper[4672]: I0217 16:33:43.945018 4672 scope.go:117] "RemoveContainer" containerID="788d6fae0de977927563b863088aef42316f3581ec13b8d2264de7cde8aac261" Feb 17 16:33:43 crc kubenswrapper[4672]: E0217 16:33:43.945349 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:33:53 crc kubenswrapper[4672]: E0217 16:33:53.948334 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" 
pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:33:56 crc kubenswrapper[4672]: I0217 16:33:56.075773 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-72t4g"] Feb 17 16:33:56 crc kubenswrapper[4672]: I0217 16:33:56.085101 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-w8fzc"] Feb 17 16:33:56 crc kubenswrapper[4672]: I0217 16:33:56.097224 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-72t4g"] Feb 17 16:33:56 crc kubenswrapper[4672]: I0217 16:33:56.105338 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-w8fzc"] Feb 17 16:33:56 crc kubenswrapper[4672]: I0217 16:33:56.944733 4672 scope.go:117] "RemoveContainer" containerID="788d6fae0de977927563b863088aef42316f3581ec13b8d2264de7cde8aac261" Feb 17 16:33:56 crc kubenswrapper[4672]: E0217 16:33:56.945495 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:33:57 crc kubenswrapper[4672]: E0217 16:33:57.948350 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:33:57 crc kubenswrapper[4672]: I0217 16:33:57.960961 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d098964-5b23-460e-bb88-42ed525b84ed" 
path="/var/lib/kubelet/pods/1d098964-5b23-460e-bb88-42ed525b84ed/volumes" Feb 17 16:33:57 crc kubenswrapper[4672]: I0217 16:33:57.961757 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="649147ca-1dbd-4260-8d7c-8077186059f1" path="/var/lib/kubelet/pods/649147ca-1dbd-4260-8d7c-8077186059f1/volumes" Feb 17 16:33:58 crc kubenswrapper[4672]: I0217 16:33:58.044271 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-5spsr"] Feb 17 16:33:58 crc kubenswrapper[4672]: I0217 16:33:58.064656 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-5spsr"] Feb 17 16:33:59 crc kubenswrapper[4672]: I0217 16:33:59.032452 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-4vtt8"] Feb 17 16:33:59 crc kubenswrapper[4672]: I0217 16:33:59.047723 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-4vtt8"] Feb 17 16:33:59 crc kubenswrapper[4672]: I0217 16:33:59.960347 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="352f61db-51f9-425a-9ee2-78f681033626" path="/var/lib/kubelet/pods/352f61db-51f9-425a-9ee2-78f681033626/volumes" Feb 17 16:33:59 crc kubenswrapper[4672]: I0217 16:33:59.961831 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a68f7c0-293c-434c-8e63-c6855ba4d822" path="/var/lib/kubelet/pods/4a68f7c0-293c-434c-8e63-c6855ba4d822/volumes" Feb 17 16:34:06 crc kubenswrapper[4672]: E0217 16:34:06.947618 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:34:08 crc kubenswrapper[4672]: I0217 16:34:08.945673 4672 scope.go:117] "RemoveContainer" 
containerID="788d6fae0de977927563b863088aef42316f3581ec13b8d2264de7cde8aac261" Feb 17 16:34:08 crc kubenswrapper[4672]: E0217 16:34:08.946298 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:34:10 crc kubenswrapper[4672]: E0217 16:34:10.116655 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:34:17 crc kubenswrapper[4672]: I0217 16:34:17.035239 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-storageinit-45n2r"] Feb 17 16:34:17 crc kubenswrapper[4672]: I0217 16:34:17.045319 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-storageinit-45n2r"] Feb 17 16:34:17 crc kubenswrapper[4672]: I0217 16:34:17.177467 4672 scope.go:117] "RemoveContainer" containerID="fc791be504d7919ae381e82cce9626958016ce9136a99d702f5f1a07fd157209" Feb 17 16:34:17 crc kubenswrapper[4672]: I0217 16:34:17.273745 4672 scope.go:117] "RemoveContainer" containerID="0a8ca4ed628b1d7249393b041a19d78fb3e129bea47af3a23f562581869139a0" Feb 17 16:34:17 crc kubenswrapper[4672]: I0217 16:34:17.315621 4672 scope.go:117] "RemoveContainer" containerID="30718458db68d0f429d661b7899a51b27814db48281672e25b55a8fceeeb4bc1" Feb 17 16:34:17 crc kubenswrapper[4672]: I0217 16:34:17.360918 4672 scope.go:117] "RemoveContainer" 
containerID="5e465adee39fc2fcc0779a67dab0cd184e31994bcad65a8ca61dfbe0edcf675c" Feb 17 16:34:17 crc kubenswrapper[4672]: I0217 16:34:17.417481 4672 scope.go:117] "RemoveContainer" containerID="440037cfb042404133cd6e1415ee9574a5f59fc811a735253ecc04485a5ea597" Feb 17 16:34:17 crc kubenswrapper[4672]: I0217 16:34:17.963179 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f4819e1-9f5d-4b90-9a97-97c8ac76cc77" path="/var/lib/kubelet/pods/9f4819e1-9f5d-4b90-9a97-97c8ac76cc77/volumes" Feb 17 16:34:19 crc kubenswrapper[4672]: I0217 16:34:19.946118 4672 scope.go:117] "RemoveContainer" containerID="788d6fae0de977927563b863088aef42316f3581ec13b8d2264de7cde8aac261" Feb 17 16:34:19 crc kubenswrapper[4672]: E0217 16:34:19.947257 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:34:19 crc kubenswrapper[4672]: E0217 16:34:19.948494 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:34:22 crc kubenswrapper[4672]: E0217 16:34:22.946800 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" 
Feb 17 16:34:32 crc kubenswrapper[4672]: I0217 16:34:32.945568 4672 scope.go:117] "RemoveContainer" containerID="788d6fae0de977927563b863088aef42316f3581ec13b8d2264de7cde8aac261" Feb 17 16:34:33 crc kubenswrapper[4672]: I0217 16:34:33.754101 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerStarted","Data":"158298a2a36ba607ce0910ea7f9e6b7d51481499aa0d19f04c8d953ba6d1effc"} Feb 17 16:34:33 crc kubenswrapper[4672]: E0217 16:34:33.948762 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:34:35 crc kubenswrapper[4672]: E0217 16:34:35.949564 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:34:45 crc kubenswrapper[4672]: E0217 16:34:45.948144 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:34:48 crc kubenswrapper[4672]: E0217 16:34:48.947280 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:34:56 crc kubenswrapper[4672]: E0217 16:34:56.947727 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:34:57 crc kubenswrapper[4672]: I0217 16:34:57.051017 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-vkcbs"] Feb 17 16:34:57 crc kubenswrapper[4672]: I0217 16:34:57.062984 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-vkcbs"] Feb 17 16:34:57 crc kubenswrapper[4672]: I0217 16:34:57.958536 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abb9a2a6-5ab3-4c3a-8d4e-523a92a3e6b9" path="/var/lib/kubelet/pods/abb9a2a6-5ab3-4c3a-8d4e-523a92a3e6b9/volumes" Feb 17 16:34:58 crc kubenswrapper[4672]: I0217 16:34:58.033870 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-6cbf-account-create-update-2hf29"] Feb 17 16:34:58 crc kubenswrapper[4672]: I0217 16:34:58.061904 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-6cbf-account-create-update-2hf29"] Feb 17 16:34:59 crc kubenswrapper[4672]: I0217 16:34:59.032899 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-84dtj"] Feb 17 16:34:59 crc kubenswrapper[4672]: I0217 16:34:59.045505 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-wtfr9"] Feb 17 16:34:59 crc kubenswrapper[4672]: I0217 16:34:59.060904 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-api-a99c-account-create-update-f7kt2"] Feb 17 16:34:59 crc kubenswrapper[4672]: I0217 16:34:59.069921 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-e3bf-account-create-update-5gj9s"] Feb 17 16:34:59 crc kubenswrapper[4672]: I0217 16:34:59.077881 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-84dtj"] Feb 17 16:34:59 crc kubenswrapper[4672]: I0217 16:34:59.086112 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-wtfr9"] Feb 17 16:34:59 crc kubenswrapper[4672]: I0217 16:34:59.094086 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-a99c-account-create-update-f7kt2"] Feb 17 16:34:59 crc kubenswrapper[4672]: I0217 16:34:59.101865 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-e3bf-account-create-update-5gj9s"] Feb 17 16:34:59 crc kubenswrapper[4672]: I0217 16:34:59.959364 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a5a3d1d-ff96-486d-bb60-b8c390c738e9" path="/var/lib/kubelet/pods/8a5a3d1d-ff96-486d-bb60-b8c390c738e9/volumes" Feb 17 16:34:59 crc kubenswrapper[4672]: I0217 16:34:59.960789 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90804f80-91b4-44bb-b3c1-04c56c687c65" path="/var/lib/kubelet/pods/90804f80-91b4-44bb-b3c1-04c56c687c65/volumes" Feb 17 16:34:59 crc kubenswrapper[4672]: I0217 16:34:59.961473 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2f5d855-0026-4e76-969c-87603f5fe608" path="/var/lib/kubelet/pods/a2f5d855-0026-4e76-969c-87603f5fe608/volumes" Feb 17 16:34:59 crc kubenswrapper[4672]: I0217 16:34:59.962296 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b947707f-716d-48ae-9151-a27658bc5a91" path="/var/lib/kubelet/pods/b947707f-716d-48ae-9151-a27658bc5a91/volumes" Feb 17 16:34:59 crc kubenswrapper[4672]: I0217 16:34:59.965527 4672 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d00fc6a7-229f-4a5f-af9a-8b39b110b5ad" path="/var/lib/kubelet/pods/d00fc6a7-229f-4a5f-af9a-8b39b110b5ad/volumes" Feb 17 16:35:01 crc kubenswrapper[4672]: E0217 16:35:01.973411 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:35:07 crc kubenswrapper[4672]: E0217 16:35:07.948410 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:35:15 crc kubenswrapper[4672]: E0217 16:35:15.947452 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:35:17 crc kubenswrapper[4672]: I0217 16:35:17.570805 4672 scope.go:117] "RemoveContainer" containerID="99152be0a5d138bdb9afcbc9ad4a3d6d4231280e5c4e3754de962fdecc48ecdd" Feb 17 16:35:17 crc kubenswrapper[4672]: I0217 16:35:17.604160 4672 scope.go:117] "RemoveContainer" containerID="e15cdb408b2113a0efca62534226d10d0f394eb65bf4d018046b16597a210adb" Feb 17 16:35:17 crc kubenswrapper[4672]: I0217 16:35:17.676978 4672 scope.go:117] "RemoveContainer" containerID="bcde18a66e4281c3d0ad1da47a163fd8192986163cb6c145ea94051ce5ce0488" Feb 17 16:35:17 crc kubenswrapper[4672]: I0217 
16:35:17.731071 4672 scope.go:117] "RemoveContainer" containerID="da2357b4e4ebc0b9c24c13cb1be6aa1344fddc17e78a794a9b103a054861fdc3" Feb 17 16:35:17 crc kubenswrapper[4672]: I0217 16:35:17.768565 4672 scope.go:117] "RemoveContainer" containerID="07b0589eea8d6b5abdee4373cb76c4f3f74040bbe13f2f2a89f34fb8e44bef77" Feb 17 16:35:17 crc kubenswrapper[4672]: I0217 16:35:17.820637 4672 scope.go:117] "RemoveContainer" containerID="fe2741f0ffcaa8b2beeaef5d27ce0186dfdbe811b0f5e893708f048dbc9d5d99" Feb 17 16:35:17 crc kubenswrapper[4672]: I0217 16:35:17.882209 4672 scope.go:117] "RemoveContainer" containerID="0ded762023c5a0f104b5b57988b861026e9ead73d89c2f2bb27a9658ed5f7c03" Feb 17 16:35:21 crc kubenswrapper[4672]: E0217 16:35:21.956479 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:35:27 crc kubenswrapper[4672]: I0217 16:35:27.035248 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-g7n24"] Feb 17 16:35:27 crc kubenswrapper[4672]: I0217 16:35:27.047087 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-g7n24"] Feb 17 16:35:27 crc kubenswrapper[4672]: I0217 16:35:27.955477 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="348b9f8c-3534-40ae-9a6d-989fd1db076d" path="/var/lib/kubelet/pods/348b9f8c-3534-40ae-9a6d-989fd1db076d/volumes" Feb 17 16:35:29 crc kubenswrapper[4672]: E0217 16:35:29.947654 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:35:35 crc kubenswrapper[4672]: E0217 16:35:35.949703 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:35:40 crc kubenswrapper[4672]: I0217 16:35:40.597937 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-slb5x"] Feb 17 16:35:40 crc kubenswrapper[4672]: I0217 16:35:40.600390 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-slb5x" Feb 17 16:35:40 crc kubenswrapper[4672]: I0217 16:35:40.615345 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-slb5x"] Feb 17 16:35:40 crc kubenswrapper[4672]: I0217 16:35:40.698918 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnqkv\" (UniqueName: \"kubernetes.io/projected/526befb5-6b6b-4fa1-a09b-634deaa93c1b-kube-api-access-nnqkv\") pod \"certified-operators-slb5x\" (UID: \"526befb5-6b6b-4fa1-a09b-634deaa93c1b\") " pod="openshift-marketplace/certified-operators-slb5x" Feb 17 16:35:40 crc kubenswrapper[4672]: I0217 16:35:40.699114 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/526befb5-6b6b-4fa1-a09b-634deaa93c1b-catalog-content\") pod \"certified-operators-slb5x\" (UID: \"526befb5-6b6b-4fa1-a09b-634deaa93c1b\") " pod="openshift-marketplace/certified-operators-slb5x" Feb 17 16:35:40 crc 
kubenswrapper[4672]: I0217 16:35:40.699145 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/526befb5-6b6b-4fa1-a09b-634deaa93c1b-utilities\") pod \"certified-operators-slb5x\" (UID: \"526befb5-6b6b-4fa1-a09b-634deaa93c1b\") " pod="openshift-marketplace/certified-operators-slb5x" Feb 17 16:35:40 crc kubenswrapper[4672]: I0217 16:35:40.801586 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnqkv\" (UniqueName: \"kubernetes.io/projected/526befb5-6b6b-4fa1-a09b-634deaa93c1b-kube-api-access-nnqkv\") pod \"certified-operators-slb5x\" (UID: \"526befb5-6b6b-4fa1-a09b-634deaa93c1b\") " pod="openshift-marketplace/certified-operators-slb5x" Feb 17 16:35:40 crc kubenswrapper[4672]: I0217 16:35:40.801753 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/526befb5-6b6b-4fa1-a09b-634deaa93c1b-catalog-content\") pod \"certified-operators-slb5x\" (UID: \"526befb5-6b6b-4fa1-a09b-634deaa93c1b\") " pod="openshift-marketplace/certified-operators-slb5x" Feb 17 16:35:40 crc kubenswrapper[4672]: I0217 16:35:40.801780 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/526befb5-6b6b-4fa1-a09b-634deaa93c1b-utilities\") pod \"certified-operators-slb5x\" (UID: \"526befb5-6b6b-4fa1-a09b-634deaa93c1b\") " pod="openshift-marketplace/certified-operators-slb5x" Feb 17 16:35:40 crc kubenswrapper[4672]: I0217 16:35:40.802322 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/526befb5-6b6b-4fa1-a09b-634deaa93c1b-catalog-content\") pod \"certified-operators-slb5x\" (UID: \"526befb5-6b6b-4fa1-a09b-634deaa93c1b\") " pod="openshift-marketplace/certified-operators-slb5x" Feb 17 16:35:40 crc 
kubenswrapper[4672]: I0217 16:35:40.802379 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/526befb5-6b6b-4fa1-a09b-634deaa93c1b-utilities\") pod \"certified-operators-slb5x\" (UID: \"526befb5-6b6b-4fa1-a09b-634deaa93c1b\") " pod="openshift-marketplace/certified-operators-slb5x" Feb 17 16:35:40 crc kubenswrapper[4672]: I0217 16:35:40.826334 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnqkv\" (UniqueName: \"kubernetes.io/projected/526befb5-6b6b-4fa1-a09b-634deaa93c1b-kube-api-access-nnqkv\") pod \"certified-operators-slb5x\" (UID: \"526befb5-6b6b-4fa1-a09b-634deaa93c1b\") " pod="openshift-marketplace/certified-operators-slb5x" Feb 17 16:35:40 crc kubenswrapper[4672]: I0217 16:35:40.922965 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-slb5x" Feb 17 16:35:41 crc kubenswrapper[4672]: I0217 16:35:41.482795 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-slb5x"] Feb 17 16:35:42 crc kubenswrapper[4672]: I0217 16:35:42.420982 4672 generic.go:334] "Generic (PLEG): container finished" podID="526befb5-6b6b-4fa1-a09b-634deaa93c1b" containerID="a42d8dd199da0b568f0dd96b3cda8d29a0cd5f1e124e5ddf7add795764730b40" exitCode=0 Feb 17 16:35:42 crc kubenswrapper[4672]: I0217 16:35:42.421033 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-slb5x" event={"ID":"526befb5-6b6b-4fa1-a09b-634deaa93c1b","Type":"ContainerDied","Data":"a42d8dd199da0b568f0dd96b3cda8d29a0cd5f1e124e5ddf7add795764730b40"} Feb 17 16:35:42 crc kubenswrapper[4672]: I0217 16:35:42.421296 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-slb5x" 
event={"ID":"526befb5-6b6b-4fa1-a09b-634deaa93c1b","Type":"ContainerStarted","Data":"1dc9c581a6f3ad0f6111b0287be36450fa0ad6ee58902073db4980b9a2bb1edb"} Feb 17 16:35:42 crc kubenswrapper[4672]: I0217 16:35:42.423881 4672 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:35:42 crc kubenswrapper[4672]: E0217 16:35:42.946552 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:35:44 crc kubenswrapper[4672]: I0217 16:35:44.444733 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-slb5x" event={"ID":"526befb5-6b6b-4fa1-a09b-634deaa93c1b","Type":"ContainerStarted","Data":"11dd95cc88824a325c31781ebbeaea9af3822c1337658c1c3e362df1f8dbffdf"} Feb 17 16:35:47 crc kubenswrapper[4672]: I0217 16:35:47.482971 4672 generic.go:334] "Generic (PLEG): container finished" podID="526befb5-6b6b-4fa1-a09b-634deaa93c1b" containerID="11dd95cc88824a325c31781ebbeaea9af3822c1337658c1c3e362df1f8dbffdf" exitCode=0 Feb 17 16:35:47 crc kubenswrapper[4672]: I0217 16:35:47.483048 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-slb5x" event={"ID":"526befb5-6b6b-4fa1-a09b-634deaa93c1b","Type":"ContainerDied","Data":"11dd95cc88824a325c31781ebbeaea9af3822c1337658c1c3e362df1f8dbffdf"} Feb 17 16:35:48 crc kubenswrapper[4672]: I0217 16:35:48.496905 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-slb5x" event={"ID":"526befb5-6b6b-4fa1-a09b-634deaa93c1b","Type":"ContainerStarted","Data":"e6aed0c578a36fccda120f4105d738fb13d2aef88bd9b1b8da83ca735183ceef"} Feb 17 16:35:48 crc kubenswrapper[4672]: I0217 
16:35:48.527874 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-slb5x" podStartSLOduration=2.9947640890000002 podStartE2EDuration="8.527851451s" podCreationTimestamp="2026-02-17 16:35:40 +0000 UTC" firstStartedPulling="2026-02-17 16:35:42.423638054 +0000 UTC m=+1951.177726786" lastFinishedPulling="2026-02-17 16:35:47.956725406 +0000 UTC m=+1956.710814148" observedRunningTime="2026-02-17 16:35:48.514869509 +0000 UTC m=+1957.268958261" watchObservedRunningTime="2026-02-17 16:35:48.527851451 +0000 UTC m=+1957.281940193" Feb 17 16:35:49 crc kubenswrapper[4672]: E0217 16:35:49.946936 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:35:50 crc kubenswrapper[4672]: I0217 16:35:50.923733 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-slb5x" Feb 17 16:35:50 crc kubenswrapper[4672]: I0217 16:35:50.923787 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-slb5x" Feb 17 16:35:50 crc kubenswrapper[4672]: I0217 16:35:50.978697 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-slb5x" Feb 17 16:35:53 crc kubenswrapper[4672]: E0217 16:35:53.947267 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 
16:35:54 crc kubenswrapper[4672]: I0217 16:35:54.056254 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-c5jfb"]
Feb 17 16:35:54 crc kubenswrapper[4672]: I0217 16:35:54.068559 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-c5jfb"]
Feb 17 16:35:55 crc kubenswrapper[4672]: I0217 16:35:55.959208 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d89089a9-0ddc-4c81-a639-dd9dcf7e9163" path="/var/lib/kubelet/pods/d89089a9-0ddc-4c81-a639-dd9dcf7e9163/volumes"
Feb 17 16:36:00 crc kubenswrapper[4672]: I0217 16:36:00.031900 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4jq6r"]
Feb 17 16:36:00 crc kubenswrapper[4672]: I0217 16:36:00.042610 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4jq6r"]
Feb 17 16:36:00 crc kubenswrapper[4672]: E0217 16:36:00.947084 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:36:00 crc kubenswrapper[4672]: I0217 16:36:00.981625 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-slb5x"
Feb 17 16:36:01 crc kubenswrapper[4672]: I0217 16:36:01.964392 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="032644e0-8b08-4138-8e14-aee003b214d2" path="/var/lib/kubelet/pods/032644e0-8b08-4138-8e14-aee003b214d2/volumes"
Feb 17 16:36:01 crc kubenswrapper[4672]: I0217 16:36:01.978605 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-slb5x"]
Feb 17 16:36:01 crc kubenswrapper[4672]: I0217 16:36:01.978975 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-slb5x" podUID="526befb5-6b6b-4fa1-a09b-634deaa93c1b" containerName="registry-server" containerID="cri-o://e6aed0c578a36fccda120f4105d738fb13d2aef88bd9b1b8da83ca735183ceef" gracePeriod=2
Feb 17 16:36:02 crc kubenswrapper[4672]: I0217 16:36:02.580916 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-slb5x"
Feb 17 16:36:02 crc kubenswrapper[4672]: I0217 16:36:02.644330 4672 generic.go:334] "Generic (PLEG): container finished" podID="526befb5-6b6b-4fa1-a09b-634deaa93c1b" containerID="e6aed0c578a36fccda120f4105d738fb13d2aef88bd9b1b8da83ca735183ceef" exitCode=0
Feb 17 16:36:02 crc kubenswrapper[4672]: I0217 16:36:02.644371 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-slb5x"
Feb 17 16:36:02 crc kubenswrapper[4672]: I0217 16:36:02.644422 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-slb5x" event={"ID":"526befb5-6b6b-4fa1-a09b-634deaa93c1b","Type":"ContainerDied","Data":"e6aed0c578a36fccda120f4105d738fb13d2aef88bd9b1b8da83ca735183ceef"}
Feb 17 16:36:02 crc kubenswrapper[4672]: I0217 16:36:02.645056 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-slb5x" event={"ID":"526befb5-6b6b-4fa1-a09b-634deaa93c1b","Type":"ContainerDied","Data":"1dc9c581a6f3ad0f6111b0287be36450fa0ad6ee58902073db4980b9a2bb1edb"}
Feb 17 16:36:02 crc kubenswrapper[4672]: I0217 16:36:02.645118 4672 scope.go:117] "RemoveContainer" containerID="e6aed0c578a36fccda120f4105d738fb13d2aef88bd9b1b8da83ca735183ceef"
Feb 17 16:36:02 crc kubenswrapper[4672]: I0217 16:36:02.665107 4672 scope.go:117] "RemoveContainer" containerID="11dd95cc88824a325c31781ebbeaea9af3822c1337658c1c3e362df1f8dbffdf"
Feb 17 16:36:02 crc kubenswrapper[4672]: I0217 16:36:02.700504 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/526befb5-6b6b-4fa1-a09b-634deaa93c1b-utilities\") pod \"526befb5-6b6b-4fa1-a09b-634deaa93c1b\" (UID: \"526befb5-6b6b-4fa1-a09b-634deaa93c1b\") "
Feb 17 16:36:02 crc kubenswrapper[4672]: I0217 16:36:02.700748 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnqkv\" (UniqueName: \"kubernetes.io/projected/526befb5-6b6b-4fa1-a09b-634deaa93c1b-kube-api-access-nnqkv\") pod \"526befb5-6b6b-4fa1-a09b-634deaa93c1b\" (UID: \"526befb5-6b6b-4fa1-a09b-634deaa93c1b\") "
Feb 17 16:36:02 crc kubenswrapper[4672]: I0217 16:36:02.700811 4672 scope.go:117] "RemoveContainer" containerID="a42d8dd199da0b568f0dd96b3cda8d29a0cd5f1e124e5ddf7add795764730b40"
Feb 17 16:36:02 crc kubenswrapper[4672]: I0217 16:36:02.700861 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/526befb5-6b6b-4fa1-a09b-634deaa93c1b-catalog-content\") pod \"526befb5-6b6b-4fa1-a09b-634deaa93c1b\" (UID: \"526befb5-6b6b-4fa1-a09b-634deaa93c1b\") "
Feb 17 16:36:02 crc kubenswrapper[4672]: I0217 16:36:02.702133 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/526befb5-6b6b-4fa1-a09b-634deaa93c1b-utilities" (OuterVolumeSpecName: "utilities") pod "526befb5-6b6b-4fa1-a09b-634deaa93c1b" (UID: "526befb5-6b6b-4fa1-a09b-634deaa93c1b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:36:02 crc kubenswrapper[4672]: I0217 16:36:02.715886 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/526befb5-6b6b-4fa1-a09b-634deaa93c1b-kube-api-access-nnqkv" (OuterVolumeSpecName: "kube-api-access-nnqkv") pod "526befb5-6b6b-4fa1-a09b-634deaa93c1b" (UID: "526befb5-6b6b-4fa1-a09b-634deaa93c1b"). InnerVolumeSpecName "kube-api-access-nnqkv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:36:02 crc kubenswrapper[4672]: I0217 16:36:02.749129 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/526befb5-6b6b-4fa1-a09b-634deaa93c1b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "526befb5-6b6b-4fa1-a09b-634deaa93c1b" (UID: "526befb5-6b6b-4fa1-a09b-634deaa93c1b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:36:02 crc kubenswrapper[4672]: I0217 16:36:02.792715 4672 scope.go:117] "RemoveContainer" containerID="e6aed0c578a36fccda120f4105d738fb13d2aef88bd9b1b8da83ca735183ceef"
Feb 17 16:36:02 crc kubenswrapper[4672]: E0217 16:36:02.793285 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6aed0c578a36fccda120f4105d738fb13d2aef88bd9b1b8da83ca735183ceef\": container with ID starting with e6aed0c578a36fccda120f4105d738fb13d2aef88bd9b1b8da83ca735183ceef not found: ID does not exist" containerID="e6aed0c578a36fccda120f4105d738fb13d2aef88bd9b1b8da83ca735183ceef"
Feb 17 16:36:02 crc kubenswrapper[4672]: I0217 16:36:02.793325 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6aed0c578a36fccda120f4105d738fb13d2aef88bd9b1b8da83ca735183ceef"} err="failed to get container status \"e6aed0c578a36fccda120f4105d738fb13d2aef88bd9b1b8da83ca735183ceef\": rpc error: code = NotFound desc = could not find container \"e6aed0c578a36fccda120f4105d738fb13d2aef88bd9b1b8da83ca735183ceef\": container with ID starting with e6aed0c578a36fccda120f4105d738fb13d2aef88bd9b1b8da83ca735183ceef not found: ID does not exist"
Feb 17 16:36:02 crc kubenswrapper[4672]: I0217 16:36:02.793349 4672 scope.go:117] "RemoveContainer" containerID="11dd95cc88824a325c31781ebbeaea9af3822c1337658c1c3e362df1f8dbffdf"
Feb 17 16:36:02 crc kubenswrapper[4672]: E0217 16:36:02.793619 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11dd95cc88824a325c31781ebbeaea9af3822c1337658c1c3e362df1f8dbffdf\": container with ID starting with 11dd95cc88824a325c31781ebbeaea9af3822c1337658c1c3e362df1f8dbffdf not found: ID does not exist" containerID="11dd95cc88824a325c31781ebbeaea9af3822c1337658c1c3e362df1f8dbffdf"
Feb 17 16:36:02 crc kubenswrapper[4672]: I0217 16:36:02.793647 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11dd95cc88824a325c31781ebbeaea9af3822c1337658c1c3e362df1f8dbffdf"} err="failed to get container status \"11dd95cc88824a325c31781ebbeaea9af3822c1337658c1c3e362df1f8dbffdf\": rpc error: code = NotFound desc = could not find container \"11dd95cc88824a325c31781ebbeaea9af3822c1337658c1c3e362df1f8dbffdf\": container with ID starting with 11dd95cc88824a325c31781ebbeaea9af3822c1337658c1c3e362df1f8dbffdf not found: ID does not exist"
Feb 17 16:36:02 crc kubenswrapper[4672]: I0217 16:36:02.793664 4672 scope.go:117] "RemoveContainer" containerID="a42d8dd199da0b568f0dd96b3cda8d29a0cd5f1e124e5ddf7add795764730b40"
Feb 17 16:36:02 crc kubenswrapper[4672]: E0217 16:36:02.794043 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a42d8dd199da0b568f0dd96b3cda8d29a0cd5f1e124e5ddf7add795764730b40\": container with ID starting with a42d8dd199da0b568f0dd96b3cda8d29a0cd5f1e124e5ddf7add795764730b40 not found: ID does not exist" containerID="a42d8dd199da0b568f0dd96b3cda8d29a0cd5f1e124e5ddf7add795764730b40"
Feb 17 16:36:02 crc kubenswrapper[4672]: I0217 16:36:02.794067 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a42d8dd199da0b568f0dd96b3cda8d29a0cd5f1e124e5ddf7add795764730b40"} err="failed to get container status \"a42d8dd199da0b568f0dd96b3cda8d29a0cd5f1e124e5ddf7add795764730b40\": rpc error: code = NotFound desc = could not find container \"a42d8dd199da0b568f0dd96b3cda8d29a0cd5f1e124e5ddf7add795764730b40\": container with ID starting with a42d8dd199da0b568f0dd96b3cda8d29a0cd5f1e124e5ddf7add795764730b40 not found: ID does not exist"
Feb 17 16:36:02 crc kubenswrapper[4672]: I0217 16:36:02.803856 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/526befb5-6b6b-4fa1-a09b-634deaa93c1b-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 16:36:02 crc kubenswrapper[4672]: I0217 16:36:02.803898 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnqkv\" (UniqueName: \"kubernetes.io/projected/526befb5-6b6b-4fa1-a09b-634deaa93c1b-kube-api-access-nnqkv\") on node \"crc\" DevicePath \"\""
Feb 17 16:36:02 crc kubenswrapper[4672]: I0217 16:36:02.803912 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/526befb5-6b6b-4fa1-a09b-634deaa93c1b-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 16:36:03 crc kubenswrapper[4672]: I0217 16:36:03.014254 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-slb5x"]
Feb 17 16:36:03 crc kubenswrapper[4672]: I0217 16:36:03.029365 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-slb5x"]
Feb 17 16:36:03 crc kubenswrapper[4672]: I0217 16:36:03.958921 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir"
podUID="526befb5-6b6b-4fa1-a09b-634deaa93c1b" path="/var/lib/kubelet/pods/526befb5-6b6b-4fa1-a09b-634deaa93c1b/volumes"
Feb 17 16:36:06 crc kubenswrapper[4672]: E0217 16:36:06.948163 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:36:13 crc kubenswrapper[4672]: E0217 16:36:13.948294 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:36:18 crc kubenswrapper[4672]: I0217 16:36:18.039238 4672 scope.go:117] "RemoveContainer" containerID="8f2a24d95a39e2bdc52a59a549c1d20dc5cd9223153269c654366b9b645808b5"
Feb 17 16:36:18 crc kubenswrapper[4672]: I0217 16:36:18.114729 4672 scope.go:117] "RemoveContainer" containerID="b493038888d06908685ef6a56b380cf4b3ea8e5e5b0673760d178413b0c5d528"
Feb 17 16:36:18 crc kubenswrapper[4672]: I0217 16:36:18.161598 4672 scope.go:117] "RemoveContainer" containerID="74545cce8d094ef3f457ff7851b63c1ecf4112b386b93e028a52cbbbe186b9d0"
Feb 17 16:36:21 crc kubenswrapper[4672]: E0217 16:36:21.956229 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:36:24 crc kubenswrapper[4672]: E0217 16:36:24.946829 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:36:35 crc kubenswrapper[4672]: E0217 16:36:35.947734 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:36:36 crc kubenswrapper[4672]: E0217 16:36:36.947372 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:36:42 crc kubenswrapper[4672]: I0217 16:36:42.039249 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-87krq"]
Feb 17 16:36:42 crc kubenswrapper[4672]: I0217 16:36:42.046243 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-87krq"]
Feb 17 16:36:43 crc kubenswrapper[4672]: I0217 16:36:43.959875 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6b2f22c-613e-4774-b353-a90ff22bfba3" path="/var/lib/kubelet/pods/b6b2f22c-613e-4774-b353-a90ff22bfba3/volumes"
Feb 17 16:36:50 crc kubenswrapper[4672]: E0217 16:36:50.947803 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:36:51 crc kubenswrapper[4672]: E0217 16:36:51.954692 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:36:57 crc kubenswrapper[4672]: I0217 16:36:57.565735 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 16:36:57 crc kubenswrapper[4672]: I0217 16:36:57.566236 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 16:37:03 crc kubenswrapper[4672]: E0217 16:37:03.947392 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:37:03 crc kubenswrapper[4672]: E0217 16:37:03.948140 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:37:15 crc kubenswrapper[4672]: E0217 16:37:15.947737 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:37:17 crc kubenswrapper[4672]: E0217 16:37:17.947826 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:37:18 crc kubenswrapper[4672]: I0217 16:37:18.324528 4672 scope.go:117] "RemoveContainer" containerID="c9c5ab7c921496df8eca632d56907825bf43bb3644be5f77f64d5cb3bd894d99"
Feb 17 16:37:27 crc kubenswrapper[4672]: I0217 16:37:27.565992 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 16:37:27 crc kubenswrapper[4672]: I0217 16:37:27.566609 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 16:37:28 crc kubenswrapper[4672]: E0217 16:37:28.947721 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:37:29 crc kubenswrapper[4672]: E0217 16:37:29.946234 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:37:40 crc kubenswrapper[4672]: E0217 16:37:40.947948 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:37:41 crc kubenswrapper[4672]: E0217 16:37:41.952673 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:37:42 crc kubenswrapper[4672]: I0217 16:37:42.023446 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pwzck"]
Feb 17 16:37:42 crc kubenswrapper[4672]: E0217 16:37:42.023977 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="526befb5-6b6b-4fa1-a09b-634deaa93c1b" containerName="extract-utilities"
Feb 17 16:37:42 crc kubenswrapper[4672]: I0217 16:37:42.023997 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="526befb5-6b6b-4fa1-a09b-634deaa93c1b" containerName="extract-utilities"
Feb 17 16:37:42 crc kubenswrapper[4672]: E0217 16:37:42.024017 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="526befb5-6b6b-4fa1-a09b-634deaa93c1b" containerName="registry-server"
Feb 17 16:37:42 crc kubenswrapper[4672]: I0217 16:37:42.024025 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="526befb5-6b6b-4fa1-a09b-634deaa93c1b" containerName="registry-server"
Feb 17 16:37:42 crc kubenswrapper[4672]: E0217 16:37:42.024043 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="526befb5-6b6b-4fa1-a09b-634deaa93c1b" containerName="extract-content"
Feb 17 16:37:42 crc kubenswrapper[4672]: I0217 16:37:42.024072 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="526befb5-6b6b-4fa1-a09b-634deaa93c1b" containerName="extract-content"
Feb 17 16:37:42 crc kubenswrapper[4672]: I0217 16:37:42.024304 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="526befb5-6b6b-4fa1-a09b-634deaa93c1b" containerName="registry-server"
Feb 17 16:37:42 crc kubenswrapper[4672]: I0217 16:37:42.026249 4672 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-pwzck"
Feb 17 16:37:42 crc kubenswrapper[4672]: I0217 16:37:42.039099 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pwzck"]
Feb 17 16:37:42 crc kubenswrapper[4672]: I0217 16:37:42.187227 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76zpf\" (UniqueName: \"kubernetes.io/projected/bb6baf03-277f-4eb3-b22e-ff73af698c20-kube-api-access-76zpf\") pod \"redhat-operators-pwzck\" (UID: \"bb6baf03-277f-4eb3-b22e-ff73af698c20\") " pod="openshift-marketplace/redhat-operators-pwzck"
Feb 17 16:37:42 crc kubenswrapper[4672]: I0217 16:37:42.187288 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb6baf03-277f-4eb3-b22e-ff73af698c20-catalog-content\") pod \"redhat-operators-pwzck\" (UID: \"bb6baf03-277f-4eb3-b22e-ff73af698c20\") " pod="openshift-marketplace/redhat-operators-pwzck"
Feb 17 16:37:42 crc kubenswrapper[4672]: I0217 16:37:42.187368 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb6baf03-277f-4eb3-b22e-ff73af698c20-utilities\") pod \"redhat-operators-pwzck\" (UID: \"bb6baf03-277f-4eb3-b22e-ff73af698c20\") " pod="openshift-marketplace/redhat-operators-pwzck"
Feb 17 16:37:42 crc kubenswrapper[4672]: I0217 16:37:42.289423 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76zpf\" (UniqueName: \"kubernetes.io/projected/bb6baf03-277f-4eb3-b22e-ff73af698c20-kube-api-access-76zpf\") pod \"redhat-operators-pwzck\" (UID: \"bb6baf03-277f-4eb3-b22e-ff73af698c20\") " pod="openshift-marketplace/redhat-operators-pwzck"
Feb 17 16:37:42 crc kubenswrapper[4672]: I0217 16:37:42.289498 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb6baf03-277f-4eb3-b22e-ff73af698c20-catalog-content\") pod \"redhat-operators-pwzck\" (UID: \"bb6baf03-277f-4eb3-b22e-ff73af698c20\") " pod="openshift-marketplace/redhat-operators-pwzck"
Feb 17 16:37:42 crc kubenswrapper[4672]: I0217 16:37:42.289607 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb6baf03-277f-4eb3-b22e-ff73af698c20-utilities\") pod \"redhat-operators-pwzck\" (UID: \"bb6baf03-277f-4eb3-b22e-ff73af698c20\") " pod="openshift-marketplace/redhat-operators-pwzck"
Feb 17 16:37:42 crc kubenswrapper[4672]: I0217 16:37:42.290065 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb6baf03-277f-4eb3-b22e-ff73af698c20-catalog-content\") pod \"redhat-operators-pwzck\" (UID: \"bb6baf03-277f-4eb3-b22e-ff73af698c20\") " pod="openshift-marketplace/redhat-operators-pwzck"
Feb 17 16:37:42 crc kubenswrapper[4672]: I0217 16:37:42.290084 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb6baf03-277f-4eb3-b22e-ff73af698c20-utilities\") pod \"redhat-operators-pwzck\" (UID: \"bb6baf03-277f-4eb3-b22e-ff73af698c20\") " pod="openshift-marketplace/redhat-operators-pwzck"
Feb 17 16:37:42 crc kubenswrapper[4672]: I0217 16:37:42.309726 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76zpf\" (UniqueName: \"kubernetes.io/projected/bb6baf03-277f-4eb3-b22e-ff73af698c20-kube-api-access-76zpf\") pod \"redhat-operators-pwzck\" (UID: \"bb6baf03-277f-4eb3-b22e-ff73af698c20\") " pod="openshift-marketplace/redhat-operators-pwzck"
Feb 17 16:37:42 crc kubenswrapper[4672]: I0217 16:37:42.350664 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pwzck"
Feb 17 16:37:42 crc kubenswrapper[4672]: I0217 16:37:42.808810 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pwzck"]
Feb 17 16:37:43 crc kubenswrapper[4672]: I0217 16:37:43.653839 4672 generic.go:334] "Generic (PLEG): container finished" podID="bb6baf03-277f-4eb3-b22e-ff73af698c20" containerID="d73b906bec5464eef52f4ce4c8bf2877247963ba1c404d0aeaf429cca7df7ab8" exitCode=0
Feb 17 16:37:43 crc kubenswrapper[4672]: I0217 16:37:43.653884 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwzck" event={"ID":"bb6baf03-277f-4eb3-b22e-ff73af698c20","Type":"ContainerDied","Data":"d73b906bec5464eef52f4ce4c8bf2877247963ba1c404d0aeaf429cca7df7ab8"}
Feb 17 16:37:43 crc kubenswrapper[4672]: I0217 16:37:43.654261 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwzck" event={"ID":"bb6baf03-277f-4eb3-b22e-ff73af698c20","Type":"ContainerStarted","Data":"a1d13b88a48dfc5e04c48e029cce8ea9e1e52cbe947b7d11fdb768e94b8cd0ce"}
Feb 17 16:37:44 crc kubenswrapper[4672]: I0217 16:37:44.667172 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwzck" event={"ID":"bb6baf03-277f-4eb3-b22e-ff73af698c20","Type":"ContainerStarted","Data":"8847f2db5a8677c2728acbe6d094030f9c649eabadb44a552bac8e9791c18cee"}
Feb 17 16:37:47 crc kubenswrapper[4672]: I0217 16:37:47.398889 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bv6gm"]
Feb 17 16:37:47 crc kubenswrapper[4672]: I0217 16:37:47.401911 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bv6gm"
Feb 17 16:37:47 crc kubenswrapper[4672]: I0217 16:37:47.419690 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bv6gm"]
Feb 17 16:37:47 crc kubenswrapper[4672]: I0217 16:37:47.603740 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c21930b-1c76-4ed2-96dd-d926a6f847ca-utilities\") pod \"redhat-marketplace-bv6gm\" (UID: \"5c21930b-1c76-4ed2-96dd-d926a6f847ca\") " pod="openshift-marketplace/redhat-marketplace-bv6gm"
Feb 17 16:37:47 crc kubenswrapper[4672]: I0217 16:37:47.603832 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l6bp\" (UniqueName: \"kubernetes.io/projected/5c21930b-1c76-4ed2-96dd-d926a6f847ca-kube-api-access-2l6bp\") pod \"redhat-marketplace-bv6gm\" (UID: \"5c21930b-1c76-4ed2-96dd-d926a6f847ca\") " pod="openshift-marketplace/redhat-marketplace-bv6gm"
Feb 17 16:37:47 crc kubenswrapper[4672]: I0217 16:37:47.604750 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c21930b-1c76-4ed2-96dd-d926a6f847ca-catalog-content\") pod \"redhat-marketplace-bv6gm\" (UID: \"5c21930b-1c76-4ed2-96dd-d926a6f847ca\") " pod="openshift-marketplace/redhat-marketplace-bv6gm"
Feb 17 16:37:47 crc kubenswrapper[4672]: I0217 16:37:47.706464 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c21930b-1c76-4ed2-96dd-d926a6f847ca-catalog-content\") pod \"redhat-marketplace-bv6gm\" (UID: \"5c21930b-1c76-4ed2-96dd-d926a6f847ca\") " pod="openshift-marketplace/redhat-marketplace-bv6gm"
Feb 17 16:37:47 crc kubenswrapper[4672]: I0217 16:37:47.706898 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c21930b-1c76-4ed2-96dd-d926a6f847ca-utilities\") pod \"redhat-marketplace-bv6gm\" (UID: \"5c21930b-1c76-4ed2-96dd-d926a6f847ca\") " pod="openshift-marketplace/redhat-marketplace-bv6gm"
Feb 17 16:37:47 crc kubenswrapper[4672]: I0217 16:37:47.706953 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l6bp\" (UniqueName: \"kubernetes.io/projected/5c21930b-1c76-4ed2-96dd-d926a6f847ca-kube-api-access-2l6bp\") pod \"redhat-marketplace-bv6gm\" (UID: \"5c21930b-1c76-4ed2-96dd-d926a6f847ca\") " pod="openshift-marketplace/redhat-marketplace-bv6gm"
Feb 17 16:37:47 crc kubenswrapper[4672]: I0217 16:37:47.911799 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c21930b-1c76-4ed2-96dd-d926a6f847ca-catalog-content\") pod \"redhat-marketplace-bv6gm\" (UID: \"5c21930b-1c76-4ed2-96dd-d926a6f847ca\") " pod="openshift-marketplace/redhat-marketplace-bv6gm"
Feb 17 16:37:47 crc kubenswrapper[4672]: I0217 16:37:47.911862 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c21930b-1c76-4ed2-96dd-d926a6f847ca-utilities\") pod \"redhat-marketplace-bv6gm\" (UID: \"5c21930b-1c76-4ed2-96dd-d926a6f847ca\") " pod="openshift-marketplace/redhat-marketplace-bv6gm"
Feb 17 16:37:47 crc kubenswrapper[4672]: I0217 16:37:47.931698 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l6bp\" (UniqueName: \"kubernetes.io/projected/5c21930b-1c76-4ed2-96dd-d926a6f847ca-kube-api-access-2l6bp\") pod \"redhat-marketplace-bv6gm\" (UID: \"5c21930b-1c76-4ed2-96dd-d926a6f847ca\") " pod="openshift-marketplace/redhat-marketplace-bv6gm"
Feb 17 16:37:48 crc kubenswrapper[4672]: I0217 16:37:48.023423 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bv6gm"
Feb 17 16:37:48 crc kubenswrapper[4672]: W0217 16:37:48.501228 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c21930b_1c76_4ed2_96dd_d926a6f847ca.slice/crio-2d5f2bad2e804f80a3a4086bcc6cd8235e1e10a1b43219412676f2b24ef1afc6 WatchSource:0}: Error finding container 2d5f2bad2e804f80a3a4086bcc6cd8235e1e10a1b43219412676f2b24ef1afc6: Status 404 returned error can't find the container with id 2d5f2bad2e804f80a3a4086bcc6cd8235e1e10a1b43219412676f2b24ef1afc6
Feb 17 16:37:48 crc kubenswrapper[4672]: I0217 16:37:48.507447 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bv6gm"]
Feb 17 16:37:48 crc kubenswrapper[4672]: I0217 16:37:48.705393 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bv6gm" event={"ID":"5c21930b-1c76-4ed2-96dd-d926a6f847ca","Type":"ContainerStarted","Data":"2d5f2bad2e804f80a3a4086bcc6cd8235e1e10a1b43219412676f2b24ef1afc6"}
Feb 17 16:37:49 crc kubenswrapper[4672]: I0217 16:37:49.716285 4672 generic.go:334] "Generic (PLEG): container finished" podID="5c21930b-1c76-4ed2-96dd-d926a6f847ca" containerID="3427bae713c77c6224dca77152c5c5338c69340e7c5e4f086b74165ddf050531" exitCode=0
Feb 17 16:37:49 crc kubenswrapper[4672]: I0217 16:37:49.716487 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bv6gm" event={"ID":"5c21930b-1c76-4ed2-96dd-d926a6f847ca","Type":"ContainerDied","Data":"3427bae713c77c6224dca77152c5c5338c69340e7c5e4f086b74165ddf050531"}
Feb 17 16:37:51 crc kubenswrapper[4672]: I0217 16:37:51.742248 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bv6gm"
event={"ID":"5c21930b-1c76-4ed2-96dd-d926a6f847ca","Type":"ContainerStarted","Data":"f0f5b47d10c485fb32d78535719e9d7ecbc624dcd12d5e03e43ea6bdf6625888"}
Feb 17 16:37:52 crc kubenswrapper[4672]: E0217 16:37:52.041215 4672 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb6baf03_277f_4eb3_b22e_ff73af698c20.slice/crio-8847f2db5a8677c2728acbe6d094030f9c649eabadb44a552bac8e9791c18cee.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb6baf03_277f_4eb3_b22e_ff73af698c20.slice/crio-conmon-8847f2db5a8677c2728acbe6d094030f9c649eabadb44a552bac8e9791c18cee.scope\": RecentStats: unable to find data in memory cache]"
Feb 17 16:37:52 crc kubenswrapper[4672]: I0217 16:37:52.755854 4672 generic.go:334] "Generic (PLEG): container finished" podID="5c21930b-1c76-4ed2-96dd-d926a6f847ca" containerID="f0f5b47d10c485fb32d78535719e9d7ecbc624dcd12d5e03e43ea6bdf6625888" exitCode=0
Feb 17 16:37:52 crc kubenswrapper[4672]: I0217 16:37:52.755915 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bv6gm" event={"ID":"5c21930b-1c76-4ed2-96dd-d926a6f847ca","Type":"ContainerDied","Data":"f0f5b47d10c485fb32d78535719e9d7ecbc624dcd12d5e03e43ea6bdf6625888"}
Feb 17 16:37:52 crc kubenswrapper[4672]: I0217 16:37:52.761984 4672 generic.go:334] "Generic (PLEG): container finished" podID="bb6baf03-277f-4eb3-b22e-ff73af698c20" containerID="8847f2db5a8677c2728acbe6d094030f9c649eabadb44a552bac8e9791c18cee" exitCode=0
Feb 17 16:37:52 crc kubenswrapper[4672]: I0217 16:37:52.762043 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwzck" event={"ID":"bb6baf03-277f-4eb3-b22e-ff73af698c20","Type":"ContainerDied","Data":"8847f2db5a8677c2728acbe6d094030f9c649eabadb44a552bac8e9791c18cee"}
Feb 17 16:37:53 crc kubenswrapper[4672]: E0217 16:37:53.992622 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:37:54 crc kubenswrapper[4672]: E0217 16:37:54.120341 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested"
Feb 17 16:37:54 crc kubenswrapper[4672]: E0217 16:37:54.120400 4672 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested"
Feb 17 16:37:54 crc kubenswrapper[4672]: E0217 16:37:54.120559 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nq9ps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:n
il,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-qrhj8_openstack(dc5471f5-2491-4841-be45-09c8f14b35c0): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:37:54 crc kubenswrapper[4672]: E0217 16:37:54.121909 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:37:54 crc kubenswrapper[4672]: I0217 16:37:54.782994 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bv6gm" event={"ID":"5c21930b-1c76-4ed2-96dd-d926a6f847ca","Type":"ContainerStarted","Data":"e80d5ea8d5d3ea8556721204f98667304f6ad21aa9ad406754bd837fbf9db497"} Feb 17 16:37:54 crc kubenswrapper[4672]: I0217 16:37:54.786434 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwzck" event={"ID":"bb6baf03-277f-4eb3-b22e-ff73af698c20","Type":"ContainerStarted","Data":"71e0dc04b709aaa801edaf60e8ba2ff810115765a0bbe0f7a3471db19fca9d30"} Feb 17 16:37:54 crc kubenswrapper[4672]: I0217 16:37:54.806804 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bv6gm" podStartSLOduration=3.944894342 podStartE2EDuration="7.806781108s" podCreationTimestamp="2026-02-17 16:37:47 +0000 UTC" firstStartedPulling="2026-02-17 16:37:49.718181139 +0000 UTC m=+2078.472269881" lastFinishedPulling="2026-02-17 16:37:53.580067925 +0000 UTC m=+2082.334156647" observedRunningTime="2026-02-17 16:37:54.799669101 +0000 UTC m=+2083.553757843" watchObservedRunningTime="2026-02-17 16:37:54.806781108 +0000 UTC m=+2083.560869850" Feb 17 16:37:54 crc kubenswrapper[4672]: I0217 16:37:54.830818 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pwzck" podStartSLOduration=2.400473625 podStartE2EDuration="12.830800292s" podCreationTimestamp="2026-02-17 16:37:42 +0000 UTC" firstStartedPulling="2026-02-17 16:37:43.657292133 +0000 UTC m=+2072.411380875" lastFinishedPulling="2026-02-17 16:37:54.08761881 +0000 UTC m=+2082.841707542" observedRunningTime="2026-02-17 16:37:54.815699994 +0000 UTC m=+2083.569788736" watchObservedRunningTime="2026-02-17 
16:37:54.830800292 +0000 UTC m=+2083.584889024" Feb 17 16:37:57 crc kubenswrapper[4672]: I0217 16:37:57.566345 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:37:57 crc kubenswrapper[4672]: I0217 16:37:57.567168 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:37:57 crc kubenswrapper[4672]: I0217 16:37:57.567249 4672 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" Feb 17 16:37:57 crc kubenswrapper[4672]: I0217 16:37:57.568474 4672 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"158298a2a36ba607ce0910ea7f9e6b7d51481499aa0d19f04c8d953ba6d1effc"} pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:37:57 crc kubenswrapper[4672]: I0217 16:37:57.568614 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" containerID="cri-o://158298a2a36ba607ce0910ea7f9e6b7d51481499aa0d19f04c8d953ba6d1effc" gracePeriod=600 Feb 17 16:37:57 crc kubenswrapper[4672]: I0217 16:37:57.819609 4672 generic.go:334] "Generic (PLEG): container finished" podID="fa9cd2c6-74a5-4567-a141-be56c668e566" 
containerID="158298a2a36ba607ce0910ea7f9e6b7d51481499aa0d19f04c8d953ba6d1effc" exitCode=0 Feb 17 16:37:57 crc kubenswrapper[4672]: I0217 16:37:57.819652 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerDied","Data":"158298a2a36ba607ce0910ea7f9e6b7d51481499aa0d19f04c8d953ba6d1effc"} Feb 17 16:37:57 crc kubenswrapper[4672]: I0217 16:37:57.819686 4672 scope.go:117] "RemoveContainer" containerID="788d6fae0de977927563b863088aef42316f3581ec13b8d2264de7cde8aac261" Feb 17 16:37:58 crc kubenswrapper[4672]: I0217 16:37:58.023984 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bv6gm" Feb 17 16:37:58 crc kubenswrapper[4672]: I0217 16:37:58.024419 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bv6gm" Feb 17 16:37:58 crc kubenswrapper[4672]: I0217 16:37:58.081414 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bv6gm" Feb 17 16:37:58 crc kubenswrapper[4672]: I0217 16:37:58.833210 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerStarted","Data":"5ec360b5c785e82bf42002bb2ec43e9b549142da918f8b7cc88ceed207ebfec1"} Feb 17 16:37:58 crc kubenswrapper[4672]: I0217 16:37:58.896133 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bv6gm" Feb 17 16:37:58 crc kubenswrapper[4672]: I0217 16:37:58.950910 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bv6gm"] Feb 17 16:38:00 crc kubenswrapper[4672]: I0217 16:38:00.850206 4672 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-marketplace/redhat-marketplace-bv6gm" podUID="5c21930b-1c76-4ed2-96dd-d926a6f847ca" containerName="registry-server" containerID="cri-o://e80d5ea8d5d3ea8556721204f98667304f6ad21aa9ad406754bd837fbf9db497" gracePeriod=2 Feb 17 16:38:01 crc kubenswrapper[4672]: I0217 16:38:01.434192 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bv6gm" Feb 17 16:38:01 crc kubenswrapper[4672]: I0217 16:38:01.527959 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2l6bp\" (UniqueName: \"kubernetes.io/projected/5c21930b-1c76-4ed2-96dd-d926a6f847ca-kube-api-access-2l6bp\") pod \"5c21930b-1c76-4ed2-96dd-d926a6f847ca\" (UID: \"5c21930b-1c76-4ed2-96dd-d926a6f847ca\") " Feb 17 16:38:01 crc kubenswrapper[4672]: I0217 16:38:01.528339 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c21930b-1c76-4ed2-96dd-d926a6f847ca-utilities\") pod \"5c21930b-1c76-4ed2-96dd-d926a6f847ca\" (UID: \"5c21930b-1c76-4ed2-96dd-d926a6f847ca\") " Feb 17 16:38:01 crc kubenswrapper[4672]: I0217 16:38:01.528480 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c21930b-1c76-4ed2-96dd-d926a6f847ca-catalog-content\") pod \"5c21930b-1c76-4ed2-96dd-d926a6f847ca\" (UID: \"5c21930b-1c76-4ed2-96dd-d926a6f847ca\") " Feb 17 16:38:01 crc kubenswrapper[4672]: I0217 16:38:01.529226 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c21930b-1c76-4ed2-96dd-d926a6f847ca-utilities" (OuterVolumeSpecName: "utilities") pod "5c21930b-1c76-4ed2-96dd-d926a6f847ca" (UID: "5c21930b-1c76-4ed2-96dd-d926a6f847ca"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:38:01 crc kubenswrapper[4672]: I0217 16:38:01.532820 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c21930b-1c76-4ed2-96dd-d926a6f847ca-kube-api-access-2l6bp" (OuterVolumeSpecName: "kube-api-access-2l6bp") pod "5c21930b-1c76-4ed2-96dd-d926a6f847ca" (UID: "5c21930b-1c76-4ed2-96dd-d926a6f847ca"). InnerVolumeSpecName "kube-api-access-2l6bp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:38:01 crc kubenswrapper[4672]: I0217 16:38:01.553072 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c21930b-1c76-4ed2-96dd-d926a6f847ca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c21930b-1c76-4ed2-96dd-d926a6f847ca" (UID: "5c21930b-1c76-4ed2-96dd-d926a6f847ca"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:38:01 crc kubenswrapper[4672]: I0217 16:38:01.630868 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c21930b-1c76-4ed2-96dd-d926a6f847ca-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:38:01 crc kubenswrapper[4672]: I0217 16:38:01.630905 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2l6bp\" (UniqueName: \"kubernetes.io/projected/5c21930b-1c76-4ed2-96dd-d926a6f847ca-kube-api-access-2l6bp\") on node \"crc\" DevicePath \"\"" Feb 17 16:38:01 crc kubenswrapper[4672]: I0217 16:38:01.630915 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c21930b-1c76-4ed2-96dd-d926a6f847ca-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:38:01 crc kubenswrapper[4672]: I0217 16:38:01.862811 4672 generic.go:334] "Generic (PLEG): container finished" podID="5c21930b-1c76-4ed2-96dd-d926a6f847ca" 
containerID="e80d5ea8d5d3ea8556721204f98667304f6ad21aa9ad406754bd837fbf9db497" exitCode=0 Feb 17 16:38:01 crc kubenswrapper[4672]: I0217 16:38:01.862856 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bv6gm" event={"ID":"5c21930b-1c76-4ed2-96dd-d926a6f847ca","Type":"ContainerDied","Data":"e80d5ea8d5d3ea8556721204f98667304f6ad21aa9ad406754bd837fbf9db497"} Feb 17 16:38:01 crc kubenswrapper[4672]: I0217 16:38:01.862877 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bv6gm" Feb 17 16:38:01 crc kubenswrapper[4672]: I0217 16:38:01.862899 4672 scope.go:117] "RemoveContainer" containerID="e80d5ea8d5d3ea8556721204f98667304f6ad21aa9ad406754bd837fbf9db497" Feb 17 16:38:01 crc kubenswrapper[4672]: I0217 16:38:01.862886 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bv6gm" event={"ID":"5c21930b-1c76-4ed2-96dd-d926a6f847ca","Type":"ContainerDied","Data":"2d5f2bad2e804f80a3a4086bcc6cd8235e1e10a1b43219412676f2b24ef1afc6"} Feb 17 16:38:01 crc kubenswrapper[4672]: I0217 16:38:01.894437 4672 scope.go:117] "RemoveContainer" containerID="f0f5b47d10c485fb32d78535719e9d7ecbc624dcd12d5e03e43ea6bdf6625888" Feb 17 16:38:01 crc kubenswrapper[4672]: I0217 16:38:01.904249 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bv6gm"] Feb 17 16:38:01 crc kubenswrapper[4672]: I0217 16:38:01.916046 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bv6gm"] Feb 17 16:38:01 crc kubenswrapper[4672]: I0217 16:38:01.925909 4672 scope.go:117] "RemoveContainer" containerID="3427bae713c77c6224dca77152c5c5338c69340e7c5e4f086b74165ddf050531" Feb 17 16:38:01 crc kubenswrapper[4672]: I0217 16:38:01.957145 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c21930b-1c76-4ed2-96dd-d926a6f847ca" 
path="/var/lib/kubelet/pods/5c21930b-1c76-4ed2-96dd-d926a6f847ca/volumes" Feb 17 16:38:01 crc kubenswrapper[4672]: I0217 16:38:01.964426 4672 scope.go:117] "RemoveContainer" containerID="e80d5ea8d5d3ea8556721204f98667304f6ad21aa9ad406754bd837fbf9db497" Feb 17 16:38:01 crc kubenswrapper[4672]: E0217 16:38:01.964976 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e80d5ea8d5d3ea8556721204f98667304f6ad21aa9ad406754bd837fbf9db497\": container with ID starting with e80d5ea8d5d3ea8556721204f98667304f6ad21aa9ad406754bd837fbf9db497 not found: ID does not exist" containerID="e80d5ea8d5d3ea8556721204f98667304f6ad21aa9ad406754bd837fbf9db497" Feb 17 16:38:01 crc kubenswrapper[4672]: I0217 16:38:01.965030 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e80d5ea8d5d3ea8556721204f98667304f6ad21aa9ad406754bd837fbf9db497"} err="failed to get container status \"e80d5ea8d5d3ea8556721204f98667304f6ad21aa9ad406754bd837fbf9db497\": rpc error: code = NotFound desc = could not find container \"e80d5ea8d5d3ea8556721204f98667304f6ad21aa9ad406754bd837fbf9db497\": container with ID starting with e80d5ea8d5d3ea8556721204f98667304f6ad21aa9ad406754bd837fbf9db497 not found: ID does not exist" Feb 17 16:38:01 crc kubenswrapper[4672]: I0217 16:38:01.965059 4672 scope.go:117] "RemoveContainer" containerID="f0f5b47d10c485fb32d78535719e9d7ecbc624dcd12d5e03e43ea6bdf6625888" Feb 17 16:38:01 crc kubenswrapper[4672]: E0217 16:38:01.965497 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0f5b47d10c485fb32d78535719e9d7ecbc624dcd12d5e03e43ea6bdf6625888\": container with ID starting with f0f5b47d10c485fb32d78535719e9d7ecbc624dcd12d5e03e43ea6bdf6625888 not found: ID does not exist" containerID="f0f5b47d10c485fb32d78535719e9d7ecbc624dcd12d5e03e43ea6bdf6625888" Feb 17 16:38:01 crc kubenswrapper[4672]: 
I0217 16:38:01.965540 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0f5b47d10c485fb32d78535719e9d7ecbc624dcd12d5e03e43ea6bdf6625888"} err="failed to get container status \"f0f5b47d10c485fb32d78535719e9d7ecbc624dcd12d5e03e43ea6bdf6625888\": rpc error: code = NotFound desc = could not find container \"f0f5b47d10c485fb32d78535719e9d7ecbc624dcd12d5e03e43ea6bdf6625888\": container with ID starting with f0f5b47d10c485fb32d78535719e9d7ecbc624dcd12d5e03e43ea6bdf6625888 not found: ID does not exist" Feb 17 16:38:01 crc kubenswrapper[4672]: I0217 16:38:01.965557 4672 scope.go:117] "RemoveContainer" containerID="3427bae713c77c6224dca77152c5c5338c69340e7c5e4f086b74165ddf050531" Feb 17 16:38:01 crc kubenswrapper[4672]: E0217 16:38:01.965860 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3427bae713c77c6224dca77152c5c5338c69340e7c5e4f086b74165ddf050531\": container with ID starting with 3427bae713c77c6224dca77152c5c5338c69340e7c5e4f086b74165ddf050531 not found: ID does not exist" containerID="3427bae713c77c6224dca77152c5c5338c69340e7c5e4f086b74165ddf050531" Feb 17 16:38:01 crc kubenswrapper[4672]: I0217 16:38:01.965881 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3427bae713c77c6224dca77152c5c5338c69340e7c5e4f086b74165ddf050531"} err="failed to get container status \"3427bae713c77c6224dca77152c5c5338c69340e7c5e4f086b74165ddf050531\": rpc error: code = NotFound desc = could not find container \"3427bae713c77c6224dca77152c5c5338c69340e7c5e4f086b74165ddf050531\": container with ID starting with 3427bae713c77c6224dca77152c5c5338c69340e7c5e4f086b74165ddf050531 not found: ID does not exist" Feb 17 16:38:02 crc kubenswrapper[4672]: I0217 16:38:02.351843 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pwzck" Feb 17 16:38:02 crc 
kubenswrapper[4672]: I0217 16:38:02.352751 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pwzck" Feb 17 16:38:03 crc kubenswrapper[4672]: I0217 16:38:03.423788 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pwzck" podUID="bb6baf03-277f-4eb3-b22e-ff73af698c20" containerName="registry-server" probeResult="failure" output=< Feb 17 16:38:03 crc kubenswrapper[4672]: timeout: failed to connect service ":50051" within 1s Feb 17 16:38:03 crc kubenswrapper[4672]: > Feb 17 16:38:04 crc kubenswrapper[4672]: E0217 16:38:04.948157 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:38:06 crc kubenswrapper[4672]: E0217 16:38:06.947960 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:38:12 crc kubenswrapper[4672]: I0217 16:38:12.429639 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pwzck" Feb 17 16:38:12 crc kubenswrapper[4672]: I0217 16:38:12.485122 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pwzck" Feb 17 16:38:13 crc kubenswrapper[4672]: I0217 16:38:13.229350 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pwzck"] Feb 17 16:38:13 crc kubenswrapper[4672]: I0217 
16:38:13.991372 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pwzck" podUID="bb6baf03-277f-4eb3-b22e-ff73af698c20" containerName="registry-server" containerID="cri-o://71e0dc04b709aaa801edaf60e8ba2ff810115765a0bbe0f7a3471db19fca9d30" gracePeriod=2 Feb 17 16:38:14 crc kubenswrapper[4672]: I0217 16:38:14.541087 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pwzck" Feb 17 16:38:14 crc kubenswrapper[4672]: I0217 16:38:14.735084 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb6baf03-277f-4eb3-b22e-ff73af698c20-catalog-content\") pod \"bb6baf03-277f-4eb3-b22e-ff73af698c20\" (UID: \"bb6baf03-277f-4eb3-b22e-ff73af698c20\") " Feb 17 16:38:14 crc kubenswrapper[4672]: I0217 16:38:14.735292 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76zpf\" (UniqueName: \"kubernetes.io/projected/bb6baf03-277f-4eb3-b22e-ff73af698c20-kube-api-access-76zpf\") pod \"bb6baf03-277f-4eb3-b22e-ff73af698c20\" (UID: \"bb6baf03-277f-4eb3-b22e-ff73af698c20\") " Feb 17 16:38:14 crc kubenswrapper[4672]: I0217 16:38:14.735321 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb6baf03-277f-4eb3-b22e-ff73af698c20-utilities\") pod \"bb6baf03-277f-4eb3-b22e-ff73af698c20\" (UID: \"bb6baf03-277f-4eb3-b22e-ff73af698c20\") " Feb 17 16:38:14 crc kubenswrapper[4672]: I0217 16:38:14.736101 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb6baf03-277f-4eb3-b22e-ff73af698c20-utilities" (OuterVolumeSpecName: "utilities") pod "bb6baf03-277f-4eb3-b22e-ff73af698c20" (UID: "bb6baf03-277f-4eb3-b22e-ff73af698c20"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:38:14 crc kubenswrapper[4672]: I0217 16:38:14.736619 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb6baf03-277f-4eb3-b22e-ff73af698c20-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:38:14 crc kubenswrapper[4672]: I0217 16:38:14.743694 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb6baf03-277f-4eb3-b22e-ff73af698c20-kube-api-access-76zpf" (OuterVolumeSpecName: "kube-api-access-76zpf") pod "bb6baf03-277f-4eb3-b22e-ff73af698c20" (UID: "bb6baf03-277f-4eb3-b22e-ff73af698c20"). InnerVolumeSpecName "kube-api-access-76zpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:38:14 crc kubenswrapper[4672]: I0217 16:38:14.838938 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76zpf\" (UniqueName: \"kubernetes.io/projected/bb6baf03-277f-4eb3-b22e-ff73af698c20-kube-api-access-76zpf\") on node \"crc\" DevicePath \"\"" Feb 17 16:38:14 crc kubenswrapper[4672]: I0217 16:38:14.856009 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb6baf03-277f-4eb3-b22e-ff73af698c20-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb6baf03-277f-4eb3-b22e-ff73af698c20" (UID: "bb6baf03-277f-4eb3-b22e-ff73af698c20"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:38:14 crc kubenswrapper[4672]: I0217 16:38:14.940569 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb6baf03-277f-4eb3-b22e-ff73af698c20-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:38:15 crc kubenswrapper[4672]: I0217 16:38:15.006410 4672 generic.go:334] "Generic (PLEG): container finished" podID="bb6baf03-277f-4eb3-b22e-ff73af698c20" containerID="71e0dc04b709aaa801edaf60e8ba2ff810115765a0bbe0f7a3471db19fca9d30" exitCode=0 Feb 17 16:38:15 crc kubenswrapper[4672]: I0217 16:38:15.006500 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pwzck" Feb 17 16:38:15 crc kubenswrapper[4672]: I0217 16:38:15.007632 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwzck" event={"ID":"bb6baf03-277f-4eb3-b22e-ff73af698c20","Type":"ContainerDied","Data":"71e0dc04b709aaa801edaf60e8ba2ff810115765a0bbe0f7a3471db19fca9d30"} Feb 17 16:38:15 crc kubenswrapper[4672]: I0217 16:38:15.007732 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwzck" event={"ID":"bb6baf03-277f-4eb3-b22e-ff73af698c20","Type":"ContainerDied","Data":"a1d13b88a48dfc5e04c48e029cce8ea9e1e52cbe947b7d11fdb768e94b8cd0ce"} Feb 17 16:38:15 crc kubenswrapper[4672]: I0217 16:38:15.007767 4672 scope.go:117] "RemoveContainer" containerID="71e0dc04b709aaa801edaf60e8ba2ff810115765a0bbe0f7a3471db19fca9d30" Feb 17 16:38:15 crc kubenswrapper[4672]: I0217 16:38:15.031491 4672 scope.go:117] "RemoveContainer" containerID="8847f2db5a8677c2728acbe6d094030f9c649eabadb44a552bac8e9791c18cee" Feb 17 16:38:15 crc kubenswrapper[4672]: I0217 16:38:15.069750 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pwzck"] Feb 17 16:38:15 crc kubenswrapper[4672]: I0217 
16:38:15.070918 4672 scope.go:117] "RemoveContainer" containerID="d73b906bec5464eef52f4ce4c8bf2877247963ba1c404d0aeaf429cca7df7ab8" Feb 17 16:38:15 crc kubenswrapper[4672]: I0217 16:38:15.081775 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pwzck"] Feb 17 16:38:15 crc kubenswrapper[4672]: I0217 16:38:15.127138 4672 scope.go:117] "RemoveContainer" containerID="71e0dc04b709aaa801edaf60e8ba2ff810115765a0bbe0f7a3471db19fca9d30" Feb 17 16:38:15 crc kubenswrapper[4672]: E0217 16:38:15.127761 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71e0dc04b709aaa801edaf60e8ba2ff810115765a0bbe0f7a3471db19fca9d30\": container with ID starting with 71e0dc04b709aaa801edaf60e8ba2ff810115765a0bbe0f7a3471db19fca9d30 not found: ID does not exist" containerID="71e0dc04b709aaa801edaf60e8ba2ff810115765a0bbe0f7a3471db19fca9d30" Feb 17 16:38:15 crc kubenswrapper[4672]: I0217 16:38:15.127812 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71e0dc04b709aaa801edaf60e8ba2ff810115765a0bbe0f7a3471db19fca9d30"} err="failed to get container status \"71e0dc04b709aaa801edaf60e8ba2ff810115765a0bbe0f7a3471db19fca9d30\": rpc error: code = NotFound desc = could not find container \"71e0dc04b709aaa801edaf60e8ba2ff810115765a0bbe0f7a3471db19fca9d30\": container with ID starting with 71e0dc04b709aaa801edaf60e8ba2ff810115765a0bbe0f7a3471db19fca9d30 not found: ID does not exist" Feb 17 16:38:15 crc kubenswrapper[4672]: I0217 16:38:15.127844 4672 scope.go:117] "RemoveContainer" containerID="8847f2db5a8677c2728acbe6d094030f9c649eabadb44a552bac8e9791c18cee" Feb 17 16:38:15 crc kubenswrapper[4672]: E0217 16:38:15.128871 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8847f2db5a8677c2728acbe6d094030f9c649eabadb44a552bac8e9791c18cee\": container with ID 
starting with 8847f2db5a8677c2728acbe6d094030f9c649eabadb44a552bac8e9791c18cee not found: ID does not exist" containerID="8847f2db5a8677c2728acbe6d094030f9c649eabadb44a552bac8e9791c18cee" Feb 17 16:38:15 crc kubenswrapper[4672]: I0217 16:38:15.128926 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8847f2db5a8677c2728acbe6d094030f9c649eabadb44a552bac8e9791c18cee"} err="failed to get container status \"8847f2db5a8677c2728acbe6d094030f9c649eabadb44a552bac8e9791c18cee\": rpc error: code = NotFound desc = could not find container \"8847f2db5a8677c2728acbe6d094030f9c649eabadb44a552bac8e9791c18cee\": container with ID starting with 8847f2db5a8677c2728acbe6d094030f9c649eabadb44a552bac8e9791c18cee not found: ID does not exist" Feb 17 16:38:15 crc kubenswrapper[4672]: I0217 16:38:15.128952 4672 scope.go:117] "RemoveContainer" containerID="d73b906bec5464eef52f4ce4c8bf2877247963ba1c404d0aeaf429cca7df7ab8" Feb 17 16:38:15 crc kubenswrapper[4672]: E0217 16:38:15.129273 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d73b906bec5464eef52f4ce4c8bf2877247963ba1c404d0aeaf429cca7df7ab8\": container with ID starting with d73b906bec5464eef52f4ce4c8bf2877247963ba1c404d0aeaf429cca7df7ab8 not found: ID does not exist" containerID="d73b906bec5464eef52f4ce4c8bf2877247963ba1c404d0aeaf429cca7df7ab8" Feb 17 16:38:15 crc kubenswrapper[4672]: I0217 16:38:15.129315 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d73b906bec5464eef52f4ce4c8bf2877247963ba1c404d0aeaf429cca7df7ab8"} err="failed to get container status \"d73b906bec5464eef52f4ce4c8bf2877247963ba1c404d0aeaf429cca7df7ab8\": rpc error: code = NotFound desc = could not find container \"d73b906bec5464eef52f4ce4c8bf2877247963ba1c404d0aeaf429cca7df7ab8\": container with ID starting with d73b906bec5464eef52f4ce4c8bf2877247963ba1c404d0aeaf429cca7df7ab8 not found: 
ID does not exist" Feb 17 16:38:15 crc kubenswrapper[4672]: I0217 16:38:15.962541 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb6baf03-277f-4eb3-b22e-ff73af698c20" path="/var/lib/kubelet/pods/bb6baf03-277f-4eb3-b22e-ff73af698c20/volumes" Feb 17 16:38:18 crc kubenswrapper[4672]: E0217 16:38:18.946325 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:38:22 crc kubenswrapper[4672]: E0217 16:38:22.096820 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:38:22 crc kubenswrapper[4672]: E0217 16:38:22.097447 4672 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:38:22 crc kubenswrapper[4672]: E0217 16:38:22.097643 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66h7h644h64ch5f8h565hfch5dh56chfdh8hfdh5b5h567h6dh665h557h74h665hcbh96h659h554h589h57fh5d9h55h564hcfh5dhffhfdq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tx4bs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(9e58ce9b-ddd5-42bb-8e07-08a22c8871a5): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:38:22 crc kubenswrapper[4672]: E0217 16:38:22.098856 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:38:32 crc kubenswrapper[4672]: E0217 16:38:32.947642 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:38:33 crc kubenswrapper[4672]: E0217 16:38:33.946038 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:38:37 crc kubenswrapper[4672]: I0217 16:38:37.483500 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-54w9s"] Feb 17 16:38:37 crc kubenswrapper[4672]: E0217 16:38:37.484547 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb6baf03-277f-4eb3-b22e-ff73af698c20" containerName="registry-server" Feb 17 16:38:37 crc kubenswrapper[4672]: I0217 16:38:37.484564 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb6baf03-277f-4eb3-b22e-ff73af698c20" containerName="registry-server" Feb 17 16:38:37 crc kubenswrapper[4672]: E0217 16:38:37.484590 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c21930b-1c76-4ed2-96dd-d926a6f847ca" containerName="extract-utilities" Feb 17 16:38:37 crc kubenswrapper[4672]: I0217 16:38:37.484600 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c21930b-1c76-4ed2-96dd-d926a6f847ca" containerName="extract-utilities" Feb 17 16:38:37 crc kubenswrapper[4672]: E0217 16:38:37.484615 4672 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="bb6baf03-277f-4eb3-b22e-ff73af698c20" containerName="extract-content" Feb 17 16:38:37 crc kubenswrapper[4672]: I0217 16:38:37.484624 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb6baf03-277f-4eb3-b22e-ff73af698c20" containerName="extract-content" Feb 17 16:38:37 crc kubenswrapper[4672]: E0217 16:38:37.484648 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb6baf03-277f-4eb3-b22e-ff73af698c20" containerName="extract-utilities" Feb 17 16:38:37 crc kubenswrapper[4672]: I0217 16:38:37.484657 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb6baf03-277f-4eb3-b22e-ff73af698c20" containerName="extract-utilities" Feb 17 16:38:37 crc kubenswrapper[4672]: E0217 16:38:37.484686 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c21930b-1c76-4ed2-96dd-d926a6f847ca" containerName="extract-content" Feb 17 16:38:37 crc kubenswrapper[4672]: I0217 16:38:37.484694 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c21930b-1c76-4ed2-96dd-d926a6f847ca" containerName="extract-content" Feb 17 16:38:37 crc kubenswrapper[4672]: E0217 16:38:37.484713 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c21930b-1c76-4ed2-96dd-d926a6f847ca" containerName="registry-server" Feb 17 16:38:37 crc kubenswrapper[4672]: I0217 16:38:37.484722 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c21930b-1c76-4ed2-96dd-d926a6f847ca" containerName="registry-server" Feb 17 16:38:37 crc kubenswrapper[4672]: I0217 16:38:37.484967 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c21930b-1c76-4ed2-96dd-d926a6f847ca" containerName="registry-server" Feb 17 16:38:37 crc kubenswrapper[4672]: I0217 16:38:37.484994 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb6baf03-277f-4eb3-b22e-ff73af698c20" containerName="registry-server" Feb 17 16:38:37 crc kubenswrapper[4672]: I0217 16:38:37.489943 4672 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-54w9s" Feb 17 16:38:37 crc kubenswrapper[4672]: I0217 16:38:37.498588 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-54w9s"] Feb 17 16:38:37 crc kubenswrapper[4672]: I0217 16:38:37.554635 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9hn2\" (UniqueName: \"kubernetes.io/projected/c884aee4-c794-4386-a6d6-cd81b54b33d9-kube-api-access-m9hn2\") pod \"community-operators-54w9s\" (UID: \"c884aee4-c794-4386-a6d6-cd81b54b33d9\") " pod="openshift-marketplace/community-operators-54w9s" Feb 17 16:38:37 crc kubenswrapper[4672]: I0217 16:38:37.554747 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c884aee4-c794-4386-a6d6-cd81b54b33d9-catalog-content\") pod \"community-operators-54w9s\" (UID: \"c884aee4-c794-4386-a6d6-cd81b54b33d9\") " pod="openshift-marketplace/community-operators-54w9s" Feb 17 16:38:37 crc kubenswrapper[4672]: I0217 16:38:37.554838 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c884aee4-c794-4386-a6d6-cd81b54b33d9-utilities\") pod \"community-operators-54w9s\" (UID: \"c884aee4-c794-4386-a6d6-cd81b54b33d9\") " pod="openshift-marketplace/community-operators-54w9s" Feb 17 16:38:37 crc kubenswrapper[4672]: I0217 16:38:37.656869 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9hn2\" (UniqueName: \"kubernetes.io/projected/c884aee4-c794-4386-a6d6-cd81b54b33d9-kube-api-access-m9hn2\") pod \"community-operators-54w9s\" (UID: \"c884aee4-c794-4386-a6d6-cd81b54b33d9\") " pod="openshift-marketplace/community-operators-54w9s" Feb 17 16:38:37 crc kubenswrapper[4672]: I0217 
16:38:37.656990 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c884aee4-c794-4386-a6d6-cd81b54b33d9-catalog-content\") pod \"community-operators-54w9s\" (UID: \"c884aee4-c794-4386-a6d6-cd81b54b33d9\") " pod="openshift-marketplace/community-operators-54w9s" Feb 17 16:38:37 crc kubenswrapper[4672]: I0217 16:38:37.657083 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c884aee4-c794-4386-a6d6-cd81b54b33d9-utilities\") pod \"community-operators-54w9s\" (UID: \"c884aee4-c794-4386-a6d6-cd81b54b33d9\") " pod="openshift-marketplace/community-operators-54w9s" Feb 17 16:38:37 crc kubenswrapper[4672]: I0217 16:38:37.657410 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c884aee4-c794-4386-a6d6-cd81b54b33d9-catalog-content\") pod \"community-operators-54w9s\" (UID: \"c884aee4-c794-4386-a6d6-cd81b54b33d9\") " pod="openshift-marketplace/community-operators-54w9s" Feb 17 16:38:37 crc kubenswrapper[4672]: I0217 16:38:37.657430 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c884aee4-c794-4386-a6d6-cd81b54b33d9-utilities\") pod \"community-operators-54w9s\" (UID: \"c884aee4-c794-4386-a6d6-cd81b54b33d9\") " pod="openshift-marketplace/community-operators-54w9s" Feb 17 16:38:37 crc kubenswrapper[4672]: I0217 16:38:37.689760 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9hn2\" (UniqueName: \"kubernetes.io/projected/c884aee4-c794-4386-a6d6-cd81b54b33d9-kube-api-access-m9hn2\") pod \"community-operators-54w9s\" (UID: \"c884aee4-c794-4386-a6d6-cd81b54b33d9\") " pod="openshift-marketplace/community-operators-54w9s" Feb 17 16:38:37 crc kubenswrapper[4672]: I0217 16:38:37.812827 4672 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-54w9s" Feb 17 16:38:38 crc kubenswrapper[4672]: I0217 16:38:38.421404 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-54w9s"] Feb 17 16:38:38 crc kubenswrapper[4672]: W0217 16:38:38.434317 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc884aee4_c794_4386_a6d6_cd81b54b33d9.slice/crio-4cc164701603d596a18545d302cb3e3df2edd393174b8f86058608f8b47f870e WatchSource:0}: Error finding container 4cc164701603d596a18545d302cb3e3df2edd393174b8f86058608f8b47f870e: Status 404 returned error can't find the container with id 4cc164701603d596a18545d302cb3e3df2edd393174b8f86058608f8b47f870e Feb 17 16:38:39 crc kubenswrapper[4672]: I0217 16:38:39.266548 4672 generic.go:334] "Generic (PLEG): container finished" podID="c884aee4-c794-4386-a6d6-cd81b54b33d9" containerID="36eb9d58f76b1768059bbf640925b9874ddde58bdd7d68ae073231cd619cdcc7" exitCode=0 Feb 17 16:38:39 crc kubenswrapper[4672]: I0217 16:38:39.266946 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54w9s" event={"ID":"c884aee4-c794-4386-a6d6-cd81b54b33d9","Type":"ContainerDied","Data":"36eb9d58f76b1768059bbf640925b9874ddde58bdd7d68ae073231cd619cdcc7"} Feb 17 16:38:39 crc kubenswrapper[4672]: I0217 16:38:39.266984 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54w9s" event={"ID":"c884aee4-c794-4386-a6d6-cd81b54b33d9","Type":"ContainerStarted","Data":"4cc164701603d596a18545d302cb3e3df2edd393174b8f86058608f8b47f870e"} Feb 17 16:38:40 crc kubenswrapper[4672]: I0217 16:38:40.280572 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54w9s" 
event={"ID":"c884aee4-c794-4386-a6d6-cd81b54b33d9","Type":"ContainerStarted","Data":"cd02753d63a7a9cc1ad64ec1fd3cb7a846dd709f095ee4ea8cf25ad06f33a67f"} Feb 17 16:38:42 crc kubenswrapper[4672]: I0217 16:38:42.308492 4672 generic.go:334] "Generic (PLEG): container finished" podID="c884aee4-c794-4386-a6d6-cd81b54b33d9" containerID="cd02753d63a7a9cc1ad64ec1fd3cb7a846dd709f095ee4ea8cf25ad06f33a67f" exitCode=0 Feb 17 16:38:42 crc kubenswrapper[4672]: I0217 16:38:42.308606 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54w9s" event={"ID":"c884aee4-c794-4386-a6d6-cd81b54b33d9","Type":"ContainerDied","Data":"cd02753d63a7a9cc1ad64ec1fd3cb7a846dd709f095ee4ea8cf25ad06f33a67f"} Feb 17 16:38:43 crc kubenswrapper[4672]: I0217 16:38:43.323325 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54w9s" event={"ID":"c884aee4-c794-4386-a6d6-cd81b54b33d9","Type":"ContainerStarted","Data":"1c795bc60e22a2c3655c2420ad11b7ec87b243390a2eb6df70131964cdce2257"} Feb 17 16:38:43 crc kubenswrapper[4672]: I0217 16:38:43.345969 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-54w9s" podStartSLOduration=2.908561331 podStartE2EDuration="6.345950104s" podCreationTimestamp="2026-02-17 16:38:37 +0000 UTC" firstStartedPulling="2026-02-17 16:38:39.26921291 +0000 UTC m=+2128.023301642" lastFinishedPulling="2026-02-17 16:38:42.706601673 +0000 UTC m=+2131.460690415" observedRunningTime="2026-02-17 16:38:43.342908894 +0000 UTC m=+2132.096997626" watchObservedRunningTime="2026-02-17 16:38:43.345950104 +0000 UTC m=+2132.100038846" Feb 17 16:38:44 crc kubenswrapper[4672]: E0217 16:38:44.947563 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:38:47 crc kubenswrapper[4672]: I0217 16:38:47.813249 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-54w9s" Feb 17 16:38:47 crc kubenswrapper[4672]: I0217 16:38:47.813317 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-54w9s" Feb 17 16:38:47 crc kubenswrapper[4672]: I0217 16:38:47.868056 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-54w9s" Feb 17 16:38:47 crc kubenswrapper[4672]: E0217 16:38:47.949572 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:38:48 crc kubenswrapper[4672]: I0217 16:38:48.431257 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-54w9s" Feb 17 16:38:48 crc kubenswrapper[4672]: I0217 16:38:48.481330 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-54w9s"] Feb 17 16:38:50 crc kubenswrapper[4672]: I0217 16:38:50.394354 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-54w9s" podUID="c884aee4-c794-4386-a6d6-cd81b54b33d9" containerName="registry-server" containerID="cri-o://1c795bc60e22a2c3655c2420ad11b7ec87b243390a2eb6df70131964cdce2257" gracePeriod=2 Feb 17 16:38:51 crc kubenswrapper[4672]: I0217 16:38:51.083078 4672 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-54w9s" Feb 17 16:38:51 crc kubenswrapper[4672]: I0217 16:38:51.190788 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c884aee4-c794-4386-a6d6-cd81b54b33d9-utilities\") pod \"c884aee4-c794-4386-a6d6-cd81b54b33d9\" (UID: \"c884aee4-c794-4386-a6d6-cd81b54b33d9\") " Feb 17 16:38:51 crc kubenswrapper[4672]: I0217 16:38:51.190982 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c884aee4-c794-4386-a6d6-cd81b54b33d9-catalog-content\") pod \"c884aee4-c794-4386-a6d6-cd81b54b33d9\" (UID: \"c884aee4-c794-4386-a6d6-cd81b54b33d9\") " Feb 17 16:38:51 crc kubenswrapper[4672]: I0217 16:38:51.191202 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9hn2\" (UniqueName: \"kubernetes.io/projected/c884aee4-c794-4386-a6d6-cd81b54b33d9-kube-api-access-m9hn2\") pod \"c884aee4-c794-4386-a6d6-cd81b54b33d9\" (UID: \"c884aee4-c794-4386-a6d6-cd81b54b33d9\") " Feb 17 16:38:51 crc kubenswrapper[4672]: I0217 16:38:51.193459 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c884aee4-c794-4386-a6d6-cd81b54b33d9-utilities" (OuterVolumeSpecName: "utilities") pod "c884aee4-c794-4386-a6d6-cd81b54b33d9" (UID: "c884aee4-c794-4386-a6d6-cd81b54b33d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:38:51 crc kubenswrapper[4672]: I0217 16:38:51.202606 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c884aee4-c794-4386-a6d6-cd81b54b33d9-kube-api-access-m9hn2" (OuterVolumeSpecName: "kube-api-access-m9hn2") pod "c884aee4-c794-4386-a6d6-cd81b54b33d9" (UID: "c884aee4-c794-4386-a6d6-cd81b54b33d9"). InnerVolumeSpecName "kube-api-access-m9hn2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:38:51 crc kubenswrapper[4672]: I0217 16:38:51.296591 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c884aee4-c794-4386-a6d6-cd81b54b33d9-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:38:51 crc kubenswrapper[4672]: I0217 16:38:51.296646 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9hn2\" (UniqueName: \"kubernetes.io/projected/c884aee4-c794-4386-a6d6-cd81b54b33d9-kube-api-access-m9hn2\") on node \"crc\" DevicePath \"\"" Feb 17 16:38:51 crc kubenswrapper[4672]: I0217 16:38:51.297305 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c884aee4-c794-4386-a6d6-cd81b54b33d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c884aee4-c794-4386-a6d6-cd81b54b33d9" (UID: "c884aee4-c794-4386-a6d6-cd81b54b33d9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:38:51 crc kubenswrapper[4672]: I0217 16:38:51.399262 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c884aee4-c794-4386-a6d6-cd81b54b33d9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:38:51 crc kubenswrapper[4672]: I0217 16:38:51.407461 4672 generic.go:334] "Generic (PLEG): container finished" podID="c884aee4-c794-4386-a6d6-cd81b54b33d9" containerID="1c795bc60e22a2c3655c2420ad11b7ec87b243390a2eb6df70131964cdce2257" exitCode=0 Feb 17 16:38:51 crc kubenswrapper[4672]: I0217 16:38:51.407523 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54w9s" event={"ID":"c884aee4-c794-4386-a6d6-cd81b54b33d9","Type":"ContainerDied","Data":"1c795bc60e22a2c3655c2420ad11b7ec87b243390a2eb6df70131964cdce2257"} Feb 17 16:38:51 crc kubenswrapper[4672]: I0217 16:38:51.407565 4672 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-54w9s" event={"ID":"c884aee4-c794-4386-a6d6-cd81b54b33d9","Type":"ContainerDied","Data":"4cc164701603d596a18545d302cb3e3df2edd393174b8f86058608f8b47f870e"} Feb 17 16:38:51 crc kubenswrapper[4672]: I0217 16:38:51.407587 4672 scope.go:117] "RemoveContainer" containerID="1c795bc60e22a2c3655c2420ad11b7ec87b243390a2eb6df70131964cdce2257" Feb 17 16:38:51 crc kubenswrapper[4672]: I0217 16:38:51.408122 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-54w9s" Feb 17 16:38:51 crc kubenswrapper[4672]: I0217 16:38:51.451601 4672 scope.go:117] "RemoveContainer" containerID="cd02753d63a7a9cc1ad64ec1fd3cb7a846dd709f095ee4ea8cf25ad06f33a67f" Feb 17 16:38:51 crc kubenswrapper[4672]: I0217 16:38:51.458408 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-54w9s"] Feb 17 16:38:51 crc kubenswrapper[4672]: I0217 16:38:51.470835 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-54w9s"] Feb 17 16:38:51 crc kubenswrapper[4672]: I0217 16:38:51.485713 4672 scope.go:117] "RemoveContainer" containerID="36eb9d58f76b1768059bbf640925b9874ddde58bdd7d68ae073231cd619cdcc7" Feb 17 16:38:51 crc kubenswrapper[4672]: I0217 16:38:51.548787 4672 scope.go:117] "RemoveContainer" containerID="1c795bc60e22a2c3655c2420ad11b7ec87b243390a2eb6df70131964cdce2257" Feb 17 16:38:51 crc kubenswrapper[4672]: E0217 16:38:51.549415 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c795bc60e22a2c3655c2420ad11b7ec87b243390a2eb6df70131964cdce2257\": container with ID starting with 1c795bc60e22a2c3655c2420ad11b7ec87b243390a2eb6df70131964cdce2257 not found: ID does not exist" containerID="1c795bc60e22a2c3655c2420ad11b7ec87b243390a2eb6df70131964cdce2257" Feb 17 16:38:51 crc kubenswrapper[4672]: I0217 
16:38:51.549466 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c795bc60e22a2c3655c2420ad11b7ec87b243390a2eb6df70131964cdce2257"} err="failed to get container status \"1c795bc60e22a2c3655c2420ad11b7ec87b243390a2eb6df70131964cdce2257\": rpc error: code = NotFound desc = could not find container \"1c795bc60e22a2c3655c2420ad11b7ec87b243390a2eb6df70131964cdce2257\": container with ID starting with 1c795bc60e22a2c3655c2420ad11b7ec87b243390a2eb6df70131964cdce2257 not found: ID does not exist" Feb 17 16:38:51 crc kubenswrapper[4672]: I0217 16:38:51.549501 4672 scope.go:117] "RemoveContainer" containerID="cd02753d63a7a9cc1ad64ec1fd3cb7a846dd709f095ee4ea8cf25ad06f33a67f" Feb 17 16:38:51 crc kubenswrapper[4672]: E0217 16:38:51.550104 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd02753d63a7a9cc1ad64ec1fd3cb7a846dd709f095ee4ea8cf25ad06f33a67f\": container with ID starting with cd02753d63a7a9cc1ad64ec1fd3cb7a846dd709f095ee4ea8cf25ad06f33a67f not found: ID does not exist" containerID="cd02753d63a7a9cc1ad64ec1fd3cb7a846dd709f095ee4ea8cf25ad06f33a67f" Feb 17 16:38:51 crc kubenswrapper[4672]: I0217 16:38:51.550156 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd02753d63a7a9cc1ad64ec1fd3cb7a846dd709f095ee4ea8cf25ad06f33a67f"} err="failed to get container status \"cd02753d63a7a9cc1ad64ec1fd3cb7a846dd709f095ee4ea8cf25ad06f33a67f\": rpc error: code = NotFound desc = could not find container \"cd02753d63a7a9cc1ad64ec1fd3cb7a846dd709f095ee4ea8cf25ad06f33a67f\": container with ID starting with cd02753d63a7a9cc1ad64ec1fd3cb7a846dd709f095ee4ea8cf25ad06f33a67f not found: ID does not exist" Feb 17 16:38:51 crc kubenswrapper[4672]: I0217 16:38:51.550190 4672 scope.go:117] "RemoveContainer" containerID="36eb9d58f76b1768059bbf640925b9874ddde58bdd7d68ae073231cd619cdcc7" Feb 17 16:38:51 crc 
kubenswrapper[4672]: E0217 16:38:51.550559 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36eb9d58f76b1768059bbf640925b9874ddde58bdd7d68ae073231cd619cdcc7\": container with ID starting with 36eb9d58f76b1768059bbf640925b9874ddde58bdd7d68ae073231cd619cdcc7 not found: ID does not exist" containerID="36eb9d58f76b1768059bbf640925b9874ddde58bdd7d68ae073231cd619cdcc7" Feb 17 16:38:51 crc kubenswrapper[4672]: I0217 16:38:51.550582 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36eb9d58f76b1768059bbf640925b9874ddde58bdd7d68ae073231cd619cdcc7"} err="failed to get container status \"36eb9d58f76b1768059bbf640925b9874ddde58bdd7d68ae073231cd619cdcc7\": rpc error: code = NotFound desc = could not find container \"36eb9d58f76b1768059bbf640925b9874ddde58bdd7d68ae073231cd619cdcc7\": container with ID starting with 36eb9d58f76b1768059bbf640925b9874ddde58bdd7d68ae073231cd619cdcc7 not found: ID does not exist" Feb 17 16:38:51 crc kubenswrapper[4672]: I0217 16:38:51.959913 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c884aee4-c794-4386-a6d6-cd81b54b33d9" path="/var/lib/kubelet/pods/c884aee4-c794-4386-a6d6-cd81b54b33d9/volumes" Feb 17 16:38:55 crc kubenswrapper[4672]: E0217 16:38:55.947921 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:38:59 crc kubenswrapper[4672]: E0217 16:38:59.949552 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:39:10 crc kubenswrapper[4672]: E0217 16:39:10.947893 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:39:12 crc kubenswrapper[4672]: E0217 16:39:12.948255 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:39:24 crc kubenswrapper[4672]: E0217 16:39:24.948189 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:39:24 crc kubenswrapper[4672]: E0217 16:39:24.948261 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:39:36 crc kubenswrapper[4672]: E0217 16:39:36.947863 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:39:38 crc kubenswrapper[4672]: E0217 16:39:38.946304 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:39:47 crc kubenswrapper[4672]: I0217 16:39:47.947068 4672 generic.go:334] "Generic (PLEG): container finished" podID="945f70cb-9394-43c9-b44c-c6ef7d021f78" containerID="afcadeadafefb4a59bc9a152fcc109de13de1238df4c19268fa5eda8a80f60e7" exitCode=2 Feb 17 16:39:47 crc kubenswrapper[4672]: I0217 16:39:47.960682 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nhgw2" event={"ID":"945f70cb-9394-43c9-b44c-c6ef7d021f78","Type":"ContainerDied","Data":"afcadeadafefb4a59bc9a152fcc109de13de1238df4c19268fa5eda8a80f60e7"} Feb 17 16:39:48 crc kubenswrapper[4672]: E0217 16:39:48.948597 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:39:49 crc kubenswrapper[4672]: I0217 16:39:49.611890 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nhgw2" Feb 17 16:39:49 crc kubenswrapper[4672]: I0217 16:39:49.719374 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/945f70cb-9394-43c9-b44c-c6ef7d021f78-inventory\") pod \"945f70cb-9394-43c9-b44c-c6ef7d021f78\" (UID: \"945f70cb-9394-43c9-b44c-c6ef7d021f78\") " Feb 17 16:39:49 crc kubenswrapper[4672]: I0217 16:39:49.719866 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/945f70cb-9394-43c9-b44c-c6ef7d021f78-ssh-key-openstack-edpm-ipam\") pod \"945f70cb-9394-43c9-b44c-c6ef7d021f78\" (UID: \"945f70cb-9394-43c9-b44c-c6ef7d021f78\") " Feb 17 16:39:49 crc kubenswrapper[4672]: I0217 16:39:49.719972 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wtgn\" (UniqueName: \"kubernetes.io/projected/945f70cb-9394-43c9-b44c-c6ef7d021f78-kube-api-access-9wtgn\") pod \"945f70cb-9394-43c9-b44c-c6ef7d021f78\" (UID: \"945f70cb-9394-43c9-b44c-c6ef7d021f78\") " Feb 17 16:39:49 crc kubenswrapper[4672]: I0217 16:39:49.727609 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/945f70cb-9394-43c9-b44c-c6ef7d021f78-kube-api-access-9wtgn" (OuterVolumeSpecName: "kube-api-access-9wtgn") pod "945f70cb-9394-43c9-b44c-c6ef7d021f78" (UID: "945f70cb-9394-43c9-b44c-c6ef7d021f78"). InnerVolumeSpecName "kube-api-access-9wtgn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:39:49 crc kubenswrapper[4672]: I0217 16:39:49.749955 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/945f70cb-9394-43c9-b44c-c6ef7d021f78-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "945f70cb-9394-43c9-b44c-c6ef7d021f78" (UID: "945f70cb-9394-43c9-b44c-c6ef7d021f78"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:39:49 crc kubenswrapper[4672]: I0217 16:39:49.762965 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/945f70cb-9394-43c9-b44c-c6ef7d021f78-inventory" (OuterVolumeSpecName: "inventory") pod "945f70cb-9394-43c9-b44c-c6ef7d021f78" (UID: "945f70cb-9394-43c9-b44c-c6ef7d021f78"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:39:49 crc kubenswrapper[4672]: I0217 16:39:49.822150 4672 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/945f70cb-9394-43c9-b44c-c6ef7d021f78-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 16:39:49 crc kubenswrapper[4672]: I0217 16:39:49.822196 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wtgn\" (UniqueName: \"kubernetes.io/projected/945f70cb-9394-43c9-b44c-c6ef7d021f78-kube-api-access-9wtgn\") on node \"crc\" DevicePath \"\"" Feb 17 16:39:49 crc kubenswrapper[4672]: I0217 16:39:49.822207 4672 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/945f70cb-9394-43c9-b44c-c6ef7d021f78-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 16:39:49 crc kubenswrapper[4672]: I0217 16:39:49.964459 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nhgw2" 
event={"ID":"945f70cb-9394-43c9-b44c-c6ef7d021f78","Type":"ContainerDied","Data":"28b0c2116c9048d5da069e8949a5fd6d345fa66e8cd2749952060405fd03ba7e"} Feb 17 16:39:49 crc kubenswrapper[4672]: I0217 16:39:49.964502 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28b0c2116c9048d5da069e8949a5fd6d345fa66e8cd2749952060405fd03ba7e" Feb 17 16:39:49 crc kubenswrapper[4672]: I0217 16:39:49.964566 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nhgw2" Feb 17 16:39:51 crc kubenswrapper[4672]: E0217 16:39:51.955072 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:39:57 crc kubenswrapper[4672]: I0217 16:39:57.030830 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j4qcj"] Feb 17 16:39:57 crc kubenswrapper[4672]: E0217 16:39:57.033317 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c884aee4-c794-4386-a6d6-cd81b54b33d9" containerName="registry-server" Feb 17 16:39:57 crc kubenswrapper[4672]: I0217 16:39:57.033489 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="c884aee4-c794-4386-a6d6-cd81b54b33d9" containerName="registry-server" Feb 17 16:39:57 crc kubenswrapper[4672]: E0217 16:39:57.033681 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="945f70cb-9394-43c9-b44c-c6ef7d021f78" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:39:57 crc kubenswrapper[4672]: I0217 16:39:57.033823 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="945f70cb-9394-43c9-b44c-c6ef7d021f78" 
containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:39:57 crc kubenswrapper[4672]: E0217 16:39:57.034009 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c884aee4-c794-4386-a6d6-cd81b54b33d9" containerName="extract-utilities" Feb 17 16:39:57 crc kubenswrapper[4672]: I0217 16:39:57.034229 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="c884aee4-c794-4386-a6d6-cd81b54b33d9" containerName="extract-utilities" Feb 17 16:39:57 crc kubenswrapper[4672]: E0217 16:39:57.034556 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c884aee4-c794-4386-a6d6-cd81b54b33d9" containerName="extract-content" Feb 17 16:39:57 crc kubenswrapper[4672]: I0217 16:39:57.034749 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="c884aee4-c794-4386-a6d6-cd81b54b33d9" containerName="extract-content" Feb 17 16:39:57 crc kubenswrapper[4672]: I0217 16:39:57.035656 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="c884aee4-c794-4386-a6d6-cd81b54b33d9" containerName="registry-server" Feb 17 16:39:57 crc kubenswrapper[4672]: I0217 16:39:57.035882 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="945f70cb-9394-43c9-b44c-c6ef7d021f78" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:39:57 crc kubenswrapper[4672]: I0217 16:39:57.037679 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j4qcj" Feb 17 16:39:57 crc kubenswrapper[4672]: I0217 16:39:57.040817 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 16:39:57 crc kubenswrapper[4672]: I0217 16:39:57.040857 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j4qcj"] Feb 17 16:39:57 crc kubenswrapper[4672]: I0217 16:39:57.040877 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 16:39:57 crc kubenswrapper[4672]: I0217 16:39:57.042793 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z6sng" Feb 17 16:39:57 crc kubenswrapper[4672]: I0217 16:39:57.050302 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 16:39:57 crc kubenswrapper[4672]: I0217 16:39:57.076214 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e25af450-196c-4035-96b6-5148862bca0d-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-j4qcj\" (UID: \"e25af450-196c-4035-96b6-5148862bca0d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j4qcj" Feb 17 16:39:57 crc kubenswrapper[4672]: I0217 16:39:57.076260 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e25af450-196c-4035-96b6-5148862bca0d-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-j4qcj\" (UID: \"e25af450-196c-4035-96b6-5148862bca0d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j4qcj" Feb 17 16:39:57 crc kubenswrapper[4672]: I0217 
16:39:57.076307 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqknt\" (UniqueName: \"kubernetes.io/projected/e25af450-196c-4035-96b6-5148862bca0d-kube-api-access-lqknt\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-j4qcj\" (UID: \"e25af450-196c-4035-96b6-5148862bca0d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j4qcj" Feb 17 16:39:57 crc kubenswrapper[4672]: I0217 16:39:57.178911 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e25af450-196c-4035-96b6-5148862bca0d-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-j4qcj\" (UID: \"e25af450-196c-4035-96b6-5148862bca0d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j4qcj" Feb 17 16:39:57 crc kubenswrapper[4672]: I0217 16:39:57.178970 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e25af450-196c-4035-96b6-5148862bca0d-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-j4qcj\" (UID: \"e25af450-196c-4035-96b6-5148862bca0d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j4qcj" Feb 17 16:39:57 crc kubenswrapper[4672]: I0217 16:39:57.179045 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqknt\" (UniqueName: \"kubernetes.io/projected/e25af450-196c-4035-96b6-5148862bca0d-kube-api-access-lqknt\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-j4qcj\" (UID: \"e25af450-196c-4035-96b6-5148862bca0d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j4qcj" Feb 17 16:39:57 crc kubenswrapper[4672]: I0217 16:39:57.186174 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/e25af450-196c-4035-96b6-5148862bca0d-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-j4qcj\" (UID: \"e25af450-196c-4035-96b6-5148862bca0d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j4qcj" Feb 17 16:39:57 crc kubenswrapper[4672]: I0217 16:39:57.186195 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e25af450-196c-4035-96b6-5148862bca0d-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-j4qcj\" (UID: \"e25af450-196c-4035-96b6-5148862bca0d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j4qcj" Feb 17 16:39:57 crc kubenswrapper[4672]: I0217 16:39:57.198665 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqknt\" (UniqueName: \"kubernetes.io/projected/e25af450-196c-4035-96b6-5148862bca0d-kube-api-access-lqknt\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-j4qcj\" (UID: \"e25af450-196c-4035-96b6-5148862bca0d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j4qcj" Feb 17 16:39:57 crc kubenswrapper[4672]: I0217 16:39:57.376641 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j4qcj" Feb 17 16:39:57 crc kubenswrapper[4672]: I0217 16:39:57.567959 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:39:57 crc kubenswrapper[4672]: I0217 16:39:57.568260 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:39:58 crc kubenswrapper[4672]: I0217 16:39:58.036496 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j4qcj"] Feb 17 16:39:58 crc kubenswrapper[4672]: W0217 16:39:58.051176 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode25af450_196c_4035_96b6_5148862bca0d.slice/crio-46d3733ef6e81d2e575a240680b63d91bbd9c9624c9fa68735daa460a513e11c WatchSource:0}: Error finding container 46d3733ef6e81d2e575a240680b63d91bbd9c9624c9fa68735daa460a513e11c: Status 404 returned error can't find the container with id 46d3733ef6e81d2e575a240680b63d91bbd9c9624c9fa68735daa460a513e11c Feb 17 16:39:59 crc kubenswrapper[4672]: I0217 16:39:59.062040 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j4qcj" event={"ID":"e25af450-196c-4035-96b6-5148862bca0d","Type":"ContainerStarted","Data":"770710319f243745b2cdb557bb7dea33df4b0a20f2d45038251e0f4510b817b0"} Feb 17 16:39:59 crc kubenswrapper[4672]: I0217 16:39:59.062652 4672 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j4qcj" event={"ID":"e25af450-196c-4035-96b6-5148862bca0d","Type":"ContainerStarted","Data":"46d3733ef6e81d2e575a240680b63d91bbd9c9624c9fa68735daa460a513e11c"} Feb 17 16:39:59 crc kubenswrapper[4672]: I0217 16:39:59.087636 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j4qcj" podStartSLOduration=1.452291286 podStartE2EDuration="2.087615062s" podCreationTimestamp="2026-02-17 16:39:57 +0000 UTC" firstStartedPulling="2026-02-17 16:39:58.05454606 +0000 UTC m=+2206.808634792" lastFinishedPulling="2026-02-17 16:39:58.689869826 +0000 UTC m=+2207.443958568" observedRunningTime="2026-02-17 16:39:59.081285135 +0000 UTC m=+2207.835373867" watchObservedRunningTime="2026-02-17 16:39:59.087615062 +0000 UTC m=+2207.841703794" Feb 17 16:40:02 crc kubenswrapper[4672]: E0217 16:40:02.947170 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:40:05 crc kubenswrapper[4672]: E0217 16:40:05.948509 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:40:13 crc kubenswrapper[4672]: E0217 16:40:13.948311 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:40:17 crc kubenswrapper[4672]: E0217 16:40:17.947058 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:40:27 crc kubenswrapper[4672]: I0217 16:40:27.565739 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:40:27 crc kubenswrapper[4672]: I0217 16:40:27.566319 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:40:27 crc kubenswrapper[4672]: E0217 16:40:27.946742 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:40:30 crc kubenswrapper[4672]: E0217 16:40:30.948359 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:40:38 crc kubenswrapper[4672]: E0217 16:40:38.947467 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:40:43 crc kubenswrapper[4672]: E0217 16:40:43.946661 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:40:52 crc kubenswrapper[4672]: E0217 16:40:52.947661 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:40:55 crc kubenswrapper[4672]: E0217 16:40:55.967339 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:40:57 crc kubenswrapper[4672]: I0217 16:40:57.566120 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:40:57 crc kubenswrapper[4672]: I0217 16:40:57.566598 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:40:57 crc kubenswrapper[4672]: I0217 16:40:57.566657 4672 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" Feb 17 16:40:57 crc kubenswrapper[4672]: I0217 16:40:57.567360 4672 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5ec360b5c785e82bf42002bb2ec43e9b549142da918f8b7cc88ceed207ebfec1"} pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:40:57 crc kubenswrapper[4672]: I0217 16:40:57.567422 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" containerID="cri-o://5ec360b5c785e82bf42002bb2ec43e9b549142da918f8b7cc88ceed207ebfec1" gracePeriod=600 Feb 17 16:40:57 crc kubenswrapper[4672]: E0217 16:40:57.694264 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:40:58 crc kubenswrapper[4672]: I0217 16:40:58.639932 4672 generic.go:334] "Generic (PLEG): container finished" podID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerID="5ec360b5c785e82bf42002bb2ec43e9b549142da918f8b7cc88ceed207ebfec1" exitCode=0 Feb 17 16:40:58 crc kubenswrapper[4672]: I0217 16:40:58.640001 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerDied","Data":"5ec360b5c785e82bf42002bb2ec43e9b549142da918f8b7cc88ceed207ebfec1"} Feb 17 16:40:58 crc kubenswrapper[4672]: I0217 16:40:58.640207 4672 scope.go:117] "RemoveContainer" containerID="158298a2a36ba607ce0910ea7f9e6b7d51481499aa0d19f04c8d953ba6d1effc" Feb 17 16:40:58 crc kubenswrapper[4672]: I0217 16:40:58.640866 4672 scope.go:117] "RemoveContainer" containerID="5ec360b5c785e82bf42002bb2ec43e9b549142da918f8b7cc88ceed207ebfec1" Feb 17 16:40:58 crc kubenswrapper[4672]: E0217 16:40:58.641115 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:41:07 crc kubenswrapper[4672]: E0217 16:41:07.947415 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:41:08 crc 
kubenswrapper[4672]: E0217 16:41:08.947900 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:41:13 crc kubenswrapper[4672]: I0217 16:41:13.946105 4672 scope.go:117] "RemoveContainer" containerID="5ec360b5c785e82bf42002bb2ec43e9b549142da918f8b7cc88ceed207ebfec1" Feb 17 16:41:13 crc kubenswrapper[4672]: E0217 16:41:13.947037 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:41:20 crc kubenswrapper[4672]: E0217 16:41:20.948140 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:41:21 crc kubenswrapper[4672]: E0217 16:41:21.964069 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:41:27 crc kubenswrapper[4672]: I0217 16:41:27.946065 4672 scope.go:117] "RemoveContainer" 
containerID="5ec360b5c785e82bf42002bb2ec43e9b549142da918f8b7cc88ceed207ebfec1" Feb 17 16:41:27 crc kubenswrapper[4672]: E0217 16:41:27.946987 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:41:32 crc kubenswrapper[4672]: E0217 16:41:32.948655 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:41:35 crc kubenswrapper[4672]: E0217 16:41:35.947319 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:41:42 crc kubenswrapper[4672]: I0217 16:41:42.945194 4672 scope.go:117] "RemoveContainer" containerID="5ec360b5c785e82bf42002bb2ec43e9b549142da918f8b7cc88ceed207ebfec1" Feb 17 16:41:42 crc kubenswrapper[4672]: E0217 16:41:42.946017 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:41:44 crc kubenswrapper[4672]: E0217 16:41:44.950440 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:41:47 crc kubenswrapper[4672]: E0217 16:41:47.947844 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:41:55 crc kubenswrapper[4672]: I0217 16:41:55.945290 4672 scope.go:117] "RemoveContainer" containerID="5ec360b5c785e82bf42002bb2ec43e9b549142da918f8b7cc88ceed207ebfec1" Feb 17 16:41:55 crc kubenswrapper[4672]: E0217 16:41:55.946317 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:41:58 crc kubenswrapper[4672]: E0217 16:41:58.948520 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" 
podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:41:59 crc kubenswrapper[4672]: E0217 16:41:59.948466 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:42:08 crc kubenswrapper[4672]: I0217 16:42:08.944953 4672 scope.go:117] "RemoveContainer" containerID="5ec360b5c785e82bf42002bb2ec43e9b549142da918f8b7cc88ceed207ebfec1" Feb 17 16:42:08 crc kubenswrapper[4672]: E0217 16:42:08.945748 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:42:11 crc kubenswrapper[4672]: E0217 16:42:11.970342 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:42:11 crc kubenswrapper[4672]: E0217 16:42:11.970541 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:42:22 crc kubenswrapper[4672]: E0217 
16:42:22.947060 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:42:23 crc kubenswrapper[4672]: I0217 16:42:23.945803 4672 scope.go:117] "RemoveContainer" containerID="5ec360b5c785e82bf42002bb2ec43e9b549142da918f8b7cc88ceed207ebfec1" Feb 17 16:42:23 crc kubenswrapper[4672]: E0217 16:42:23.946694 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:42:26 crc kubenswrapper[4672]: E0217 16:42:26.947435 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:42:36 crc kubenswrapper[4672]: E0217 16:42:36.948315 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:42:37 crc kubenswrapper[4672]: I0217 16:42:37.945442 4672 scope.go:117] "RemoveContainer" 
containerID="5ec360b5c785e82bf42002bb2ec43e9b549142da918f8b7cc88ceed207ebfec1" Feb 17 16:42:37 crc kubenswrapper[4672]: E0217 16:42:37.946056 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:42:38 crc kubenswrapper[4672]: E0217 16:42:38.946903 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:42:47 crc kubenswrapper[4672]: E0217 16:42:47.947066 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:42:49 crc kubenswrapper[4672]: I0217 16:42:49.945915 4672 scope.go:117] "RemoveContainer" containerID="5ec360b5c785e82bf42002bb2ec43e9b549142da918f8b7cc88ceed207ebfec1" Feb 17 16:42:49 crc kubenswrapper[4672]: E0217 16:42:49.948701 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:42:49 
crc kubenswrapper[4672]: E0217 16:42:49.949007 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:43:01 crc kubenswrapper[4672]: E0217 16:43:01.960045 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:43:02 crc kubenswrapper[4672]: I0217 16:43:02.945736 4672 scope.go:117] "RemoveContainer" containerID="5ec360b5c785e82bf42002bb2ec43e9b549142da918f8b7cc88ceed207ebfec1" Feb 17 16:43:02 crc kubenswrapper[4672]: E0217 16:43:02.946359 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:43:04 crc kubenswrapper[4672]: I0217 16:43:04.949431 4672 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:43:05 crc kubenswrapper[4672]: E0217 16:43:05.090093 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading 
manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:43:05 crc kubenswrapper[4672]: E0217 16:43:05.090171 4672 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:43:05 crc kubenswrapper[4672]: E0217 16:43:05.090326 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nq9ps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:
false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-qrhj8_openstack(dc5471f5-2491-4841-be45-09c8f14b35c0): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:43:05 crc kubenswrapper[4672]: E0217 16:43:05.091561 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:43:14 crc kubenswrapper[4672]: I0217 16:43:14.945817 4672 scope.go:117] "RemoveContainer" containerID="5ec360b5c785e82bf42002bb2ec43e9b549142da918f8b7cc88ceed207ebfec1" Feb 17 16:43:14 crc kubenswrapper[4672]: E0217 16:43:14.946536 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:43:15 crc kubenswrapper[4672]: E0217 16:43:15.949229 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:43:17 crc kubenswrapper[4672]: E0217 16:43:17.948319 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:43:27 crc kubenswrapper[4672]: I0217 16:43:27.945699 4672 scope.go:117] "RemoveContainer" containerID="5ec360b5c785e82bf42002bb2ec43e9b549142da918f8b7cc88ceed207ebfec1" Feb 17 16:43:27 crc kubenswrapper[4672]: E0217 16:43:27.946813 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:43:28 crc kubenswrapper[4672]: E0217 16:43:28.951432 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:43:31 crc kubenswrapper[4672]: E0217 16:43:31.077047 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:43:31 crc kubenswrapper[4672]: E0217 16:43:31.078276 4672 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:43:31 crc kubenswrapper[4672]: E0217 16:43:31.078677 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66h7h644h64ch5f8h565hfch5dh56chfdh8hfdh5b5h567h6dh665h557h74h665hcbh96h659h554h589h57fh5d9h55h564hcfh5dhffhfdq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tx4bs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(9e58ce9b-ddd5-42bb-8e07-08a22c8871a5): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:43:31 crc kubenswrapper[4672]: E0217 16:43:31.080130 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:43:42 crc kubenswrapper[4672]: I0217 16:43:42.945359 4672 scope.go:117] "RemoveContainer" containerID="5ec360b5c785e82bf42002bb2ec43e9b549142da918f8b7cc88ceed207ebfec1" Feb 17 16:43:42 crc kubenswrapper[4672]: E0217 16:43:42.946239 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:43:43 crc kubenswrapper[4672]: E0217 16:43:43.950757 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:43:45 crc kubenswrapper[4672]: E0217 16:43:45.947069 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:43:53 crc kubenswrapper[4672]: I0217 16:43:53.945794 4672 scope.go:117] "RemoveContainer" containerID="5ec360b5c785e82bf42002bb2ec43e9b549142da918f8b7cc88ceed207ebfec1" Feb 17 16:43:53 crc kubenswrapper[4672]: E0217 16:43:53.946686 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:43:55 crc kubenswrapper[4672]: E0217 16:43:55.951980 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:43:56 crc kubenswrapper[4672]: E0217 16:43:56.946747 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:44:04 crc kubenswrapper[4672]: I0217 16:44:04.946142 4672 scope.go:117] "RemoveContainer" containerID="5ec360b5c785e82bf42002bb2ec43e9b549142da918f8b7cc88ceed207ebfec1" Feb 17 16:44:04 crc kubenswrapper[4672]: E0217 16:44:04.947477 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:44:06 crc kubenswrapper[4672]: E0217 16:44:06.950444 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:44:09 crc kubenswrapper[4672]: E0217 16:44:09.948381 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:44:17 crc kubenswrapper[4672]: I0217 16:44:17.946248 4672 scope.go:117] "RemoveContainer" containerID="5ec360b5c785e82bf42002bb2ec43e9b549142da918f8b7cc88ceed207ebfec1" Feb 17 16:44:17 crc kubenswrapper[4672]: E0217 16:44:17.947247 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:44:18 crc kubenswrapper[4672]: E0217 16:44:18.947230 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:44:21 crc kubenswrapper[4672]: E0217 16:44:21.965899 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:44:28 crc kubenswrapper[4672]: I0217 16:44:28.944977 4672 scope.go:117] "RemoveContainer" containerID="5ec360b5c785e82bf42002bb2ec43e9b549142da918f8b7cc88ceed207ebfec1" Feb 17 16:44:28 crc kubenswrapper[4672]: E0217 16:44:28.945886 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:44:33 crc kubenswrapper[4672]: E0217 16:44:33.948432 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:44:36 crc kubenswrapper[4672]: E0217 16:44:36.947439 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:44:42 crc kubenswrapper[4672]: I0217 16:44:42.945728 4672 scope.go:117] "RemoveContainer" containerID="5ec360b5c785e82bf42002bb2ec43e9b549142da918f8b7cc88ceed207ebfec1" Feb 17 16:44:42 crc kubenswrapper[4672]: E0217 16:44:42.946823 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:44:48 crc kubenswrapper[4672]: E0217 16:44:48.946801 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:44:48 crc kubenswrapper[4672]: E0217 16:44:48.947357 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:44:53 crc kubenswrapper[4672]: I0217 16:44:53.945771 4672 scope.go:117] "RemoveContainer" containerID="5ec360b5c785e82bf42002bb2ec43e9b549142da918f8b7cc88ceed207ebfec1" Feb 17 16:44:53 crc kubenswrapper[4672]: E0217 16:44:53.947430 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:45:00 crc kubenswrapper[4672]: I0217 16:45:00.148414 4672 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29522445-zzvc2"] Feb 17 16:45:00 crc kubenswrapper[4672]: I0217 16:45:00.150450 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-zzvc2" Feb 17 16:45:00 crc kubenswrapper[4672]: I0217 16:45:00.152477 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 16:45:00 crc kubenswrapper[4672]: I0217 16:45:00.153271 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 16:45:00 crc kubenswrapper[4672]: I0217 16:45:00.157645 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522445-zzvc2"] Feb 17 16:45:00 crc kubenswrapper[4672]: I0217 16:45:00.279236 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b62f278-9a4a-4cb6-b093-ca74d724e523-config-volume\") pod \"collect-profiles-29522445-zzvc2\" (UID: \"3b62f278-9a4a-4cb6-b093-ca74d724e523\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-zzvc2" Feb 17 16:45:00 crc kubenswrapper[4672]: I0217 16:45:00.279744 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv4rw\" (UniqueName: \"kubernetes.io/projected/3b62f278-9a4a-4cb6-b093-ca74d724e523-kube-api-access-bv4rw\") pod \"collect-profiles-29522445-zzvc2\" (UID: \"3b62f278-9a4a-4cb6-b093-ca74d724e523\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-zzvc2" Feb 17 16:45:00 crc kubenswrapper[4672]: I0217 16:45:00.279881 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/3b62f278-9a4a-4cb6-b093-ca74d724e523-secret-volume\") pod \"collect-profiles-29522445-zzvc2\" (UID: \"3b62f278-9a4a-4cb6-b093-ca74d724e523\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-zzvc2"
Feb 17 16:45:00 crc kubenswrapper[4672]: I0217 16:45:00.381984 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv4rw\" (UniqueName: \"kubernetes.io/projected/3b62f278-9a4a-4cb6-b093-ca74d724e523-kube-api-access-bv4rw\") pod \"collect-profiles-29522445-zzvc2\" (UID: \"3b62f278-9a4a-4cb6-b093-ca74d724e523\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-zzvc2"
Feb 17 16:45:00 crc kubenswrapper[4672]: I0217 16:45:00.382078 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b62f278-9a4a-4cb6-b093-ca74d724e523-secret-volume\") pod \"collect-profiles-29522445-zzvc2\" (UID: \"3b62f278-9a4a-4cb6-b093-ca74d724e523\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-zzvc2"
Feb 17 16:45:00 crc kubenswrapper[4672]: I0217 16:45:00.382127 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b62f278-9a4a-4cb6-b093-ca74d724e523-config-volume\") pod \"collect-profiles-29522445-zzvc2\" (UID: \"3b62f278-9a4a-4cb6-b093-ca74d724e523\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-zzvc2"
Feb 17 16:45:00 crc kubenswrapper[4672]: I0217 16:45:00.383117 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b62f278-9a4a-4cb6-b093-ca74d724e523-config-volume\") pod \"collect-profiles-29522445-zzvc2\" (UID: \"3b62f278-9a4a-4cb6-b093-ca74d724e523\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-zzvc2"
Feb 17 16:45:00 crc kubenswrapper[4672]: I0217 16:45:00.389223 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b62f278-9a4a-4cb6-b093-ca74d724e523-secret-volume\") pod \"collect-profiles-29522445-zzvc2\" (UID: \"3b62f278-9a4a-4cb6-b093-ca74d724e523\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-zzvc2"
Feb 17 16:45:00 crc kubenswrapper[4672]: I0217 16:45:00.409749 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv4rw\" (UniqueName: \"kubernetes.io/projected/3b62f278-9a4a-4cb6-b093-ca74d724e523-kube-api-access-bv4rw\") pod \"collect-profiles-29522445-zzvc2\" (UID: \"3b62f278-9a4a-4cb6-b093-ca74d724e523\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-zzvc2"
Feb 17 16:45:00 crc kubenswrapper[4672]: I0217 16:45:00.483695 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-zzvc2"
Feb 17 16:45:00 crc kubenswrapper[4672]: I0217 16:45:00.962310 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522445-zzvc2"]
Feb 17 16:45:01 crc kubenswrapper[4672]: I0217 16:45:01.197860 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-zzvc2" event={"ID":"3b62f278-9a4a-4cb6-b093-ca74d724e523","Type":"ContainerStarted","Data":"00fa081013508a49f9a5a83672028d386d87720ba554d5ec9bde76fbb3bf7565"}
Feb 17 16:45:01 crc kubenswrapper[4672]: I0217 16:45:01.197909 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-zzvc2" event={"ID":"3b62f278-9a4a-4cb6-b093-ca74d724e523","Type":"ContainerStarted","Data":"32178980f3fe45e86aa067829cac4b71d1b7fc839b50dac565e15c4e0bdb7654"}
Feb 17 16:45:01 crc kubenswrapper[4672]: I0217 16:45:01.225151 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-zzvc2" podStartSLOduration=1.2251312460000001 podStartE2EDuration="1.225131246s" podCreationTimestamp="2026-02-17 16:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:45:01.219052646 +0000 UTC m=+2509.973141398" watchObservedRunningTime="2026-02-17 16:45:01.225131246 +0000 UTC m=+2509.979219978"
Feb 17 16:45:01 crc kubenswrapper[4672]: E0217 16:45:01.985971 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:45:02 crc kubenswrapper[4672]: I0217 16:45:02.210706 4672 generic.go:334] "Generic (PLEG): container finished" podID="3b62f278-9a4a-4cb6-b093-ca74d724e523" containerID="00fa081013508a49f9a5a83672028d386d87720ba554d5ec9bde76fbb3bf7565" exitCode=0
Feb 17 16:45:02 crc kubenswrapper[4672]: I0217 16:45:02.210779 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-zzvc2" event={"ID":"3b62f278-9a4a-4cb6-b093-ca74d724e523","Type":"ContainerDied","Data":"00fa081013508a49f9a5a83672028d386d87720ba554d5ec9bde76fbb3bf7565"}
Feb 17 16:45:02 crc kubenswrapper[4672]: E0217 16:45:02.948574 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:45:03 crc kubenswrapper[4672]: I0217 16:45:03.574124 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-zzvc2"
Feb 17 16:45:03 crc kubenswrapper[4672]: I0217 16:45:03.658290 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bv4rw\" (UniqueName: \"kubernetes.io/projected/3b62f278-9a4a-4cb6-b093-ca74d724e523-kube-api-access-bv4rw\") pod \"3b62f278-9a4a-4cb6-b093-ca74d724e523\" (UID: \"3b62f278-9a4a-4cb6-b093-ca74d724e523\") "
Feb 17 16:45:03 crc kubenswrapper[4672]: I0217 16:45:03.658475 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b62f278-9a4a-4cb6-b093-ca74d724e523-config-volume\") pod \"3b62f278-9a4a-4cb6-b093-ca74d724e523\" (UID: \"3b62f278-9a4a-4cb6-b093-ca74d724e523\") "
Feb 17 16:45:03 crc kubenswrapper[4672]: I0217 16:45:03.658600 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b62f278-9a4a-4cb6-b093-ca74d724e523-secret-volume\") pod \"3b62f278-9a4a-4cb6-b093-ca74d724e523\" (UID: \"3b62f278-9a4a-4cb6-b093-ca74d724e523\") "
Feb 17 16:45:03 crc kubenswrapper[4672]: I0217 16:45:03.659285 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b62f278-9a4a-4cb6-b093-ca74d724e523-config-volume" (OuterVolumeSpecName: "config-volume") pod "3b62f278-9a4a-4cb6-b093-ca74d724e523" (UID: "3b62f278-9a4a-4cb6-b093-ca74d724e523"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:45:03 crc kubenswrapper[4672]: I0217 16:45:03.663814 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b62f278-9a4a-4cb6-b093-ca74d724e523-kube-api-access-bv4rw" (OuterVolumeSpecName: "kube-api-access-bv4rw") pod "3b62f278-9a4a-4cb6-b093-ca74d724e523" (UID: "3b62f278-9a4a-4cb6-b093-ca74d724e523"). InnerVolumeSpecName "kube-api-access-bv4rw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:45:03 crc kubenswrapper[4672]: I0217 16:45:03.664201 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b62f278-9a4a-4cb6-b093-ca74d724e523-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3b62f278-9a4a-4cb6-b093-ca74d724e523" (UID: "3b62f278-9a4a-4cb6-b093-ca74d724e523"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:45:03 crc kubenswrapper[4672]: I0217 16:45:03.761027 4672 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b62f278-9a4a-4cb6-b093-ca74d724e523-config-volume\") on node \"crc\" DevicePath \"\""
Feb 17 16:45:03 crc kubenswrapper[4672]: I0217 16:45:03.761862 4672 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b62f278-9a4a-4cb6-b093-ca74d724e523-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 17 16:45:03 crc kubenswrapper[4672]: I0217 16:45:03.761972 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bv4rw\" (UniqueName: \"kubernetes.io/projected/3b62f278-9a4a-4cb6-b093-ca74d724e523-kube-api-access-bv4rw\") on node \"crc\" DevicePath \"\""
Feb 17 16:45:04 crc kubenswrapper[4672]: I0217 16:45:04.247400 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-zzvc2" event={"ID":"3b62f278-9a4a-4cb6-b093-ca74d724e523","Type":"ContainerDied","Data":"32178980f3fe45e86aa067829cac4b71d1b7fc839b50dac565e15c4e0bdb7654"}
Feb 17 16:45:04 crc kubenswrapper[4672]: I0217 16:45:04.251583 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32178980f3fe45e86aa067829cac4b71d1b7fc839b50dac565e15c4e0bdb7654"
Feb 17 16:45:04 crc kubenswrapper[4672]: I0217 16:45:04.247501 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-zzvc2"
Feb 17 16:45:04 crc kubenswrapper[4672]: I0217 16:45:04.313577 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522400-b79hz"]
Feb 17 16:45:04 crc kubenswrapper[4672]: I0217 16:45:04.324207 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522400-b79hz"]
Feb 17 16:45:05 crc kubenswrapper[4672]: I0217 16:45:05.957934 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2180d9e0-3678-4bdd-84aa-0dba230aa4e3" path="/var/lib/kubelet/pods/2180d9e0-3678-4bdd-84aa-0dba230aa4e3/volumes"
Feb 17 16:45:07 crc kubenswrapper[4672]: I0217 16:45:07.945165 4672 scope.go:117] "RemoveContainer" containerID="5ec360b5c785e82bf42002bb2ec43e9b549142da918f8b7cc88ceed207ebfec1"
Feb 17 16:45:07 crc kubenswrapper[4672]: E0217 16:45:07.945502 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 16:45:13 crc kubenswrapper[4672]: E0217 16:45:13.948039 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:45:16 crc kubenswrapper[4672]: E0217 16:45:16.947180 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:45:18 crc kubenswrapper[4672]: I0217 16:45:18.644317 4672 scope.go:117] "RemoveContainer" containerID="05f802e5ab0a5bcc44ea9f95953f50154f395bfe8d4a7775d34ed6c635f654c3"
Feb 17 16:45:19 crc kubenswrapper[4672]: I0217 16:45:19.945738 4672 scope.go:117] "RemoveContainer" containerID="5ec360b5c785e82bf42002bb2ec43e9b549142da918f8b7cc88ceed207ebfec1"
Feb 17 16:45:19 crc kubenswrapper[4672]: E0217 16:45:19.946310 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 16:45:27 crc kubenswrapper[4672]: E0217 16:45:27.947293 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:45:28 crc kubenswrapper[4672]: E0217 16:45:28.947801 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:45:33 crc kubenswrapper[4672]: I0217 16:45:33.945596 4672 scope.go:117] "RemoveContainer" containerID="5ec360b5c785e82bf42002bb2ec43e9b549142da918f8b7cc88ceed207ebfec1"
Feb 17 16:45:33 crc kubenswrapper[4672]: E0217 16:45:33.947622 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 16:45:41 crc kubenswrapper[4672]: E0217 16:45:41.964813 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:45:41 crc kubenswrapper[4672]: E0217 16:45:41.965882 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:45:46 crc kubenswrapper[4672]: I0217 16:45:46.945663 4672 scope.go:117] "RemoveContainer" containerID="5ec360b5c785e82bf42002bb2ec43e9b549142da918f8b7cc88ceed207ebfec1"
Feb 17 16:45:46 crc kubenswrapper[4672]: E0217 16:45:46.946658 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 16:45:55 crc kubenswrapper[4672]: E0217 16:45:55.948724 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:45:55 crc kubenswrapper[4672]: E0217 16:45:55.948870 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:45:59 crc kubenswrapper[4672]: I0217 16:45:59.945598 4672 scope.go:117] "RemoveContainer" containerID="5ec360b5c785e82bf42002bb2ec43e9b549142da918f8b7cc88ceed207ebfec1"
Feb 17 16:46:00 crc kubenswrapper[4672]: I0217 16:46:00.851549 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerStarted","Data":"e1b57867ae3b2d0f7ae69d5114a296a48281c1419c2e4d2752760b9d915f000f"}
Feb 17 16:46:09 crc kubenswrapper[4672]: E0217 16:46:09.947391 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:46:10 crc kubenswrapper[4672]: E0217 16:46:10.948873 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:46:11 crc kubenswrapper[4672]: I0217 16:46:11.964019 4672 generic.go:334] "Generic (PLEG): container finished" podID="e25af450-196c-4035-96b6-5148862bca0d" containerID="770710319f243745b2cdb557bb7dea33df4b0a20f2d45038251e0f4510b817b0" exitCode=2
Feb 17 16:46:11 crc kubenswrapper[4672]: I0217 16:46:11.967115 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j4qcj" event={"ID":"e25af450-196c-4035-96b6-5148862bca0d","Type":"ContainerDied","Data":"770710319f243745b2cdb557bb7dea33df4b0a20f2d45038251e0f4510b817b0"}
Feb 17 16:46:13 crc kubenswrapper[4672]: I0217 16:46:13.462991 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j4qcj"
Feb 17 16:46:13 crc kubenswrapper[4672]: I0217 16:46:13.578114 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e25af450-196c-4035-96b6-5148862bca0d-inventory\") pod \"e25af450-196c-4035-96b6-5148862bca0d\" (UID: \"e25af450-196c-4035-96b6-5148862bca0d\") "
Feb 17 16:46:13 crc kubenswrapper[4672]: I0217 16:46:13.578293 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqknt\" (UniqueName: \"kubernetes.io/projected/e25af450-196c-4035-96b6-5148862bca0d-kube-api-access-lqknt\") pod \"e25af450-196c-4035-96b6-5148862bca0d\" (UID: \"e25af450-196c-4035-96b6-5148862bca0d\") "
Feb 17 16:46:13 crc kubenswrapper[4672]: I0217 16:46:13.578714 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e25af450-196c-4035-96b6-5148862bca0d-ssh-key-openstack-edpm-ipam\") pod \"e25af450-196c-4035-96b6-5148862bca0d\" (UID: \"e25af450-196c-4035-96b6-5148862bca0d\") "
Feb 17 16:46:13 crc kubenswrapper[4672]: I0217 16:46:13.584301 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e25af450-196c-4035-96b6-5148862bca0d-kube-api-access-lqknt" (OuterVolumeSpecName: "kube-api-access-lqknt") pod "e25af450-196c-4035-96b6-5148862bca0d" (UID: "e25af450-196c-4035-96b6-5148862bca0d"). InnerVolumeSpecName "kube-api-access-lqknt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:46:13 crc kubenswrapper[4672]: I0217 16:46:13.607339 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e25af450-196c-4035-96b6-5148862bca0d-inventory" (OuterVolumeSpecName: "inventory") pod "e25af450-196c-4035-96b6-5148862bca0d" (UID: "e25af450-196c-4035-96b6-5148862bca0d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:46:13 crc kubenswrapper[4672]: I0217 16:46:13.609784 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e25af450-196c-4035-96b6-5148862bca0d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e25af450-196c-4035-96b6-5148862bca0d" (UID: "e25af450-196c-4035-96b6-5148862bca0d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:46:13 crc kubenswrapper[4672]: I0217 16:46:13.682202 4672 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e25af450-196c-4035-96b6-5148862bca0d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 17 16:46:13 crc kubenswrapper[4672]: I0217 16:46:13.682255 4672 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e25af450-196c-4035-96b6-5148862bca0d-inventory\") on node \"crc\" DevicePath \"\""
Feb 17 16:46:13 crc kubenswrapper[4672]: I0217 16:46:13.682268 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqknt\" (UniqueName: \"kubernetes.io/projected/e25af450-196c-4035-96b6-5148862bca0d-kube-api-access-lqknt\") on node \"crc\" DevicePath \"\""
Feb 17 16:46:13 crc kubenswrapper[4672]: I0217 16:46:13.982468 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j4qcj" event={"ID":"e25af450-196c-4035-96b6-5148862bca0d","Type":"ContainerDied","Data":"46d3733ef6e81d2e575a240680b63d91bbd9c9624c9fa68735daa460a513e11c"}
Feb 17 16:46:13 crc kubenswrapper[4672]: I0217 16:46:13.982906 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46d3733ef6e81d2e575a240680b63d91bbd9c9624c9fa68735daa460a513e11c"
Feb 17 16:46:13 crc kubenswrapper[4672]: I0217 16:46:13.982532 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j4qcj"
Feb 17 16:46:20 crc kubenswrapper[4672]: E0217 16:46:20.954499 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:46:24 crc kubenswrapper[4672]: E0217 16:46:24.946573 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:46:31 crc kubenswrapper[4672]: I0217 16:46:31.037283 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zsr64"]
Feb 17 16:46:31 crc kubenswrapper[4672]: E0217 16:46:31.038586 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b62f278-9a4a-4cb6-b093-ca74d724e523" containerName="collect-profiles"
Feb 17 16:46:31 crc kubenswrapper[4672]: I0217 16:46:31.038609 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b62f278-9a4a-4cb6-b093-ca74d724e523" containerName="collect-profiles"
Feb 17 16:46:31 crc kubenswrapper[4672]: E0217 16:46:31.038640 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e25af450-196c-4035-96b6-5148862bca0d" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 17 16:46:31 crc kubenswrapper[4672]: I0217 16:46:31.038654 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="e25af450-196c-4035-96b6-5148862bca0d" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 17 16:46:31 crc kubenswrapper[4672]: I0217 16:46:31.039011 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b62f278-9a4a-4cb6-b093-ca74d724e523" containerName="collect-profiles"
Feb 17 16:46:31 crc kubenswrapper[4672]: I0217 16:46:31.039037 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="e25af450-196c-4035-96b6-5148862bca0d" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 17 16:46:31 crc kubenswrapper[4672]: I0217 16:46:31.040207 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zsr64"
Feb 17 16:46:31 crc kubenswrapper[4672]: I0217 16:46:31.042191 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z6sng"
Feb 17 16:46:31 crc kubenswrapper[4672]: I0217 16:46:31.043377 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 17 16:46:31 crc kubenswrapper[4672]: I0217 16:46:31.043809 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 17 16:46:31 crc kubenswrapper[4672]: I0217 16:46:31.044593 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 17 16:46:31 crc kubenswrapper[4672]: I0217 16:46:31.049813 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zsr64"]
Feb 17 16:46:31 crc kubenswrapper[4672]: I0217 16:46:31.168796 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qktx6\" (UniqueName: \"kubernetes.io/projected/fcaca0dc-4760-43af-bc46-efcdc09d7164-kube-api-access-qktx6\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zsr64\" (UID: \"fcaca0dc-4760-43af-bc46-efcdc09d7164\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zsr64"
Feb 17 16:46:31 crc kubenswrapper[4672]: I0217 16:46:31.168847 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fcaca0dc-4760-43af-bc46-efcdc09d7164-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zsr64\" (UID: \"fcaca0dc-4760-43af-bc46-efcdc09d7164\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zsr64"
Feb 17 16:46:31 crc kubenswrapper[4672]: I0217 16:46:31.168937 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcaca0dc-4760-43af-bc46-efcdc09d7164-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zsr64\" (UID: \"fcaca0dc-4760-43af-bc46-efcdc09d7164\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zsr64"
Feb 17 16:46:31 crc kubenswrapper[4672]: I0217 16:46:31.270912 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcaca0dc-4760-43af-bc46-efcdc09d7164-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zsr64\" (UID: \"fcaca0dc-4760-43af-bc46-efcdc09d7164\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zsr64"
Feb 17 16:46:31 crc kubenswrapper[4672]: I0217 16:46:31.271098 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qktx6\" (UniqueName: \"kubernetes.io/projected/fcaca0dc-4760-43af-bc46-efcdc09d7164-kube-api-access-qktx6\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zsr64\" (UID: \"fcaca0dc-4760-43af-bc46-efcdc09d7164\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zsr64"
Feb 17 16:46:31 crc kubenswrapper[4672]: I0217 16:46:31.271121 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fcaca0dc-4760-43af-bc46-efcdc09d7164-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zsr64\" (UID: \"fcaca0dc-4760-43af-bc46-efcdc09d7164\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zsr64"
Feb 17 16:46:31 crc kubenswrapper[4672]: I0217 16:46:31.277308 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcaca0dc-4760-43af-bc46-efcdc09d7164-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zsr64\" (UID: \"fcaca0dc-4760-43af-bc46-efcdc09d7164\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zsr64"
Feb 17 16:46:31 crc kubenswrapper[4672]: I0217 16:46:31.278026 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fcaca0dc-4760-43af-bc46-efcdc09d7164-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zsr64\" (UID: \"fcaca0dc-4760-43af-bc46-efcdc09d7164\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zsr64"
Feb 17 16:46:31 crc kubenswrapper[4672]: I0217 16:46:31.291421 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qktx6\" (UniqueName: \"kubernetes.io/projected/fcaca0dc-4760-43af-bc46-efcdc09d7164-kube-api-access-qktx6\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zsr64\" (UID: \"fcaca0dc-4760-43af-bc46-efcdc09d7164\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zsr64"
Feb 17 16:46:31 crc kubenswrapper[4672]: I0217 16:46:31.363001 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zsr64"
Feb 17 16:46:31 crc kubenswrapper[4672]: I0217 16:46:31.877309 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zsr64"]
Feb 17 16:46:31 crc kubenswrapper[4672]: W0217 16:46:31.880990 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfcaca0dc_4760_43af_bc46_efcdc09d7164.slice/crio-d79c5b08871036304f0bdafd2118c436d50c346deed88239e48b1b94414e97f6 WatchSource:0}: Error finding container d79c5b08871036304f0bdafd2118c436d50c346deed88239e48b1b94414e97f6: Status 404 returned error can't find the container with id d79c5b08871036304f0bdafd2118c436d50c346deed88239e48b1b94414e97f6
Feb 17 16:46:31 crc kubenswrapper[4672]: E0217 16:46:31.955819 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:46:32 crc kubenswrapper[4672]: I0217 16:46:32.173165 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zsr64" event={"ID":"fcaca0dc-4760-43af-bc46-efcdc09d7164","Type":"ContainerStarted","Data":"d79c5b08871036304f0bdafd2118c436d50c346deed88239e48b1b94414e97f6"}
Feb 17 16:46:33 crc kubenswrapper[4672]: I0217 16:46:33.185781 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zsr64" event={"ID":"fcaca0dc-4760-43af-bc46-efcdc09d7164","Type":"ContainerStarted","Data":"13724425a8d31b6ca6bcc56bfdd4d2c738395ff8dcddeb7f177c9dd0af2e5c98"}
Feb 17 16:46:33 crc kubenswrapper[4672]: I0217 16:46:33.212566 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zsr64" podStartSLOduration=1.712318155 podStartE2EDuration="2.212546129s" podCreationTimestamp="2026-02-17 16:46:31 +0000 UTC" firstStartedPulling="2026-02-17 16:46:31.88355294 +0000 UTC m=+2600.637641672" lastFinishedPulling="2026-02-17 16:46:32.383780904 +0000 UTC m=+2601.137869646" observedRunningTime="2026-02-17 16:46:33.201284874 +0000 UTC m=+2601.955373606" watchObservedRunningTime="2026-02-17 16:46:33.212546129 +0000 UTC m=+2601.966634861"
Feb 17 16:46:39 crc kubenswrapper[4672]: E0217 16:46:39.948503 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:46:43 crc kubenswrapper[4672]: E0217 16:46:43.946797 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:46:52 crc kubenswrapper[4672]: E0217 16:46:52.947537 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:46:56 crc kubenswrapper[4672]: E0217 16:46:56.948333 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:47:06 crc kubenswrapper[4672]: E0217 16:47:06.946799 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:47:11 crc kubenswrapper[4672]: E0217 16:47:11.957920 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:47:19 crc kubenswrapper[4672]: E0217 16:47:19.947632 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:47:26 crc kubenswrapper[4672]: E0217 16:47:26.948441 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:47:32 crc kubenswrapper[4672]: E0217 16:47:32.947973 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:47:40 crc kubenswrapper[4672]: E0217 16:47:40.948068 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:47:44 crc kubenswrapper[4672]: E0217 16:47:44.948670 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:47:55 crc kubenswrapper[4672]: E0217 16:47:55.947943 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:47:58 crc kubenswrapper[4672]: E0217 16:47:58.947125 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:48:03 crc kubenswrapper[4672]: I0217 16:48:03.121756 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wvv8b"]
Feb 17 16:48:03 crc kubenswrapper[4672]: I0217 16:48:03.125293 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wvv8b"
Feb 17 16:48:03 crc kubenswrapper[4672]: I0217 16:48:03.136132 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wvv8b"]
Feb 17 16:48:03 crc kubenswrapper[4672]: I0217 16:48:03.266996 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d2acacf-9fb3-4f08-a7d0-797f8372a161-catalog-content\") pod \"redhat-operators-wvv8b\" (UID: \"9d2acacf-9fb3-4f08-a7d0-797f8372a161\") " pod="openshift-marketplace/redhat-operators-wvv8b"
Feb 17 16:48:03 crc kubenswrapper[4672]: I0217 16:48:03.267144 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d2acacf-9fb3-4f08-a7d0-797f8372a161-utilities\") pod \"redhat-operators-wvv8b\" (UID: \"9d2acacf-9fb3-4f08-a7d0-797f8372a161\") " pod="openshift-marketplace/redhat-operators-wvv8b"
Feb 17 16:48:03 crc kubenswrapper[4672]: I0217 16:48:03.267220 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldjdb\" (UniqueName: \"kubernetes.io/projected/9d2acacf-9fb3-4f08-a7d0-797f8372a161-kube-api-access-ldjdb\") pod \"redhat-operators-wvv8b\" (UID: \"9d2acacf-9fb3-4f08-a7d0-797f8372a161\") " pod="openshift-marketplace/redhat-operators-wvv8b"
Feb 17 16:48:03 crc kubenswrapper[4672]: I0217 16:48:03.369583 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d2acacf-9fb3-4f08-a7d0-797f8372a161-catalog-content\") pod \"redhat-operators-wvv8b\" 
(UID: \"9d2acacf-9fb3-4f08-a7d0-797f8372a161\") " pod="openshift-marketplace/redhat-operators-wvv8b" Feb 17 16:48:03 crc kubenswrapper[4672]: I0217 16:48:03.369741 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d2acacf-9fb3-4f08-a7d0-797f8372a161-utilities\") pod \"redhat-operators-wvv8b\" (UID: \"9d2acacf-9fb3-4f08-a7d0-797f8372a161\") " pod="openshift-marketplace/redhat-operators-wvv8b" Feb 17 16:48:03 crc kubenswrapper[4672]: I0217 16:48:03.369821 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldjdb\" (UniqueName: \"kubernetes.io/projected/9d2acacf-9fb3-4f08-a7d0-797f8372a161-kube-api-access-ldjdb\") pod \"redhat-operators-wvv8b\" (UID: \"9d2acacf-9fb3-4f08-a7d0-797f8372a161\") " pod="openshift-marketplace/redhat-operators-wvv8b" Feb 17 16:48:03 crc kubenswrapper[4672]: I0217 16:48:03.370812 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d2acacf-9fb3-4f08-a7d0-797f8372a161-catalog-content\") pod \"redhat-operators-wvv8b\" (UID: \"9d2acacf-9fb3-4f08-a7d0-797f8372a161\") " pod="openshift-marketplace/redhat-operators-wvv8b" Feb 17 16:48:03 crc kubenswrapper[4672]: I0217 16:48:03.371084 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d2acacf-9fb3-4f08-a7d0-797f8372a161-utilities\") pod \"redhat-operators-wvv8b\" (UID: \"9d2acacf-9fb3-4f08-a7d0-797f8372a161\") " pod="openshift-marketplace/redhat-operators-wvv8b" Feb 17 16:48:03 crc kubenswrapper[4672]: I0217 16:48:03.394960 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldjdb\" (UniqueName: \"kubernetes.io/projected/9d2acacf-9fb3-4f08-a7d0-797f8372a161-kube-api-access-ldjdb\") pod \"redhat-operators-wvv8b\" (UID: \"9d2acacf-9fb3-4f08-a7d0-797f8372a161\") " 
pod="openshift-marketplace/redhat-operators-wvv8b" Feb 17 16:48:03 crc kubenswrapper[4672]: I0217 16:48:03.451449 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wvv8b" Feb 17 16:48:03 crc kubenswrapper[4672]: I0217 16:48:03.916187 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wvv8b"] Feb 17 16:48:04 crc kubenswrapper[4672]: I0217 16:48:04.300494 4672 generic.go:334] "Generic (PLEG): container finished" podID="9d2acacf-9fb3-4f08-a7d0-797f8372a161" containerID="ec54aba5a312f72ef933183ef96efd0344f45f7c5076bc4a1e7f1ca648e4f19c" exitCode=0 Feb 17 16:48:04 crc kubenswrapper[4672]: I0217 16:48:04.300563 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wvv8b" event={"ID":"9d2acacf-9fb3-4f08-a7d0-797f8372a161","Type":"ContainerDied","Data":"ec54aba5a312f72ef933183ef96efd0344f45f7c5076bc4a1e7f1ca648e4f19c"} Feb 17 16:48:04 crc kubenswrapper[4672]: I0217 16:48:04.300870 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wvv8b" event={"ID":"9d2acacf-9fb3-4f08-a7d0-797f8372a161","Type":"ContainerStarted","Data":"afe37b09d6393468468da06daa0a47131ae7d5613a1a8db47bbd2a596d4dc291"} Feb 17 16:48:05 crc kubenswrapper[4672]: I0217 16:48:05.312816 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wvv8b" event={"ID":"9d2acacf-9fb3-4f08-a7d0-797f8372a161","Type":"ContainerStarted","Data":"0a9cbb736aeb6c9b5a3bdafc931aed76d8363eecee37b5069570c6ce815ad496"} Feb 17 16:48:07 crc kubenswrapper[4672]: I0217 16:48:07.331188 4672 generic.go:334] "Generic (PLEG): container finished" podID="9d2acacf-9fb3-4f08-a7d0-797f8372a161" containerID="0a9cbb736aeb6c9b5a3bdafc931aed76d8363eecee37b5069570c6ce815ad496" exitCode=0 Feb 17 16:48:07 crc kubenswrapper[4672]: I0217 16:48:07.331290 4672 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-wvv8b" event={"ID":"9d2acacf-9fb3-4f08-a7d0-797f8372a161","Type":"ContainerDied","Data":"0a9cbb736aeb6c9b5a3bdafc931aed76d8363eecee37b5069570c6ce815ad496"} Feb 17 16:48:07 crc kubenswrapper[4672]: I0217 16:48:07.337273 4672 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:48:08 crc kubenswrapper[4672]: I0217 16:48:08.357767 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wvv8b" event={"ID":"9d2acacf-9fb3-4f08-a7d0-797f8372a161","Type":"ContainerStarted","Data":"67d3b43eaac1303d96a0c5cfd9c4d1e4f045cefc4364ab15deda04d6b07866ae"} Feb 17 16:48:08 crc kubenswrapper[4672]: I0217 16:48:08.387459 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wvv8b" podStartSLOduration=1.904773571 podStartE2EDuration="5.38743353s" podCreationTimestamp="2026-02-17 16:48:03 +0000 UTC" firstStartedPulling="2026-02-17 16:48:04.302585781 +0000 UTC m=+2693.056674513" lastFinishedPulling="2026-02-17 16:48:07.78524574 +0000 UTC m=+2696.539334472" observedRunningTime="2026-02-17 16:48:08.381255528 +0000 UTC m=+2697.135344260" watchObservedRunningTime="2026-02-17 16:48:08.38743353 +0000 UTC m=+2697.141522262" Feb 17 16:48:08 crc kubenswrapper[4672]: E0217 16:48:08.948224 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:48:11 crc kubenswrapper[4672]: E0217 16:48:11.082070 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: 
reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:48:11 crc kubenswrapper[4672]: E0217 16:48:11.082379 4672 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:48:11 crc kubenswrapper[4672]: E0217 16:48:11.082559 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nq9ps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:
false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-qrhj8_openstack(dc5471f5-2491-4841-be45-09c8f14b35c0): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:48:11 crc kubenswrapper[4672]: E0217 16:48:11.083797 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:48:13 crc kubenswrapper[4672]: I0217 16:48:13.452915 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wvv8b" Feb 17 16:48:13 crc kubenswrapper[4672]: I0217 16:48:13.454542 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wvv8b" Feb 17 16:48:14 crc kubenswrapper[4672]: I0217 16:48:14.532863 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wvv8b" podUID="9d2acacf-9fb3-4f08-a7d0-797f8372a161" containerName="registry-server" probeResult="failure" output=< Feb 17 16:48:14 crc kubenswrapper[4672]: timeout: failed to connect service ":50051" within 1s Feb 17 16:48:14 crc kubenswrapper[4672]: > Feb 17 16:48:21 crc kubenswrapper[4672]: E0217 16:48:21.961778 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:48:21 crc kubenswrapper[4672]: E0217 16:48:21.961896 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:48:23 crc kubenswrapper[4672]: I0217 16:48:23.523480 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wvv8b" Feb 17 16:48:23 crc kubenswrapper[4672]: I0217 16:48:23.578429 4672 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wvv8b" Feb 17 16:48:23 crc kubenswrapper[4672]: I0217 16:48:23.757001 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wvv8b"] Feb 17 16:48:25 crc kubenswrapper[4672]: I0217 16:48:25.519625 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wvv8b" podUID="9d2acacf-9fb3-4f08-a7d0-797f8372a161" containerName="registry-server" containerID="cri-o://67d3b43eaac1303d96a0c5cfd9c4d1e4f045cefc4364ab15deda04d6b07866ae" gracePeriod=2 Feb 17 16:48:26 crc kubenswrapper[4672]: I0217 16:48:26.122001 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wvv8b" Feb 17 16:48:26 crc kubenswrapper[4672]: I0217 16:48:26.278374 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldjdb\" (UniqueName: \"kubernetes.io/projected/9d2acacf-9fb3-4f08-a7d0-797f8372a161-kube-api-access-ldjdb\") pod \"9d2acacf-9fb3-4f08-a7d0-797f8372a161\" (UID: \"9d2acacf-9fb3-4f08-a7d0-797f8372a161\") " Feb 17 16:48:26 crc kubenswrapper[4672]: I0217 16:48:26.278539 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d2acacf-9fb3-4f08-a7d0-797f8372a161-utilities\") pod \"9d2acacf-9fb3-4f08-a7d0-797f8372a161\" (UID: \"9d2acacf-9fb3-4f08-a7d0-797f8372a161\") " Feb 17 16:48:26 crc kubenswrapper[4672]: I0217 16:48:26.278672 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d2acacf-9fb3-4f08-a7d0-797f8372a161-catalog-content\") pod \"9d2acacf-9fb3-4f08-a7d0-797f8372a161\" (UID: \"9d2acacf-9fb3-4f08-a7d0-797f8372a161\") " Feb 17 16:48:26 crc kubenswrapper[4672]: I0217 16:48:26.280428 4672 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d2acacf-9fb3-4f08-a7d0-797f8372a161-utilities" (OuterVolumeSpecName: "utilities") pod "9d2acacf-9fb3-4f08-a7d0-797f8372a161" (UID: "9d2acacf-9fb3-4f08-a7d0-797f8372a161"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:48:26 crc kubenswrapper[4672]: I0217 16:48:26.287965 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d2acacf-9fb3-4f08-a7d0-797f8372a161-kube-api-access-ldjdb" (OuterVolumeSpecName: "kube-api-access-ldjdb") pod "9d2acacf-9fb3-4f08-a7d0-797f8372a161" (UID: "9d2acacf-9fb3-4f08-a7d0-797f8372a161"). InnerVolumeSpecName "kube-api-access-ldjdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:48:26 crc kubenswrapper[4672]: I0217 16:48:26.380995 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldjdb\" (UniqueName: \"kubernetes.io/projected/9d2acacf-9fb3-4f08-a7d0-797f8372a161-kube-api-access-ldjdb\") on node \"crc\" DevicePath \"\"" Feb 17 16:48:26 crc kubenswrapper[4672]: I0217 16:48:26.381031 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d2acacf-9fb3-4f08-a7d0-797f8372a161-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:48:26 crc kubenswrapper[4672]: I0217 16:48:26.411199 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d2acacf-9fb3-4f08-a7d0-797f8372a161-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d2acacf-9fb3-4f08-a7d0-797f8372a161" (UID: "9d2acacf-9fb3-4f08-a7d0-797f8372a161"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:48:26 crc kubenswrapper[4672]: I0217 16:48:26.483908 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d2acacf-9fb3-4f08-a7d0-797f8372a161-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:48:26 crc kubenswrapper[4672]: I0217 16:48:26.531580 4672 generic.go:334] "Generic (PLEG): container finished" podID="9d2acacf-9fb3-4f08-a7d0-797f8372a161" containerID="67d3b43eaac1303d96a0c5cfd9c4d1e4f045cefc4364ab15deda04d6b07866ae" exitCode=0 Feb 17 16:48:26 crc kubenswrapper[4672]: I0217 16:48:26.531777 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wvv8b" event={"ID":"9d2acacf-9fb3-4f08-a7d0-797f8372a161","Type":"ContainerDied","Data":"67d3b43eaac1303d96a0c5cfd9c4d1e4f045cefc4364ab15deda04d6b07866ae"} Feb 17 16:48:26 crc kubenswrapper[4672]: I0217 16:48:26.532698 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wvv8b" event={"ID":"9d2acacf-9fb3-4f08-a7d0-797f8372a161","Type":"ContainerDied","Data":"afe37b09d6393468468da06daa0a47131ae7d5613a1a8db47bbd2a596d4dc291"} Feb 17 16:48:26 crc kubenswrapper[4672]: I0217 16:48:26.531879 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wvv8b" Feb 17 16:48:26 crc kubenswrapper[4672]: I0217 16:48:26.532774 4672 scope.go:117] "RemoveContainer" containerID="67d3b43eaac1303d96a0c5cfd9c4d1e4f045cefc4364ab15deda04d6b07866ae" Feb 17 16:48:26 crc kubenswrapper[4672]: I0217 16:48:26.599269 4672 scope.go:117] "RemoveContainer" containerID="0a9cbb736aeb6c9b5a3bdafc931aed76d8363eecee37b5069570c6ce815ad496" Feb 17 16:48:26 crc kubenswrapper[4672]: I0217 16:48:26.622520 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wvv8b"] Feb 17 16:48:26 crc kubenswrapper[4672]: I0217 16:48:26.638113 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wvv8b"] Feb 17 16:48:26 crc kubenswrapper[4672]: I0217 16:48:26.645364 4672 scope.go:117] "RemoveContainer" containerID="ec54aba5a312f72ef933183ef96efd0344f45f7c5076bc4a1e7f1ca648e4f19c" Feb 17 16:48:26 crc kubenswrapper[4672]: I0217 16:48:26.699975 4672 scope.go:117] "RemoveContainer" containerID="67d3b43eaac1303d96a0c5cfd9c4d1e4f045cefc4364ab15deda04d6b07866ae" Feb 17 16:48:26 crc kubenswrapper[4672]: E0217 16:48:26.701786 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67d3b43eaac1303d96a0c5cfd9c4d1e4f045cefc4364ab15deda04d6b07866ae\": container with ID starting with 67d3b43eaac1303d96a0c5cfd9c4d1e4f045cefc4364ab15deda04d6b07866ae not found: ID does not exist" containerID="67d3b43eaac1303d96a0c5cfd9c4d1e4f045cefc4364ab15deda04d6b07866ae" Feb 17 16:48:26 crc kubenswrapper[4672]: I0217 16:48:26.701880 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67d3b43eaac1303d96a0c5cfd9c4d1e4f045cefc4364ab15deda04d6b07866ae"} err="failed to get container status \"67d3b43eaac1303d96a0c5cfd9c4d1e4f045cefc4364ab15deda04d6b07866ae\": rpc error: code = NotFound desc = could not find container 
\"67d3b43eaac1303d96a0c5cfd9c4d1e4f045cefc4364ab15deda04d6b07866ae\": container with ID starting with 67d3b43eaac1303d96a0c5cfd9c4d1e4f045cefc4364ab15deda04d6b07866ae not found: ID does not exist" Feb 17 16:48:26 crc kubenswrapper[4672]: I0217 16:48:26.701902 4672 scope.go:117] "RemoveContainer" containerID="0a9cbb736aeb6c9b5a3bdafc931aed76d8363eecee37b5069570c6ce815ad496" Feb 17 16:48:26 crc kubenswrapper[4672]: E0217 16:48:26.702358 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a9cbb736aeb6c9b5a3bdafc931aed76d8363eecee37b5069570c6ce815ad496\": container with ID starting with 0a9cbb736aeb6c9b5a3bdafc931aed76d8363eecee37b5069570c6ce815ad496 not found: ID does not exist" containerID="0a9cbb736aeb6c9b5a3bdafc931aed76d8363eecee37b5069570c6ce815ad496" Feb 17 16:48:26 crc kubenswrapper[4672]: I0217 16:48:26.702399 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a9cbb736aeb6c9b5a3bdafc931aed76d8363eecee37b5069570c6ce815ad496"} err="failed to get container status \"0a9cbb736aeb6c9b5a3bdafc931aed76d8363eecee37b5069570c6ce815ad496\": rpc error: code = NotFound desc = could not find container \"0a9cbb736aeb6c9b5a3bdafc931aed76d8363eecee37b5069570c6ce815ad496\": container with ID starting with 0a9cbb736aeb6c9b5a3bdafc931aed76d8363eecee37b5069570c6ce815ad496 not found: ID does not exist" Feb 17 16:48:26 crc kubenswrapper[4672]: I0217 16:48:26.702424 4672 scope.go:117] "RemoveContainer" containerID="ec54aba5a312f72ef933183ef96efd0344f45f7c5076bc4a1e7f1ca648e4f19c" Feb 17 16:48:26 crc kubenswrapper[4672]: E0217 16:48:26.703591 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec54aba5a312f72ef933183ef96efd0344f45f7c5076bc4a1e7f1ca648e4f19c\": container with ID starting with ec54aba5a312f72ef933183ef96efd0344f45f7c5076bc4a1e7f1ca648e4f19c not found: ID does not exist" 
containerID="ec54aba5a312f72ef933183ef96efd0344f45f7c5076bc4a1e7f1ca648e4f19c" Feb 17 16:48:26 crc kubenswrapper[4672]: I0217 16:48:26.703613 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec54aba5a312f72ef933183ef96efd0344f45f7c5076bc4a1e7f1ca648e4f19c"} err="failed to get container status \"ec54aba5a312f72ef933183ef96efd0344f45f7c5076bc4a1e7f1ca648e4f19c\": rpc error: code = NotFound desc = could not find container \"ec54aba5a312f72ef933183ef96efd0344f45f7c5076bc4a1e7f1ca648e4f19c\": container with ID starting with ec54aba5a312f72ef933183ef96efd0344f45f7c5076bc4a1e7f1ca648e4f19c not found: ID does not exist" Feb 17 16:48:27 crc kubenswrapper[4672]: I0217 16:48:27.565674 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:48:27 crc kubenswrapper[4672]: I0217 16:48:27.566013 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:48:27 crc kubenswrapper[4672]: I0217 16:48:27.959856 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d2acacf-9fb3-4f08-a7d0-797f8372a161" path="/var/lib/kubelet/pods/9d2acacf-9fb3-4f08-a7d0-797f8372a161/volumes" Feb 17 16:48:32 crc kubenswrapper[4672]: E0217 16:48:32.947286 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" 
pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:48:34 crc kubenswrapper[4672]: E0217 16:48:34.070989 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:48:34 crc kubenswrapper[4672]: E0217 16:48:34.071448 4672 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:48:34 crc kubenswrapper[4672]: E0217 16:48:34.071655 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66h7h644h64ch5f8h565hfch5dh56chfdh8hfdh5b5h567h6dh665h557h74h665hcbh96h659h554h589h57fh5d9h55h564hcfh5dhffhfdq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tx4bs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(9e58ce9b-ddd5-42bb-8e07-08a22c8871a5): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:48:34 crc kubenswrapper[4672]: E0217 16:48:34.072928 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:48:43 crc kubenswrapper[4672]: E0217 16:48:43.947780 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:48:47 crc kubenswrapper[4672]: E0217 16:48:47.949002 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:48:56 crc kubenswrapper[4672]: E0217 16:48:56.948797 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:48:57 crc kubenswrapper[4672]: I0217 16:48:57.566846 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:48:57 crc kubenswrapper[4672]: I0217 16:48:57.566961 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:48:59 crc kubenswrapper[4672]: E0217 16:48:59.948355 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:49:10 crc kubenswrapper[4672]: E0217 16:49:10.947263 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:49:10 crc kubenswrapper[4672]: E0217 16:49:10.947554 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:49:13 crc kubenswrapper[4672]: I0217 16:49:13.374727 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-264v6"] Feb 17 16:49:13 crc kubenswrapper[4672]: E0217 16:49:13.375603 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d2acacf-9fb3-4f08-a7d0-797f8372a161" containerName="registry-server" Feb 17 16:49:13 crc kubenswrapper[4672]: I0217 16:49:13.375620 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d2acacf-9fb3-4f08-a7d0-797f8372a161" containerName="registry-server" Feb 17 16:49:13 crc kubenswrapper[4672]: E0217 16:49:13.375651 4672 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="9d2acacf-9fb3-4f08-a7d0-797f8372a161" containerName="extract-content" Feb 17 16:49:13 crc kubenswrapper[4672]: I0217 16:49:13.375660 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d2acacf-9fb3-4f08-a7d0-797f8372a161" containerName="extract-content" Feb 17 16:49:13 crc kubenswrapper[4672]: E0217 16:49:13.375674 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d2acacf-9fb3-4f08-a7d0-797f8372a161" containerName="extract-utilities" Feb 17 16:49:13 crc kubenswrapper[4672]: I0217 16:49:13.375684 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d2acacf-9fb3-4f08-a7d0-797f8372a161" containerName="extract-utilities" Feb 17 16:49:13 crc kubenswrapper[4672]: I0217 16:49:13.375924 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d2acacf-9fb3-4f08-a7d0-797f8372a161" containerName="registry-server" Feb 17 16:49:13 crc kubenswrapper[4672]: I0217 16:49:13.377802 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-264v6" Feb 17 16:49:13 crc kubenswrapper[4672]: I0217 16:49:13.400562 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-264v6"] Feb 17 16:49:13 crc kubenswrapper[4672]: I0217 16:49:13.502722 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dckt5\" (UniqueName: \"kubernetes.io/projected/7b1ab52d-1671-4118-b99c-a87fa9859f91-kube-api-access-dckt5\") pod \"community-operators-264v6\" (UID: \"7b1ab52d-1671-4118-b99c-a87fa9859f91\") " pod="openshift-marketplace/community-operators-264v6" Feb 17 16:49:13 crc kubenswrapper[4672]: I0217 16:49:13.503344 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b1ab52d-1671-4118-b99c-a87fa9859f91-utilities\") pod \"community-operators-264v6\" (UID: \"7b1ab52d-1671-4118-b99c-a87fa9859f91\") " pod="openshift-marketplace/community-operators-264v6" Feb 17 16:49:13 crc kubenswrapper[4672]: I0217 16:49:13.503561 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b1ab52d-1671-4118-b99c-a87fa9859f91-catalog-content\") pod \"community-operators-264v6\" (UID: \"7b1ab52d-1671-4118-b99c-a87fa9859f91\") " pod="openshift-marketplace/community-operators-264v6" Feb 17 16:49:13 crc kubenswrapper[4672]: I0217 16:49:13.605877 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b1ab52d-1671-4118-b99c-a87fa9859f91-utilities\") pod \"community-operators-264v6\" (UID: \"7b1ab52d-1671-4118-b99c-a87fa9859f91\") " pod="openshift-marketplace/community-operators-264v6" Feb 17 16:49:13 crc kubenswrapper[4672]: I0217 16:49:13.606037 4672 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b1ab52d-1671-4118-b99c-a87fa9859f91-catalog-content\") pod \"community-operators-264v6\" (UID: \"7b1ab52d-1671-4118-b99c-a87fa9859f91\") " pod="openshift-marketplace/community-operators-264v6" Feb 17 16:49:13 crc kubenswrapper[4672]: I0217 16:49:13.606078 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dckt5\" (UniqueName: \"kubernetes.io/projected/7b1ab52d-1671-4118-b99c-a87fa9859f91-kube-api-access-dckt5\") pod \"community-operators-264v6\" (UID: \"7b1ab52d-1671-4118-b99c-a87fa9859f91\") " pod="openshift-marketplace/community-operators-264v6" Feb 17 16:49:13 crc kubenswrapper[4672]: I0217 16:49:13.606385 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b1ab52d-1671-4118-b99c-a87fa9859f91-utilities\") pod \"community-operators-264v6\" (UID: \"7b1ab52d-1671-4118-b99c-a87fa9859f91\") " pod="openshift-marketplace/community-operators-264v6" Feb 17 16:49:13 crc kubenswrapper[4672]: I0217 16:49:13.606623 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b1ab52d-1671-4118-b99c-a87fa9859f91-catalog-content\") pod \"community-operators-264v6\" (UID: \"7b1ab52d-1671-4118-b99c-a87fa9859f91\") " pod="openshift-marketplace/community-operators-264v6" Feb 17 16:49:13 crc kubenswrapper[4672]: I0217 16:49:13.627885 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dckt5\" (UniqueName: \"kubernetes.io/projected/7b1ab52d-1671-4118-b99c-a87fa9859f91-kube-api-access-dckt5\") pod \"community-operators-264v6\" (UID: \"7b1ab52d-1671-4118-b99c-a87fa9859f91\") " pod="openshift-marketplace/community-operators-264v6" Feb 17 16:49:13 crc kubenswrapper[4672]: I0217 16:49:13.696637 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-264v6" Feb 17 16:49:14 crc kubenswrapper[4672]: I0217 16:49:14.253097 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-264v6"] Feb 17 16:49:14 crc kubenswrapper[4672]: W0217 16:49:14.258506 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b1ab52d_1671_4118_b99c_a87fa9859f91.slice/crio-c6aa4e58be2994b7933e525f83a7674cd5a3e09fe65b313bd40ebe8dadc19a4c WatchSource:0}: Error finding container c6aa4e58be2994b7933e525f83a7674cd5a3e09fe65b313bd40ebe8dadc19a4c: Status 404 returned error can't find the container with id c6aa4e58be2994b7933e525f83a7674cd5a3e09fe65b313bd40ebe8dadc19a4c Feb 17 16:49:15 crc kubenswrapper[4672]: I0217 16:49:15.008956 4672 generic.go:334] "Generic (PLEG): container finished" podID="7b1ab52d-1671-4118-b99c-a87fa9859f91" containerID="03859a1795602bee46a1113ae8b0cc2a7b4453c38c1accacdd5df919a8788361" exitCode=0 Feb 17 16:49:15 crc kubenswrapper[4672]: I0217 16:49:15.009006 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-264v6" event={"ID":"7b1ab52d-1671-4118-b99c-a87fa9859f91","Type":"ContainerDied","Data":"03859a1795602bee46a1113ae8b0cc2a7b4453c38c1accacdd5df919a8788361"} Feb 17 16:49:15 crc kubenswrapper[4672]: I0217 16:49:15.009323 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-264v6" event={"ID":"7b1ab52d-1671-4118-b99c-a87fa9859f91","Type":"ContainerStarted","Data":"c6aa4e58be2994b7933e525f83a7674cd5a3e09fe65b313bd40ebe8dadc19a4c"} Feb 17 16:49:15 crc kubenswrapper[4672]: I0217 16:49:15.572283 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cqjtt"] Feb 17 16:49:15 crc kubenswrapper[4672]: I0217 16:49:15.574788 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cqjtt" Feb 17 16:49:15 crc kubenswrapper[4672]: I0217 16:49:15.590050 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cqjtt"] Feb 17 16:49:15 crc kubenswrapper[4672]: I0217 16:49:15.658329 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f06d2f2e-8a88-49ca-9345-815de090431d-utilities\") pod \"certified-operators-cqjtt\" (UID: \"f06d2f2e-8a88-49ca-9345-815de090431d\") " pod="openshift-marketplace/certified-operators-cqjtt" Feb 17 16:49:15 crc kubenswrapper[4672]: I0217 16:49:15.658601 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f06d2f2e-8a88-49ca-9345-815de090431d-catalog-content\") pod \"certified-operators-cqjtt\" (UID: \"f06d2f2e-8a88-49ca-9345-815de090431d\") " pod="openshift-marketplace/certified-operators-cqjtt" Feb 17 16:49:15 crc kubenswrapper[4672]: I0217 16:49:15.658704 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gwb6\" (UniqueName: \"kubernetes.io/projected/f06d2f2e-8a88-49ca-9345-815de090431d-kube-api-access-8gwb6\") pod \"certified-operators-cqjtt\" (UID: \"f06d2f2e-8a88-49ca-9345-815de090431d\") " pod="openshift-marketplace/certified-operators-cqjtt" Feb 17 16:49:15 crc kubenswrapper[4672]: I0217 16:49:15.760551 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f06d2f2e-8a88-49ca-9345-815de090431d-utilities\") pod \"certified-operators-cqjtt\" (UID: \"f06d2f2e-8a88-49ca-9345-815de090431d\") " pod="openshift-marketplace/certified-operators-cqjtt" Feb 17 16:49:15 crc kubenswrapper[4672]: I0217 16:49:15.760624 4672 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f06d2f2e-8a88-49ca-9345-815de090431d-catalog-content\") pod \"certified-operators-cqjtt\" (UID: \"f06d2f2e-8a88-49ca-9345-815de090431d\") " pod="openshift-marketplace/certified-operators-cqjtt" Feb 17 16:49:15 crc kubenswrapper[4672]: I0217 16:49:15.760652 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gwb6\" (UniqueName: \"kubernetes.io/projected/f06d2f2e-8a88-49ca-9345-815de090431d-kube-api-access-8gwb6\") pod \"certified-operators-cqjtt\" (UID: \"f06d2f2e-8a88-49ca-9345-815de090431d\") " pod="openshift-marketplace/certified-operators-cqjtt" Feb 17 16:49:15 crc kubenswrapper[4672]: I0217 16:49:15.761359 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f06d2f2e-8a88-49ca-9345-815de090431d-utilities\") pod \"certified-operators-cqjtt\" (UID: \"f06d2f2e-8a88-49ca-9345-815de090431d\") " pod="openshift-marketplace/certified-operators-cqjtt" Feb 17 16:49:15 crc kubenswrapper[4672]: I0217 16:49:15.761680 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f06d2f2e-8a88-49ca-9345-815de090431d-catalog-content\") pod \"certified-operators-cqjtt\" (UID: \"f06d2f2e-8a88-49ca-9345-815de090431d\") " pod="openshift-marketplace/certified-operators-cqjtt" Feb 17 16:49:15 crc kubenswrapper[4672]: I0217 16:49:15.784560 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qmhpr"] Feb 17 16:49:15 crc kubenswrapper[4672]: I0217 16:49:15.787287 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qmhpr" Feb 17 16:49:15 crc kubenswrapper[4672]: I0217 16:49:15.787283 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gwb6\" (UniqueName: \"kubernetes.io/projected/f06d2f2e-8a88-49ca-9345-815de090431d-kube-api-access-8gwb6\") pod \"certified-operators-cqjtt\" (UID: \"f06d2f2e-8a88-49ca-9345-815de090431d\") " pod="openshift-marketplace/certified-operators-cqjtt" Feb 17 16:49:15 crc kubenswrapper[4672]: I0217 16:49:15.807543 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmhpr"] Feb 17 16:49:15 crc kubenswrapper[4672]: I0217 16:49:15.862260 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0476848c-e542-4f12-9fdf-4b5ef149886f-catalog-content\") pod \"redhat-marketplace-qmhpr\" (UID: \"0476848c-e542-4f12-9fdf-4b5ef149886f\") " pod="openshift-marketplace/redhat-marketplace-qmhpr" Feb 17 16:49:15 crc kubenswrapper[4672]: I0217 16:49:15.862342 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vbj4\" (UniqueName: \"kubernetes.io/projected/0476848c-e542-4f12-9fdf-4b5ef149886f-kube-api-access-6vbj4\") pod \"redhat-marketplace-qmhpr\" (UID: \"0476848c-e542-4f12-9fdf-4b5ef149886f\") " pod="openshift-marketplace/redhat-marketplace-qmhpr" Feb 17 16:49:15 crc kubenswrapper[4672]: I0217 16:49:15.862468 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0476848c-e542-4f12-9fdf-4b5ef149886f-utilities\") pod \"redhat-marketplace-qmhpr\" (UID: \"0476848c-e542-4f12-9fdf-4b5ef149886f\") " pod="openshift-marketplace/redhat-marketplace-qmhpr" Feb 17 16:49:15 crc kubenswrapper[4672]: I0217 16:49:15.954639 4672 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/certified-operators-cqjtt" Feb 17 16:49:15 crc kubenswrapper[4672]: I0217 16:49:15.963804 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0476848c-e542-4f12-9fdf-4b5ef149886f-utilities\") pod \"redhat-marketplace-qmhpr\" (UID: \"0476848c-e542-4f12-9fdf-4b5ef149886f\") " pod="openshift-marketplace/redhat-marketplace-qmhpr" Feb 17 16:49:15 crc kubenswrapper[4672]: I0217 16:49:15.963974 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0476848c-e542-4f12-9fdf-4b5ef149886f-catalog-content\") pod \"redhat-marketplace-qmhpr\" (UID: \"0476848c-e542-4f12-9fdf-4b5ef149886f\") " pod="openshift-marketplace/redhat-marketplace-qmhpr" Feb 17 16:49:15 crc kubenswrapper[4672]: I0217 16:49:15.964004 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vbj4\" (UniqueName: \"kubernetes.io/projected/0476848c-e542-4f12-9fdf-4b5ef149886f-kube-api-access-6vbj4\") pod \"redhat-marketplace-qmhpr\" (UID: \"0476848c-e542-4f12-9fdf-4b5ef149886f\") " pod="openshift-marketplace/redhat-marketplace-qmhpr" Feb 17 16:49:15 crc kubenswrapper[4672]: I0217 16:49:15.964459 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0476848c-e542-4f12-9fdf-4b5ef149886f-catalog-content\") pod \"redhat-marketplace-qmhpr\" (UID: \"0476848c-e542-4f12-9fdf-4b5ef149886f\") " pod="openshift-marketplace/redhat-marketplace-qmhpr" Feb 17 16:49:15 crc kubenswrapper[4672]: I0217 16:49:15.964748 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0476848c-e542-4f12-9fdf-4b5ef149886f-utilities\") pod \"redhat-marketplace-qmhpr\" (UID: \"0476848c-e542-4f12-9fdf-4b5ef149886f\") " 
pod="openshift-marketplace/redhat-marketplace-qmhpr" Feb 17 16:49:15 crc kubenswrapper[4672]: I0217 16:49:15.984343 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vbj4\" (UniqueName: \"kubernetes.io/projected/0476848c-e542-4f12-9fdf-4b5ef149886f-kube-api-access-6vbj4\") pod \"redhat-marketplace-qmhpr\" (UID: \"0476848c-e542-4f12-9fdf-4b5ef149886f\") " pod="openshift-marketplace/redhat-marketplace-qmhpr" Feb 17 16:49:16 crc kubenswrapper[4672]: I0217 16:49:16.028977 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-264v6" event={"ID":"7b1ab52d-1671-4118-b99c-a87fa9859f91","Type":"ContainerStarted","Data":"56418438c12751975a17cb428e9b4a53084ccddd268e81df2c09a01aa3c7fb68"} Feb 17 16:49:16 crc kubenswrapper[4672]: I0217 16:49:16.155103 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qmhpr" Feb 17 16:49:16 crc kubenswrapper[4672]: I0217 16:49:16.516626 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cqjtt"] Feb 17 16:49:16 crc kubenswrapper[4672]: W0217 16:49:16.519650 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf06d2f2e_8a88_49ca_9345_815de090431d.slice/crio-d7ec346edf6805e3b56a848562316388229e16018fcaee39af4298c02e713db4 WatchSource:0}: Error finding container d7ec346edf6805e3b56a848562316388229e16018fcaee39af4298c02e713db4: Status 404 returned error can't find the container with id d7ec346edf6805e3b56a848562316388229e16018fcaee39af4298c02e713db4 Feb 17 16:49:16 crc kubenswrapper[4672]: I0217 16:49:16.725203 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmhpr"] Feb 17 16:49:16 crc kubenswrapper[4672]: W0217 16:49:16.725444 4672 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0476848c_e542_4f12_9fdf_4b5ef149886f.slice/crio-8ca4c423252e67ba08e2f3f9a0d30ef0a3a1843c9fce56759c9dc8c0cf683981 WatchSource:0}: Error finding container 8ca4c423252e67ba08e2f3f9a0d30ef0a3a1843c9fce56759c9dc8c0cf683981: Status 404 returned error can't find the container with id 8ca4c423252e67ba08e2f3f9a0d30ef0a3a1843c9fce56759c9dc8c0cf683981 Feb 17 16:49:17 crc kubenswrapper[4672]: I0217 16:49:17.042747 4672 generic.go:334] "Generic (PLEG): container finished" podID="7b1ab52d-1671-4118-b99c-a87fa9859f91" containerID="56418438c12751975a17cb428e9b4a53084ccddd268e81df2c09a01aa3c7fb68" exitCode=0 Feb 17 16:49:17 crc kubenswrapper[4672]: I0217 16:49:17.042798 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-264v6" event={"ID":"7b1ab52d-1671-4118-b99c-a87fa9859f91","Type":"ContainerDied","Data":"56418438c12751975a17cb428e9b4a53084ccddd268e81df2c09a01aa3c7fb68"} Feb 17 16:49:17 crc kubenswrapper[4672]: I0217 16:49:17.045804 4672 generic.go:334] "Generic (PLEG): container finished" podID="f06d2f2e-8a88-49ca-9345-815de090431d" containerID="58ac56515eb971fa5a54674cace2a561ca3f12f893a7ad1770d1790d20963fa9" exitCode=0 Feb 17 16:49:17 crc kubenswrapper[4672]: I0217 16:49:17.045866 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cqjtt" event={"ID":"f06d2f2e-8a88-49ca-9345-815de090431d","Type":"ContainerDied","Data":"58ac56515eb971fa5a54674cace2a561ca3f12f893a7ad1770d1790d20963fa9"} Feb 17 16:49:17 crc kubenswrapper[4672]: I0217 16:49:17.045892 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cqjtt" event={"ID":"f06d2f2e-8a88-49ca-9345-815de090431d","Type":"ContainerStarted","Data":"d7ec346edf6805e3b56a848562316388229e16018fcaee39af4298c02e713db4"} Feb 17 16:49:17 crc kubenswrapper[4672]: I0217 16:49:17.047948 4672 generic.go:334] "Generic (PLEG): container 
finished" podID="0476848c-e542-4f12-9fdf-4b5ef149886f" containerID="669d99a858966adc246dc1b583939cdc6c606e5bd1437cea5faf1e14deeb1211" exitCode=0 Feb 17 16:49:17 crc kubenswrapper[4672]: I0217 16:49:17.047973 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmhpr" event={"ID":"0476848c-e542-4f12-9fdf-4b5ef149886f","Type":"ContainerDied","Data":"669d99a858966adc246dc1b583939cdc6c606e5bd1437cea5faf1e14deeb1211"} Feb 17 16:49:17 crc kubenswrapper[4672]: I0217 16:49:17.047991 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmhpr" event={"ID":"0476848c-e542-4f12-9fdf-4b5ef149886f","Type":"ContainerStarted","Data":"8ca4c423252e67ba08e2f3f9a0d30ef0a3a1843c9fce56759c9dc8c0cf683981"} Feb 17 16:49:18 crc kubenswrapper[4672]: I0217 16:49:18.064709 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cqjtt" event={"ID":"f06d2f2e-8a88-49ca-9345-815de090431d","Type":"ContainerStarted","Data":"251ef7966da033880b0b24a69ca11288dbd740eb4822d366a944df7a63b655a7"} Feb 17 16:49:18 crc kubenswrapper[4672]: I0217 16:49:18.071623 4672 generic.go:334] "Generic (PLEG): container finished" podID="0476848c-e542-4f12-9fdf-4b5ef149886f" containerID="941854d2f9cb107f9cf7e377797e961b0306be1608e72a2ccccf4fb2511a4cab" exitCode=0 Feb 17 16:49:18 crc kubenswrapper[4672]: I0217 16:49:18.071694 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmhpr" event={"ID":"0476848c-e542-4f12-9fdf-4b5ef149886f","Type":"ContainerDied","Data":"941854d2f9cb107f9cf7e377797e961b0306be1608e72a2ccccf4fb2511a4cab"} Feb 17 16:49:18 crc kubenswrapper[4672]: I0217 16:49:18.080287 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-264v6" 
event={"ID":"7b1ab52d-1671-4118-b99c-a87fa9859f91","Type":"ContainerStarted","Data":"9134dabb862a5636e9e65ed1cae4a91db3ae676abe3662f9ff78468a5efd9c1f"} Feb 17 16:49:18 crc kubenswrapper[4672]: I0217 16:49:18.163634 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-264v6" podStartSLOduration=2.705068172 podStartE2EDuration="5.16361276s" podCreationTimestamp="2026-02-17 16:49:13 +0000 UTC" firstStartedPulling="2026-02-17 16:49:15.010804236 +0000 UTC m=+2763.764892968" lastFinishedPulling="2026-02-17 16:49:17.469348824 +0000 UTC m=+2766.223437556" observedRunningTime="2026-02-17 16:49:18.108308569 +0000 UTC m=+2766.862397321" watchObservedRunningTime="2026-02-17 16:49:18.16361276 +0000 UTC m=+2766.917701502" Feb 17 16:49:19 crc kubenswrapper[4672]: I0217 16:49:19.097120 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmhpr" event={"ID":"0476848c-e542-4f12-9fdf-4b5ef149886f","Type":"ContainerStarted","Data":"6597092ac7c6354e0c39df27f877ad4a9f8193ccaf015b437d3d028ab926317a"} Feb 17 16:49:19 crc kubenswrapper[4672]: I0217 16:49:19.119365 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qmhpr" podStartSLOduration=2.712219966 podStartE2EDuration="4.119349077s" podCreationTimestamp="2026-02-17 16:49:15 +0000 UTC" firstStartedPulling="2026-02-17 16:49:17.05607424 +0000 UTC m=+2765.810162972" lastFinishedPulling="2026-02-17 16:49:18.463203341 +0000 UTC m=+2767.217292083" observedRunningTime="2026-02-17 16:49:19.115382073 +0000 UTC m=+2767.869470835" watchObservedRunningTime="2026-02-17 16:49:19.119349077 +0000 UTC m=+2767.873437809" Feb 17 16:49:21 crc kubenswrapper[4672]: I0217 16:49:21.117231 4672 generic.go:334] "Generic (PLEG): container finished" podID="f06d2f2e-8a88-49ca-9345-815de090431d" containerID="251ef7966da033880b0b24a69ca11288dbd740eb4822d366a944df7a63b655a7" exitCode=0 
Feb 17 16:49:21 crc kubenswrapper[4672]: I0217 16:49:21.117365 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cqjtt" event={"ID":"f06d2f2e-8a88-49ca-9345-815de090431d","Type":"ContainerDied","Data":"251ef7966da033880b0b24a69ca11288dbd740eb4822d366a944df7a63b655a7"} Feb 17 16:49:21 crc kubenswrapper[4672]: E0217 16:49:21.962208 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:49:22 crc kubenswrapper[4672]: I0217 16:49:22.129587 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cqjtt" event={"ID":"f06d2f2e-8a88-49ca-9345-815de090431d","Type":"ContainerStarted","Data":"0798e81e45bd21050c5a6e0e06f7092135e19564944a766aa4579bff50583c2d"} Feb 17 16:49:22 crc kubenswrapper[4672]: I0217 16:49:22.147550 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cqjtt" podStartSLOduration=2.388746489 podStartE2EDuration="7.147528201s" podCreationTimestamp="2026-02-17 16:49:15 +0000 UTC" firstStartedPulling="2026-02-17 16:49:17.047491095 +0000 UTC m=+2765.801579827" lastFinishedPulling="2026-02-17 16:49:21.806272807 +0000 UTC m=+2770.560361539" observedRunningTime="2026-02-17 16:49:22.144098541 +0000 UTC m=+2770.898187273" watchObservedRunningTime="2026-02-17 16:49:22.147528201 +0000 UTC m=+2770.901616923" Feb 17 16:49:23 crc kubenswrapper[4672]: I0217 16:49:23.697778 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-264v6" Feb 17 16:49:23 crc kubenswrapper[4672]: I0217 16:49:23.698152 4672 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-264v6" Feb 17 16:49:23 crc kubenswrapper[4672]: I0217 16:49:23.765327 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-264v6" Feb 17 16:49:24 crc kubenswrapper[4672]: I0217 16:49:24.203250 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-264v6" Feb 17 16:49:25 crc kubenswrapper[4672]: I0217 16:49:25.360440 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-264v6"] Feb 17 16:49:25 crc kubenswrapper[4672]: E0217 16:49:25.947121 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:49:25 crc kubenswrapper[4672]: I0217 16:49:25.964314 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cqjtt" Feb 17 16:49:25 crc kubenswrapper[4672]: I0217 16:49:25.964401 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cqjtt" Feb 17 16:49:26 crc kubenswrapper[4672]: I0217 16:49:26.020570 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cqjtt" Feb 17 16:49:26 crc kubenswrapper[4672]: I0217 16:49:26.156048 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qmhpr" Feb 17 16:49:26 crc kubenswrapper[4672]: I0217 16:49:26.156345 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qmhpr" Feb 17 
16:49:26 crc kubenswrapper[4672]: I0217 16:49:26.209251 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qmhpr"
Feb 17 16:49:27 crc kubenswrapper[4672]: I0217 16:49:27.174358 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-264v6" podUID="7b1ab52d-1671-4118-b99c-a87fa9859f91" containerName="registry-server" containerID="cri-o://9134dabb862a5636e9e65ed1cae4a91db3ae676abe3662f9ff78468a5efd9c1f" gracePeriod=2
Feb 17 16:49:27 crc kubenswrapper[4672]: I0217 16:49:27.242097 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qmhpr"
Feb 17 16:49:27 crc kubenswrapper[4672]: I0217 16:49:27.566603 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 16:49:27 crc kubenswrapper[4672]: I0217 16:49:27.566928 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 16:49:27 crc kubenswrapper[4672]: I0217 16:49:27.566973 4672 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs"
Feb 17 16:49:27 crc kubenswrapper[4672]: I0217 16:49:27.567941 4672 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e1b57867ae3b2d0f7ae69d5114a296a48281c1419c2e4d2752760b9d915f000f"} pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 17 16:49:27 crc kubenswrapper[4672]: I0217 16:49:27.568018 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" containerID="cri-o://e1b57867ae3b2d0f7ae69d5114a296a48281c1419c2e4d2752760b9d915f000f" gracePeriod=600
Feb 17 16:49:27 crc kubenswrapper[4672]: I0217 16:49:27.833740 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-264v6"
Feb 17 16:49:27 crc kubenswrapper[4672]: I0217 16:49:27.940039 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b1ab52d-1671-4118-b99c-a87fa9859f91-utilities\") pod \"7b1ab52d-1671-4118-b99c-a87fa9859f91\" (UID: \"7b1ab52d-1671-4118-b99c-a87fa9859f91\") "
Feb 17 16:49:27 crc kubenswrapper[4672]: I0217 16:49:27.940337 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b1ab52d-1671-4118-b99c-a87fa9859f91-catalog-content\") pod \"7b1ab52d-1671-4118-b99c-a87fa9859f91\" (UID: \"7b1ab52d-1671-4118-b99c-a87fa9859f91\") "
Feb 17 16:49:27 crc kubenswrapper[4672]: I0217 16:49:27.940472 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dckt5\" (UniqueName: \"kubernetes.io/projected/7b1ab52d-1671-4118-b99c-a87fa9859f91-kube-api-access-dckt5\") pod \"7b1ab52d-1671-4118-b99c-a87fa9859f91\" (UID: \"7b1ab52d-1671-4118-b99c-a87fa9859f91\") "
Feb 17 16:49:27 crc kubenswrapper[4672]: I0217 16:49:27.941179 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b1ab52d-1671-4118-b99c-a87fa9859f91-utilities" (OuterVolumeSpecName: "utilities") pod "7b1ab52d-1671-4118-b99c-a87fa9859f91" (UID: "7b1ab52d-1671-4118-b99c-a87fa9859f91"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:49:27 crc kubenswrapper[4672]: I0217 16:49:27.946017 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b1ab52d-1671-4118-b99c-a87fa9859f91-kube-api-access-dckt5" (OuterVolumeSpecName: "kube-api-access-dckt5") pod "7b1ab52d-1671-4118-b99c-a87fa9859f91" (UID: "7b1ab52d-1671-4118-b99c-a87fa9859f91"). InnerVolumeSpecName "kube-api-access-dckt5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:49:27 crc kubenswrapper[4672]: I0217 16:49:27.991738 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b1ab52d-1671-4118-b99c-a87fa9859f91-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b1ab52d-1671-4118-b99c-a87fa9859f91" (UID: "7b1ab52d-1671-4118-b99c-a87fa9859f91"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:49:28 crc kubenswrapper[4672]: I0217 16:49:28.041730 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b1ab52d-1671-4118-b99c-a87fa9859f91-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 16:49:28 crc kubenswrapper[4672]: I0217 16:49:28.041768 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dckt5\" (UniqueName: \"kubernetes.io/projected/7b1ab52d-1671-4118-b99c-a87fa9859f91-kube-api-access-dckt5\") on node \"crc\" DevicePath \"\""
Feb 17 16:49:28 crc kubenswrapper[4672]: I0217 16:49:28.041783 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b1ab52d-1671-4118-b99c-a87fa9859f91-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 16:49:28 crc kubenswrapper[4672]: I0217 16:49:28.186890 4672 generic.go:334] "Generic (PLEG): container finished" podID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerID="e1b57867ae3b2d0f7ae69d5114a296a48281c1419c2e4d2752760b9d915f000f" exitCode=0
Feb 17 16:49:28 crc kubenswrapper[4672]: I0217 16:49:28.186967 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerDied","Data":"e1b57867ae3b2d0f7ae69d5114a296a48281c1419c2e4d2752760b9d915f000f"}
Feb 17 16:49:28 crc kubenswrapper[4672]: I0217 16:49:28.187288 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerStarted","Data":"9dea5ec410e293f594728e3d38216f730173d601a89a768840e0fb078db09fcc"}
Feb 17 16:49:28 crc kubenswrapper[4672]: I0217 16:49:28.187342 4672 scope.go:117] "RemoveContainer" containerID="5ec360b5c785e82bf42002bb2ec43e9b549142da918f8b7cc88ceed207ebfec1"
Feb 17 16:49:28 crc kubenswrapper[4672]: I0217 16:49:28.190621 4672 generic.go:334] "Generic (PLEG): container finished" podID="7b1ab52d-1671-4118-b99c-a87fa9859f91" containerID="9134dabb862a5636e9e65ed1cae4a91db3ae676abe3662f9ff78468a5efd9c1f" exitCode=0
Feb 17 16:49:28 crc kubenswrapper[4672]: I0217 16:49:28.190673 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-264v6"
Feb 17 16:49:28 crc kubenswrapper[4672]: I0217 16:49:28.190735 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-264v6" event={"ID":"7b1ab52d-1671-4118-b99c-a87fa9859f91","Type":"ContainerDied","Data":"9134dabb862a5636e9e65ed1cae4a91db3ae676abe3662f9ff78468a5efd9c1f"}
Feb 17 16:49:28 crc kubenswrapper[4672]: I0217 16:49:28.190810 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-264v6" event={"ID":"7b1ab52d-1671-4118-b99c-a87fa9859f91","Type":"ContainerDied","Data":"c6aa4e58be2994b7933e525f83a7674cd5a3e09fe65b313bd40ebe8dadc19a4c"}
Feb 17 16:49:28 crc kubenswrapper[4672]: I0217 16:49:28.227657 4672 scope.go:117] "RemoveContainer" containerID="9134dabb862a5636e9e65ed1cae4a91db3ae676abe3662f9ff78468a5efd9c1f"
Feb 17 16:49:28 crc kubenswrapper[4672]: I0217 16:49:28.235582 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-264v6"]
Feb 17 16:49:28 crc kubenswrapper[4672]: I0217 16:49:28.244461 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-264v6"]
Feb 17 16:49:28 crc kubenswrapper[4672]: I0217 16:49:28.250715 4672 scope.go:117] "RemoveContainer" containerID="56418438c12751975a17cb428e9b4a53084ccddd268e81df2c09a01aa3c7fb68"
Feb 17 16:49:28 crc kubenswrapper[4672]: I0217 16:49:28.282670 4672 scope.go:117] "RemoveContainer" containerID="03859a1795602bee46a1113ae8b0cc2a7b4453c38c1accacdd5df919a8788361"
Feb 17 16:49:28 crc kubenswrapper[4672]: I0217 16:49:28.365478 4672 scope.go:117] "RemoveContainer" containerID="9134dabb862a5636e9e65ed1cae4a91db3ae676abe3662f9ff78468a5efd9c1f"
Feb 17 16:49:28 crc kubenswrapper[4672]: E0217 16:49:28.365889 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9134dabb862a5636e9e65ed1cae4a91db3ae676abe3662f9ff78468a5efd9c1f\": container with ID starting with 9134dabb862a5636e9e65ed1cae4a91db3ae676abe3662f9ff78468a5efd9c1f not found: ID does not exist" containerID="9134dabb862a5636e9e65ed1cae4a91db3ae676abe3662f9ff78468a5efd9c1f"
Feb 17 16:49:28 crc kubenswrapper[4672]: I0217 16:49:28.365915 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9134dabb862a5636e9e65ed1cae4a91db3ae676abe3662f9ff78468a5efd9c1f"} err="failed to get container status \"9134dabb862a5636e9e65ed1cae4a91db3ae676abe3662f9ff78468a5efd9c1f\": rpc error: code = NotFound desc = could not find container \"9134dabb862a5636e9e65ed1cae4a91db3ae676abe3662f9ff78468a5efd9c1f\": container with ID starting with 9134dabb862a5636e9e65ed1cae4a91db3ae676abe3662f9ff78468a5efd9c1f not found: ID does not exist"
Feb 17 16:49:28 crc kubenswrapper[4672]: I0217 16:49:28.365946 4672 scope.go:117] "RemoveContainer" containerID="56418438c12751975a17cb428e9b4a53084ccddd268e81df2c09a01aa3c7fb68"
Feb 17 16:49:28 crc kubenswrapper[4672]: E0217 16:49:28.366245 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56418438c12751975a17cb428e9b4a53084ccddd268e81df2c09a01aa3c7fb68\": container with ID starting with 56418438c12751975a17cb428e9b4a53084ccddd268e81df2c09a01aa3c7fb68 not found: ID does not exist" containerID="56418438c12751975a17cb428e9b4a53084ccddd268e81df2c09a01aa3c7fb68"
Feb 17 16:49:28 crc kubenswrapper[4672]: I0217 16:49:28.366277 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56418438c12751975a17cb428e9b4a53084ccddd268e81df2c09a01aa3c7fb68"} err="failed to get container status \"56418438c12751975a17cb428e9b4a53084ccddd268e81df2c09a01aa3c7fb68\": rpc error: code = NotFound desc = could not find container \"56418438c12751975a17cb428e9b4a53084ccddd268e81df2c09a01aa3c7fb68\": container with ID starting with 56418438c12751975a17cb428e9b4a53084ccddd268e81df2c09a01aa3c7fb68 not found: ID does not exist"
Feb 17 16:49:28 crc kubenswrapper[4672]: I0217 16:49:28.366296 4672 scope.go:117] "RemoveContainer" containerID="03859a1795602bee46a1113ae8b0cc2a7b4453c38c1accacdd5df919a8788361"
Feb 17 16:49:28 crc kubenswrapper[4672]: E0217 16:49:28.366498 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03859a1795602bee46a1113ae8b0cc2a7b4453c38c1accacdd5df919a8788361\": container with ID starting with 03859a1795602bee46a1113ae8b0cc2a7b4453c38c1accacdd5df919a8788361 not found: ID does not exist" containerID="03859a1795602bee46a1113ae8b0cc2a7b4453c38c1accacdd5df919a8788361"
Feb 17 16:49:28 crc kubenswrapper[4672]: I0217 16:49:28.366584 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03859a1795602bee46a1113ae8b0cc2a7b4453c38c1accacdd5df919a8788361"} err="failed to get container status \"03859a1795602bee46a1113ae8b0cc2a7b4453c38c1accacdd5df919a8788361\": rpc error: code = NotFound desc = could not find container \"03859a1795602bee46a1113ae8b0cc2a7b4453c38c1accacdd5df919a8788361\": container with ID starting with 03859a1795602bee46a1113ae8b0cc2a7b4453c38c1accacdd5df919a8788361 not found: ID does not exist"
Feb 17 16:49:29 crc kubenswrapper[4672]: I0217 16:49:29.361149 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmhpr"]
Feb 17 16:49:29 crc kubenswrapper[4672]: I0217 16:49:29.361804 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qmhpr" podUID="0476848c-e542-4f12-9fdf-4b5ef149886f" containerName="registry-server" containerID="cri-o://6597092ac7c6354e0c39df27f877ad4a9f8193ccaf015b437d3d028ab926317a" gracePeriod=2
Feb 17 16:49:29 crc kubenswrapper[4672]: I0217 16:49:29.939660 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qmhpr"
Feb 17 16:49:29 crc kubenswrapper[4672]: I0217 16:49:29.957950 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b1ab52d-1671-4118-b99c-a87fa9859f91" path="/var/lib/kubelet/pods/7b1ab52d-1671-4118-b99c-a87fa9859f91/volumes"
Feb 17 16:49:30 crc kubenswrapper[4672]: I0217 16:49:30.099033 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0476848c-e542-4f12-9fdf-4b5ef149886f-catalog-content\") pod \"0476848c-e542-4f12-9fdf-4b5ef149886f\" (UID: \"0476848c-e542-4f12-9fdf-4b5ef149886f\") "
Feb 17 16:49:30 crc kubenswrapper[4672]: I0217 16:49:30.108794 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vbj4\" (UniqueName: \"kubernetes.io/projected/0476848c-e542-4f12-9fdf-4b5ef149886f-kube-api-access-6vbj4\") pod \"0476848c-e542-4f12-9fdf-4b5ef149886f\" (UID: \"0476848c-e542-4f12-9fdf-4b5ef149886f\") "
Feb 17 16:49:30 crc kubenswrapper[4672]: I0217 16:49:30.109067 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0476848c-e542-4f12-9fdf-4b5ef149886f-utilities\") pod \"0476848c-e542-4f12-9fdf-4b5ef149886f\" (UID: \"0476848c-e542-4f12-9fdf-4b5ef149886f\") "
Feb 17 16:49:30 crc kubenswrapper[4672]: I0217 16:49:30.109820 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0476848c-e542-4f12-9fdf-4b5ef149886f-utilities" (OuterVolumeSpecName: "utilities") pod "0476848c-e542-4f12-9fdf-4b5ef149886f" (UID: "0476848c-e542-4f12-9fdf-4b5ef149886f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:49:30 crc kubenswrapper[4672]: I0217 16:49:30.109967 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0476848c-e542-4f12-9fdf-4b5ef149886f-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 16:49:30 crc kubenswrapper[4672]: I0217 16:49:30.126365 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0476848c-e542-4f12-9fdf-4b5ef149886f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0476848c-e542-4f12-9fdf-4b5ef149886f" (UID: "0476848c-e542-4f12-9fdf-4b5ef149886f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:49:30 crc kubenswrapper[4672]: I0217 16:49:30.130289 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0476848c-e542-4f12-9fdf-4b5ef149886f-kube-api-access-6vbj4" (OuterVolumeSpecName: "kube-api-access-6vbj4") pod "0476848c-e542-4f12-9fdf-4b5ef149886f" (UID: "0476848c-e542-4f12-9fdf-4b5ef149886f"). InnerVolumeSpecName "kube-api-access-6vbj4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:49:30 crc kubenswrapper[4672]: I0217 16:49:30.212344 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0476848c-e542-4f12-9fdf-4b5ef149886f-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 16:49:30 crc kubenswrapper[4672]: I0217 16:49:30.212388 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vbj4\" (UniqueName: \"kubernetes.io/projected/0476848c-e542-4f12-9fdf-4b5ef149886f-kube-api-access-6vbj4\") on node \"crc\" DevicePath \"\""
Feb 17 16:49:30 crc kubenswrapper[4672]: I0217 16:49:30.222440 4672 generic.go:334] "Generic (PLEG): container finished" podID="0476848c-e542-4f12-9fdf-4b5ef149886f" containerID="6597092ac7c6354e0c39df27f877ad4a9f8193ccaf015b437d3d028ab926317a" exitCode=0
Feb 17 16:49:30 crc kubenswrapper[4672]: I0217 16:49:30.222498 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmhpr" event={"ID":"0476848c-e542-4f12-9fdf-4b5ef149886f","Type":"ContainerDied","Data":"6597092ac7c6354e0c39df27f877ad4a9f8193ccaf015b437d3d028ab926317a"}
Feb 17 16:49:30 crc kubenswrapper[4672]: I0217 16:49:30.222568 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qmhpr"
Feb 17 16:49:30 crc kubenswrapper[4672]: I0217 16:49:30.222613 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmhpr" event={"ID":"0476848c-e542-4f12-9fdf-4b5ef149886f","Type":"ContainerDied","Data":"8ca4c423252e67ba08e2f3f9a0d30ef0a3a1843c9fce56759c9dc8c0cf683981"}
Feb 17 16:49:30 crc kubenswrapper[4672]: I0217 16:49:30.222652 4672 scope.go:117] "RemoveContainer" containerID="6597092ac7c6354e0c39df27f877ad4a9f8193ccaf015b437d3d028ab926317a"
Feb 17 16:49:30 crc kubenswrapper[4672]: I0217 16:49:30.249164 4672 scope.go:117] "RemoveContainer" containerID="941854d2f9cb107f9cf7e377797e961b0306be1608e72a2ccccf4fb2511a4cab"
Feb 17 16:49:30 crc kubenswrapper[4672]: I0217 16:49:30.262950 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmhpr"]
Feb 17 16:49:30 crc kubenswrapper[4672]: I0217 16:49:30.274308 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmhpr"]
Feb 17 16:49:30 crc kubenswrapper[4672]: I0217 16:49:30.276316 4672 scope.go:117] "RemoveContainer" containerID="669d99a858966adc246dc1b583939cdc6c606e5bd1437cea5faf1e14deeb1211"
Feb 17 16:49:30 crc kubenswrapper[4672]: I0217 16:49:30.322363 4672 scope.go:117] "RemoveContainer" containerID="6597092ac7c6354e0c39df27f877ad4a9f8193ccaf015b437d3d028ab926317a"
Feb 17 16:49:30 crc kubenswrapper[4672]: E0217 16:49:30.322773 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6597092ac7c6354e0c39df27f877ad4a9f8193ccaf015b437d3d028ab926317a\": container with ID starting with 6597092ac7c6354e0c39df27f877ad4a9f8193ccaf015b437d3d028ab926317a not found: ID does not exist" containerID="6597092ac7c6354e0c39df27f877ad4a9f8193ccaf015b437d3d028ab926317a"
Feb 17 16:49:30 crc kubenswrapper[4672]: I0217 16:49:30.322840 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6597092ac7c6354e0c39df27f877ad4a9f8193ccaf015b437d3d028ab926317a"} err="failed to get container status \"6597092ac7c6354e0c39df27f877ad4a9f8193ccaf015b437d3d028ab926317a\": rpc error: code = NotFound desc = could not find container \"6597092ac7c6354e0c39df27f877ad4a9f8193ccaf015b437d3d028ab926317a\": container with ID starting with 6597092ac7c6354e0c39df27f877ad4a9f8193ccaf015b437d3d028ab926317a not found: ID does not exist"
Feb 17 16:49:30 crc kubenswrapper[4672]: I0217 16:49:30.322869 4672 scope.go:117] "RemoveContainer" containerID="941854d2f9cb107f9cf7e377797e961b0306be1608e72a2ccccf4fb2511a4cab"
Feb 17 16:49:30 crc kubenswrapper[4672]: E0217 16:49:30.323201 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"941854d2f9cb107f9cf7e377797e961b0306be1608e72a2ccccf4fb2511a4cab\": container with ID starting with 941854d2f9cb107f9cf7e377797e961b0306be1608e72a2ccccf4fb2511a4cab not found: ID does not exist" containerID="941854d2f9cb107f9cf7e377797e961b0306be1608e72a2ccccf4fb2511a4cab"
Feb 17 16:49:30 crc kubenswrapper[4672]: I0217 16:49:30.323225 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"941854d2f9cb107f9cf7e377797e961b0306be1608e72a2ccccf4fb2511a4cab"} err="failed to get container status \"941854d2f9cb107f9cf7e377797e961b0306be1608e72a2ccccf4fb2511a4cab\": rpc error: code = NotFound desc = could not find container \"941854d2f9cb107f9cf7e377797e961b0306be1608e72a2ccccf4fb2511a4cab\": container with ID starting with 941854d2f9cb107f9cf7e377797e961b0306be1608e72a2ccccf4fb2511a4cab not found: ID does not exist"
Feb 17 16:49:30 crc kubenswrapper[4672]: I0217 16:49:30.323238 4672 scope.go:117] "RemoveContainer" containerID="669d99a858966adc246dc1b583939cdc6c606e5bd1437cea5faf1e14deeb1211"
Feb 17 16:49:30 crc kubenswrapper[4672]: E0217 16:49:30.323501 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"669d99a858966adc246dc1b583939cdc6c606e5bd1437cea5faf1e14deeb1211\": container with ID starting with 669d99a858966adc246dc1b583939cdc6c606e5bd1437cea5faf1e14deeb1211 not found: ID does not exist" containerID="669d99a858966adc246dc1b583939cdc6c606e5bd1437cea5faf1e14deeb1211"
Feb 17 16:49:30 crc kubenswrapper[4672]: I0217 16:49:30.323542 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"669d99a858966adc246dc1b583939cdc6c606e5bd1437cea5faf1e14deeb1211"} err="failed to get container status \"669d99a858966adc246dc1b583939cdc6c606e5bd1437cea5faf1e14deeb1211\": rpc error: code = NotFound desc = could not find container \"669d99a858966adc246dc1b583939cdc6c606e5bd1437cea5faf1e14deeb1211\": container with ID starting with 669d99a858966adc246dc1b583939cdc6c606e5bd1437cea5faf1e14deeb1211 not found: ID does not exist"
Feb 17 16:49:31 crc kubenswrapper[4672]: I0217 16:49:31.959349 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0476848c-e542-4f12-9fdf-4b5ef149886f" path="/var/lib/kubelet/pods/0476848c-e542-4f12-9fdf-4b5ef149886f/volumes"
Feb 17 16:49:36 crc kubenswrapper[4672]: I0217 16:49:36.029914 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cqjtt"
Feb 17 16:49:36 crc kubenswrapper[4672]: E0217 16:49:36.947876 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:49:37 crc kubenswrapper[4672]: I0217 16:49:37.364802 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cqjtt"]
Feb 17 16:49:37 crc kubenswrapper[4672]: I0217 16:49:37.365496 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cqjtt" podUID="f06d2f2e-8a88-49ca-9345-815de090431d" containerName="registry-server" containerID="cri-o://0798e81e45bd21050c5a6e0e06f7092135e19564944a766aa4579bff50583c2d" gracePeriod=2
Feb 17 16:49:38 crc kubenswrapper[4672]: I0217 16:49:38.234991 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cqjtt"
Feb 17 16:49:38 crc kubenswrapper[4672]: I0217 16:49:38.305305 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f06d2f2e-8a88-49ca-9345-815de090431d-catalog-content\") pod \"f06d2f2e-8a88-49ca-9345-815de090431d\" (UID: \"f06d2f2e-8a88-49ca-9345-815de090431d\") "
Feb 17 16:49:38 crc kubenswrapper[4672]: I0217 16:49:38.305472 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f06d2f2e-8a88-49ca-9345-815de090431d-utilities\") pod \"f06d2f2e-8a88-49ca-9345-815de090431d\" (UID: \"f06d2f2e-8a88-49ca-9345-815de090431d\") "
Feb 17 16:49:38 crc kubenswrapper[4672]: I0217 16:49:38.305606 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gwb6\" (UniqueName: \"kubernetes.io/projected/f06d2f2e-8a88-49ca-9345-815de090431d-kube-api-access-8gwb6\") pod \"f06d2f2e-8a88-49ca-9345-815de090431d\" (UID: \"f06d2f2e-8a88-49ca-9345-815de090431d\") "
Feb 17 16:49:38 crc kubenswrapper[4672]: I0217 16:49:38.309610 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f06d2f2e-8a88-49ca-9345-815de090431d-utilities" (OuterVolumeSpecName: "utilities") pod "f06d2f2e-8a88-49ca-9345-815de090431d" (UID: "f06d2f2e-8a88-49ca-9345-815de090431d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:49:38 crc kubenswrapper[4672]: I0217 16:49:38.329227 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f06d2f2e-8a88-49ca-9345-815de090431d-kube-api-access-8gwb6" (OuterVolumeSpecName: "kube-api-access-8gwb6") pod "f06d2f2e-8a88-49ca-9345-815de090431d" (UID: "f06d2f2e-8a88-49ca-9345-815de090431d"). InnerVolumeSpecName "kube-api-access-8gwb6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:49:38 crc kubenswrapper[4672]: I0217 16:49:38.343234 4672 generic.go:334] "Generic (PLEG): container finished" podID="f06d2f2e-8a88-49ca-9345-815de090431d" containerID="0798e81e45bd21050c5a6e0e06f7092135e19564944a766aa4579bff50583c2d" exitCode=0
Feb 17 16:49:38 crc kubenswrapper[4672]: I0217 16:49:38.343345 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cqjtt"
Feb 17 16:49:38 crc kubenswrapper[4672]: I0217 16:49:38.343367 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cqjtt" event={"ID":"f06d2f2e-8a88-49ca-9345-815de090431d","Type":"ContainerDied","Data":"0798e81e45bd21050c5a6e0e06f7092135e19564944a766aa4579bff50583c2d"}
Feb 17 16:49:38 crc kubenswrapper[4672]: I0217 16:49:38.343809 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cqjtt" event={"ID":"f06d2f2e-8a88-49ca-9345-815de090431d","Type":"ContainerDied","Data":"d7ec346edf6805e3b56a848562316388229e16018fcaee39af4298c02e713db4"}
Feb 17 16:49:38 crc kubenswrapper[4672]: I0217 16:49:38.343835 4672 scope.go:117] "RemoveContainer" containerID="0798e81e45bd21050c5a6e0e06f7092135e19564944a766aa4579bff50583c2d"
Feb 17 16:49:38 crc kubenswrapper[4672]: I0217 16:49:38.366174 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f06d2f2e-8a88-49ca-9345-815de090431d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f06d2f2e-8a88-49ca-9345-815de090431d" (UID: "f06d2f2e-8a88-49ca-9345-815de090431d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:49:38 crc kubenswrapper[4672]: I0217 16:49:38.375457 4672 scope.go:117] "RemoveContainer" containerID="251ef7966da033880b0b24a69ca11288dbd740eb4822d366a944df7a63b655a7"
Feb 17 16:49:38 crc kubenswrapper[4672]: I0217 16:49:38.393650 4672 scope.go:117] "RemoveContainer" containerID="58ac56515eb971fa5a54674cace2a561ca3f12f893a7ad1770d1790d20963fa9"
Feb 17 16:49:38 crc kubenswrapper[4672]: I0217 16:49:38.408958 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gwb6\" (UniqueName: \"kubernetes.io/projected/f06d2f2e-8a88-49ca-9345-815de090431d-kube-api-access-8gwb6\") on node \"crc\" DevicePath \"\""
Feb 17 16:49:38 crc kubenswrapper[4672]: I0217 16:49:38.408986 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f06d2f2e-8a88-49ca-9345-815de090431d-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 16:49:38 crc kubenswrapper[4672]: I0217 16:49:38.409009 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f06d2f2e-8a88-49ca-9345-815de090431d-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 16:49:38 crc kubenswrapper[4672]: I0217 16:49:38.440319 4672 scope.go:117] "RemoveContainer" containerID="0798e81e45bd21050c5a6e0e06f7092135e19564944a766aa4579bff50583c2d"
Feb 17 16:49:38 crc kubenswrapper[4672]: E0217 16:49:38.440887 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0798e81e45bd21050c5a6e0e06f7092135e19564944a766aa4579bff50583c2d\": container with ID starting with 0798e81e45bd21050c5a6e0e06f7092135e19564944a766aa4579bff50583c2d not found: ID does not exist" containerID="0798e81e45bd21050c5a6e0e06f7092135e19564944a766aa4579bff50583c2d"
Feb 17 16:49:38 crc kubenswrapper[4672]: I0217 16:49:38.440950 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0798e81e45bd21050c5a6e0e06f7092135e19564944a766aa4579bff50583c2d"} err="failed to get container status \"0798e81e45bd21050c5a6e0e06f7092135e19564944a766aa4579bff50583c2d\": rpc error: code = NotFound desc = could not find container \"0798e81e45bd21050c5a6e0e06f7092135e19564944a766aa4579bff50583c2d\": container with ID starting with 0798e81e45bd21050c5a6e0e06f7092135e19564944a766aa4579bff50583c2d not found: ID does not exist"
Feb 17 16:49:38 crc kubenswrapper[4672]: I0217 16:49:38.440988 4672 scope.go:117] "RemoveContainer" containerID="251ef7966da033880b0b24a69ca11288dbd740eb4822d366a944df7a63b655a7"
Feb 17 16:49:38 crc kubenswrapper[4672]: E0217 16:49:38.441476 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"251ef7966da033880b0b24a69ca11288dbd740eb4822d366a944df7a63b655a7\": container with ID starting with 251ef7966da033880b0b24a69ca11288dbd740eb4822d366a944df7a63b655a7 not found: ID does not exist" containerID="251ef7966da033880b0b24a69ca11288dbd740eb4822d366a944df7a63b655a7"
Feb 17 16:49:38 crc kubenswrapper[4672]: I0217 16:49:38.441531 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"251ef7966da033880b0b24a69ca11288dbd740eb4822d366a944df7a63b655a7"} err="failed to get container status \"251ef7966da033880b0b24a69ca11288dbd740eb4822d366a944df7a63b655a7\": rpc error: code = NotFound desc = could not find container \"251ef7966da033880b0b24a69ca11288dbd740eb4822d366a944df7a63b655a7\": container with ID starting with 251ef7966da033880b0b24a69ca11288dbd740eb4822d366a944df7a63b655a7 not found: ID does not exist"
Feb 17 16:49:38 crc kubenswrapper[4672]: I0217 16:49:38.441555 4672 scope.go:117] "RemoveContainer" containerID="58ac56515eb971fa5a54674cace2a561ca3f12f893a7ad1770d1790d20963fa9"
Feb 17 16:49:38 crc kubenswrapper[4672]: E0217 16:49:38.441890 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58ac56515eb971fa5a54674cace2a561ca3f12f893a7ad1770d1790d20963fa9\": container with ID starting with 58ac56515eb971fa5a54674cace2a561ca3f12f893a7ad1770d1790d20963fa9 not found: ID does not exist" containerID="58ac56515eb971fa5a54674cace2a561ca3f12f893a7ad1770d1790d20963fa9"
Feb 17 16:49:38 crc kubenswrapper[4672]: I0217 16:49:38.441913 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58ac56515eb971fa5a54674cace2a561ca3f12f893a7ad1770d1790d20963fa9"} err="failed to get container status \"58ac56515eb971fa5a54674cace2a561ca3f12f893a7ad1770d1790d20963fa9\": rpc error: code = NotFound desc = could not find container \"58ac56515eb971fa5a54674cace2a561ca3f12f893a7ad1770d1790d20963fa9\": container with ID starting with 58ac56515eb971fa5a54674cace2a561ca3f12f893a7ad1770d1790d20963fa9 not found: ID does not exist"
Feb 17 16:49:38 crc kubenswrapper[4672]: I0217 16:49:38.706121 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cqjtt"]
Feb 17 16:49:38 crc kubenswrapper[4672]: I0217 16:49:38.714368 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cqjtt"]
Feb 17 16:49:38 crc kubenswrapper[4672]: E0217 16:49:38.945931 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:49:39 crc kubenswrapper[4672]: I0217 16:49:39.962000 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f06d2f2e-8a88-49ca-9345-815de090431d" path="/var/lib/kubelet/pods/f06d2f2e-8a88-49ca-9345-815de090431d/volumes"
Feb 17 16:49:51 crc kubenswrapper[4672]: E0217 16:49:51.956422 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:49:53 crc kubenswrapper[4672]: E0217 16:49:53.947621 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:50:03 crc kubenswrapper[4672]: E0217 16:50:03.946883 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:50:06 crc kubenswrapper[4672]: E0217 16:50:06.947171 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:50:18 crc kubenswrapper[4672]: E0217 16:50:18.946728 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:50:21 crc kubenswrapper[4672]: E0217 16:50:21.955168 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:50:33 crc kubenswrapper[4672]: E0217 16:50:33.946895 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:50:34 crc kubenswrapper[4672]: E0217 16:50:34.947157 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:50:45 crc kubenswrapper[4672]: E0217 16:50:45.948385 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:50:47 crc kubenswrapper[4672]: E0217 16:50:47.946954 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:50:57 crc kubenswrapper[4672]: E0217 16:50:57.949095 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:50:58 crc kubenswrapper[4672]: E0217 16:50:58.946485 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:51:09 crc kubenswrapper[4672]: E0217 16:51:09.948108 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:51:09 crc kubenswrapper[4672]: E0217 16:51:09.948851 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:51:23 crc kubenswrapper[4672]: E0217 16:51:23.949748 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:51:23 crc kubenswrapper[4672]: E0217 16:51:23.949794 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:51:27 crc kubenswrapper[4672]: I0217 16:51:27.566621 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 16:51:27 crc kubenswrapper[4672]: I0217 16:51:27.567271 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 16:51:37 crc kubenswrapper[4672]: E0217 16:51:37.949978 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8"
podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:51:38 crc kubenswrapper[4672]: E0217 16:51:38.947613 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:51:51 crc kubenswrapper[4672]: E0217 16:51:51.953367 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:51:51 crc kubenswrapper[4672]: E0217 16:51:51.953611 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:51:57 crc kubenswrapper[4672]: I0217 16:51:57.565645 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:51:57 crc kubenswrapper[4672]: I0217 16:51:57.566187 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Feb 17 16:52:03 crc kubenswrapper[4672]: E0217 16:52:03.947813 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:52:04 crc kubenswrapper[4672]: E0217 16:52:04.945946 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:52:16 crc kubenswrapper[4672]: E0217 16:52:16.947597 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:52:17 crc kubenswrapper[4672]: E0217 16:52:17.948654 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:52:27 crc kubenswrapper[4672]: I0217 16:52:27.566470 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Feb 17 16:52:27 crc kubenswrapper[4672]: I0217 16:52:27.567110 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:52:27 crc kubenswrapper[4672]: I0217 16:52:27.567160 4672 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" Feb 17 16:52:27 crc kubenswrapper[4672]: I0217 16:52:27.568310 4672 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9dea5ec410e293f594728e3d38216f730173d601a89a768840e0fb078db09fcc"} pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:52:27 crc kubenswrapper[4672]: I0217 16:52:27.568406 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" containerID="cri-o://9dea5ec410e293f594728e3d38216f730173d601a89a768840e0fb078db09fcc" gracePeriod=600 Feb 17 16:52:27 crc kubenswrapper[4672]: E0217 16:52:27.733860 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:52:28 crc kubenswrapper[4672]: I0217 
16:52:28.085519 4672 generic.go:334] "Generic (PLEG): container finished" podID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerID="9dea5ec410e293f594728e3d38216f730173d601a89a768840e0fb078db09fcc" exitCode=0 Feb 17 16:52:28 crc kubenswrapper[4672]: I0217 16:52:28.085554 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerDied","Data":"9dea5ec410e293f594728e3d38216f730173d601a89a768840e0fb078db09fcc"} Feb 17 16:52:28 crc kubenswrapper[4672]: I0217 16:52:28.085621 4672 scope.go:117] "RemoveContainer" containerID="e1b57867ae3b2d0f7ae69d5114a296a48281c1419c2e4d2752760b9d915f000f" Feb 17 16:52:28 crc kubenswrapper[4672]: I0217 16:52:28.086324 4672 scope.go:117] "RemoveContainer" containerID="9dea5ec410e293f594728e3d38216f730173d601a89a768840e0fb078db09fcc" Feb 17 16:52:28 crc kubenswrapper[4672]: E0217 16:52:28.086721 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:52:29 crc kubenswrapper[4672]: E0217 16:52:29.946710 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:52:29 crc kubenswrapper[4672]: E0217 16:52:29.947436 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:52:40 crc kubenswrapper[4672]: I0217 16:52:40.945644 4672 scope.go:117] "RemoveContainer" containerID="9dea5ec410e293f594728e3d38216f730173d601a89a768840e0fb078db09fcc" Feb 17 16:52:40 crc kubenswrapper[4672]: E0217 16:52:40.946503 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:52:42 crc kubenswrapper[4672]: E0217 16:52:42.947582 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:52:42 crc kubenswrapper[4672]: E0217 16:52:42.947766 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:52:46 crc kubenswrapper[4672]: I0217 16:52:46.270776 4672 generic.go:334] "Generic (PLEG): container finished" podID="fcaca0dc-4760-43af-bc46-efcdc09d7164" containerID="13724425a8d31b6ca6bcc56bfdd4d2c738395ff8dcddeb7f177c9dd0af2e5c98" exitCode=2 Feb 17 16:52:46 crc 
kubenswrapper[4672]: I0217 16:52:46.270850 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zsr64" event={"ID":"fcaca0dc-4760-43af-bc46-efcdc09d7164","Type":"ContainerDied","Data":"13724425a8d31b6ca6bcc56bfdd4d2c738395ff8dcddeb7f177c9dd0af2e5c98"} Feb 17 16:52:47 crc kubenswrapper[4672]: I0217 16:52:47.811820 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zsr64" Feb 17 16:52:47 crc kubenswrapper[4672]: I0217 16:52:47.975866 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qktx6\" (UniqueName: \"kubernetes.io/projected/fcaca0dc-4760-43af-bc46-efcdc09d7164-kube-api-access-qktx6\") pod \"fcaca0dc-4760-43af-bc46-efcdc09d7164\" (UID: \"fcaca0dc-4760-43af-bc46-efcdc09d7164\") " Feb 17 16:52:47 crc kubenswrapper[4672]: I0217 16:52:47.976015 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcaca0dc-4760-43af-bc46-efcdc09d7164-inventory\") pod \"fcaca0dc-4760-43af-bc46-efcdc09d7164\" (UID: \"fcaca0dc-4760-43af-bc46-efcdc09d7164\") " Feb 17 16:52:47 crc kubenswrapper[4672]: I0217 16:52:47.976241 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fcaca0dc-4760-43af-bc46-efcdc09d7164-ssh-key-openstack-edpm-ipam\") pod \"fcaca0dc-4760-43af-bc46-efcdc09d7164\" (UID: \"fcaca0dc-4760-43af-bc46-efcdc09d7164\") " Feb 17 16:52:47 crc kubenswrapper[4672]: I0217 16:52:47.981240 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcaca0dc-4760-43af-bc46-efcdc09d7164-kube-api-access-qktx6" (OuterVolumeSpecName: "kube-api-access-qktx6") pod "fcaca0dc-4760-43af-bc46-efcdc09d7164" (UID: "fcaca0dc-4760-43af-bc46-efcdc09d7164"). 
InnerVolumeSpecName "kube-api-access-qktx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:52:48 crc kubenswrapper[4672]: I0217 16:52:48.004691 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcaca0dc-4760-43af-bc46-efcdc09d7164-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fcaca0dc-4760-43af-bc46-efcdc09d7164" (UID: "fcaca0dc-4760-43af-bc46-efcdc09d7164"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:52:48 crc kubenswrapper[4672]: I0217 16:52:48.024873 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcaca0dc-4760-43af-bc46-efcdc09d7164-inventory" (OuterVolumeSpecName: "inventory") pod "fcaca0dc-4760-43af-bc46-efcdc09d7164" (UID: "fcaca0dc-4760-43af-bc46-efcdc09d7164"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:52:48 crc kubenswrapper[4672]: I0217 16:52:48.078754 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qktx6\" (UniqueName: \"kubernetes.io/projected/fcaca0dc-4760-43af-bc46-efcdc09d7164-kube-api-access-qktx6\") on node \"crc\" DevicePath \"\"" Feb 17 16:52:48 crc kubenswrapper[4672]: I0217 16:52:48.078790 4672 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcaca0dc-4760-43af-bc46-efcdc09d7164-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 16:52:48 crc kubenswrapper[4672]: I0217 16:52:48.078803 4672 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fcaca0dc-4760-43af-bc46-efcdc09d7164-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 16:52:48 crc kubenswrapper[4672]: I0217 16:52:48.291777 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zsr64" event={"ID":"fcaca0dc-4760-43af-bc46-efcdc09d7164","Type":"ContainerDied","Data":"d79c5b08871036304f0bdafd2118c436d50c346deed88239e48b1b94414e97f6"} Feb 17 16:52:48 crc kubenswrapper[4672]: I0217 16:52:48.291826 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d79c5b08871036304f0bdafd2118c436d50c346deed88239e48b1b94414e97f6" Feb 17 16:52:48 crc kubenswrapper[4672]: I0217 16:52:48.291856 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zsr64" Feb 17 16:52:52 crc kubenswrapper[4672]: I0217 16:52:52.945599 4672 scope.go:117] "RemoveContainer" containerID="9dea5ec410e293f594728e3d38216f730173d601a89a768840e0fb078db09fcc" Feb 17 16:52:52 crc kubenswrapper[4672]: E0217 16:52:52.946383 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:52:55 crc kubenswrapper[4672]: E0217 16:52:55.947467 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:52:55 crc kubenswrapper[4672]: E0217 16:52:55.947467 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:53:07 crc kubenswrapper[4672]: E0217 16:53:07.192640 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:53:07 crc kubenswrapper[4672]: I0217 16:53:07.946182 4672 scope.go:117] "RemoveContainer" containerID="9dea5ec410e293f594728e3d38216f730173d601a89a768840e0fb078db09fcc" Feb 17 16:53:07 crc kubenswrapper[4672]: E0217 16:53:07.946573 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:53:08 crc kubenswrapper[4672]: E0217 16:53:08.947059 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:53:19 crc kubenswrapper[4672]: I0217 16:53:19.945342 4672 scope.go:117] "RemoveContainer" containerID="9dea5ec410e293f594728e3d38216f730173d601a89a768840e0fb078db09fcc" Feb 17 16:53:19 crc kubenswrapper[4672]: E0217 16:53:19.946475 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:53:21 crc kubenswrapper[4672]: E0217 16:53:21.961285 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:53:22 crc kubenswrapper[4672]: I0217 16:53:22.946884 4672 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:53:23 crc kubenswrapper[4672]: E0217 16:53:23.098261 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:53:23 crc kubenswrapper[4672]: E0217 16:53:23.098313 4672 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:53:23 crc kubenswrapper[4672]: E0217 16:53:23.098426 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nq9ps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:n
il,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-qrhj8_openstack(dc5471f5-2491-4841-be45-09c8f14b35c0): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:53:23 crc kubenswrapper[4672]: E0217 16:53:23.099584 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:53:25 crc kubenswrapper[4672]: I0217 16:53:25.039920 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8gpjs"] Feb 17 16:53:25 crc kubenswrapper[4672]: E0217 16:53:25.041014 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcaca0dc-4760-43af-bc46-efcdc09d7164" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:53:25 crc kubenswrapper[4672]: I0217 16:53:25.041039 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcaca0dc-4760-43af-bc46-efcdc09d7164" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:53:25 crc kubenswrapper[4672]: E0217 16:53:25.041075 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b1ab52d-1671-4118-b99c-a87fa9859f91" containerName="extract-utilities" Feb 17 16:53:25 crc kubenswrapper[4672]: I0217 16:53:25.041087 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b1ab52d-1671-4118-b99c-a87fa9859f91" containerName="extract-utilities" Feb 17 16:53:25 crc kubenswrapper[4672]: E0217 16:53:25.041106 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b1ab52d-1671-4118-b99c-a87fa9859f91" containerName="registry-server" Feb 17 16:53:25 crc kubenswrapper[4672]: I0217 16:53:25.041118 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b1ab52d-1671-4118-b99c-a87fa9859f91" containerName="registry-server" Feb 17 16:53:25 crc kubenswrapper[4672]: E0217 16:53:25.041140 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0476848c-e542-4f12-9fdf-4b5ef149886f" containerName="extract-utilities" Feb 17 16:53:25 crc kubenswrapper[4672]: I0217 16:53:25.041152 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="0476848c-e542-4f12-9fdf-4b5ef149886f" containerName="extract-utilities" Feb 17 16:53:25 crc 
kubenswrapper[4672]: E0217 16:53:25.041172 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0476848c-e542-4f12-9fdf-4b5ef149886f" containerName="extract-content" Feb 17 16:53:25 crc kubenswrapper[4672]: I0217 16:53:25.041184 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="0476848c-e542-4f12-9fdf-4b5ef149886f" containerName="extract-content" Feb 17 16:53:25 crc kubenswrapper[4672]: E0217 16:53:25.041210 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f06d2f2e-8a88-49ca-9345-815de090431d" containerName="extract-content" Feb 17 16:53:25 crc kubenswrapper[4672]: I0217 16:53:25.041221 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="f06d2f2e-8a88-49ca-9345-815de090431d" containerName="extract-content" Feb 17 16:53:25 crc kubenswrapper[4672]: E0217 16:53:25.041238 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0476848c-e542-4f12-9fdf-4b5ef149886f" containerName="registry-server" Feb 17 16:53:25 crc kubenswrapper[4672]: I0217 16:53:25.041248 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="0476848c-e542-4f12-9fdf-4b5ef149886f" containerName="registry-server" Feb 17 16:53:25 crc kubenswrapper[4672]: E0217 16:53:25.041265 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b1ab52d-1671-4118-b99c-a87fa9859f91" containerName="extract-content" Feb 17 16:53:25 crc kubenswrapper[4672]: I0217 16:53:25.041277 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b1ab52d-1671-4118-b99c-a87fa9859f91" containerName="extract-content" Feb 17 16:53:25 crc kubenswrapper[4672]: E0217 16:53:25.041292 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f06d2f2e-8a88-49ca-9345-815de090431d" containerName="extract-utilities" Feb 17 16:53:25 crc kubenswrapper[4672]: I0217 16:53:25.041303 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="f06d2f2e-8a88-49ca-9345-815de090431d" containerName="extract-utilities" Feb 17 16:53:25 crc 
kubenswrapper[4672]: E0217 16:53:25.041341 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f06d2f2e-8a88-49ca-9345-815de090431d" containerName="registry-server" Feb 17 16:53:25 crc kubenswrapper[4672]: I0217 16:53:25.041365 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="f06d2f2e-8a88-49ca-9345-815de090431d" containerName="registry-server" Feb 17 16:53:25 crc kubenswrapper[4672]: I0217 16:53:25.041705 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="f06d2f2e-8a88-49ca-9345-815de090431d" containerName="registry-server" Feb 17 16:53:25 crc kubenswrapper[4672]: I0217 16:53:25.041750 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b1ab52d-1671-4118-b99c-a87fa9859f91" containerName="registry-server" Feb 17 16:53:25 crc kubenswrapper[4672]: I0217 16:53:25.041772 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcaca0dc-4760-43af-bc46-efcdc09d7164" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:53:25 crc kubenswrapper[4672]: I0217 16:53:25.041787 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="0476848c-e542-4f12-9fdf-4b5ef149886f" containerName="registry-server" Feb 17 16:53:25 crc kubenswrapper[4672]: I0217 16:53:25.043032 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8gpjs" Feb 17 16:53:25 crc kubenswrapper[4672]: I0217 16:53:25.046193 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 16:53:25 crc kubenswrapper[4672]: I0217 16:53:25.046388 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 16:53:25 crc kubenswrapper[4672]: I0217 16:53:25.046812 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 16:53:25 crc kubenswrapper[4672]: I0217 16:53:25.047317 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z6sng" Feb 17 16:53:25 crc kubenswrapper[4672]: I0217 16:53:25.054320 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8gpjs"] Feb 17 16:53:25 crc kubenswrapper[4672]: I0217 16:53:25.195239 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58598c29-6a4f-43a2-87b4-3247b3144660-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8gpjs\" (UID: \"58598c29-6a4f-43a2-87b4-3247b3144660\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8gpjs" Feb 17 16:53:25 crc kubenswrapper[4672]: I0217 16:53:25.195382 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tkx7\" (UniqueName: \"kubernetes.io/projected/58598c29-6a4f-43a2-87b4-3247b3144660-kube-api-access-9tkx7\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8gpjs\" (UID: \"58598c29-6a4f-43a2-87b4-3247b3144660\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8gpjs" Feb 17 16:53:25 crc kubenswrapper[4672]: I0217 
16:53:25.195435 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/58598c29-6a4f-43a2-87b4-3247b3144660-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8gpjs\" (UID: \"58598c29-6a4f-43a2-87b4-3247b3144660\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8gpjs" Feb 17 16:53:25 crc kubenswrapper[4672]: I0217 16:53:25.297321 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tkx7\" (UniqueName: \"kubernetes.io/projected/58598c29-6a4f-43a2-87b4-3247b3144660-kube-api-access-9tkx7\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8gpjs\" (UID: \"58598c29-6a4f-43a2-87b4-3247b3144660\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8gpjs" Feb 17 16:53:25 crc kubenswrapper[4672]: I0217 16:53:25.297449 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/58598c29-6a4f-43a2-87b4-3247b3144660-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8gpjs\" (UID: \"58598c29-6a4f-43a2-87b4-3247b3144660\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8gpjs" Feb 17 16:53:25 crc kubenswrapper[4672]: I0217 16:53:25.297710 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58598c29-6a4f-43a2-87b4-3247b3144660-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8gpjs\" (UID: \"58598c29-6a4f-43a2-87b4-3247b3144660\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8gpjs" Feb 17 16:53:25 crc kubenswrapper[4672]: I0217 16:53:25.304362 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/58598c29-6a4f-43a2-87b4-3247b3144660-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8gpjs\" (UID: \"58598c29-6a4f-43a2-87b4-3247b3144660\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8gpjs" Feb 17 16:53:25 crc kubenswrapper[4672]: I0217 16:53:25.306017 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/58598c29-6a4f-43a2-87b4-3247b3144660-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8gpjs\" (UID: \"58598c29-6a4f-43a2-87b4-3247b3144660\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8gpjs" Feb 17 16:53:25 crc kubenswrapper[4672]: I0217 16:53:25.313977 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tkx7\" (UniqueName: \"kubernetes.io/projected/58598c29-6a4f-43a2-87b4-3247b3144660-kube-api-access-9tkx7\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8gpjs\" (UID: \"58598c29-6a4f-43a2-87b4-3247b3144660\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8gpjs" Feb 17 16:53:25 crc kubenswrapper[4672]: I0217 16:53:25.381077 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8gpjs" Feb 17 16:53:25 crc kubenswrapper[4672]: I0217 16:53:25.987301 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8gpjs"] Feb 17 16:53:25 crc kubenswrapper[4672]: W0217 16:53:25.991200 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58598c29_6a4f_43a2_87b4_3247b3144660.slice/crio-27972ccafda1b8482813a63a984eb3113b87f35cce08dfd73c2be95a6ae50203 WatchSource:0}: Error finding container 27972ccafda1b8482813a63a984eb3113b87f35cce08dfd73c2be95a6ae50203: Status 404 returned error can't find the container with id 27972ccafda1b8482813a63a984eb3113b87f35cce08dfd73c2be95a6ae50203 Feb 17 16:53:26 crc kubenswrapper[4672]: I0217 16:53:26.665245 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8gpjs" event={"ID":"58598c29-6a4f-43a2-87b4-3247b3144660","Type":"ContainerStarted","Data":"27972ccafda1b8482813a63a984eb3113b87f35cce08dfd73c2be95a6ae50203"} Feb 17 16:53:27 crc kubenswrapper[4672]: I0217 16:53:27.676462 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8gpjs" event={"ID":"58598c29-6a4f-43a2-87b4-3247b3144660","Type":"ContainerStarted","Data":"a8bbea965613bc64d840dd07b7b860bab526717decc7fbe312ce11a359f4328c"} Feb 17 16:53:32 crc kubenswrapper[4672]: I0217 16:53:32.945952 4672 scope.go:117] "RemoveContainer" containerID="9dea5ec410e293f594728e3d38216f730173d601a89a768840e0fb078db09fcc" Feb 17 16:53:32 crc kubenswrapper[4672]: E0217 16:53:32.946928 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:53:34 crc kubenswrapper[4672]: E0217 16:53:34.946278 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:53:34 crc kubenswrapper[4672]: I0217 16:53:34.962909 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8gpjs" podStartSLOduration=9.408593064 podStartE2EDuration="9.962891383s" podCreationTimestamp="2026-02-17 16:53:25 +0000 UTC" firstStartedPulling="2026-02-17 16:53:25.993582713 +0000 UTC m=+3014.747671455" lastFinishedPulling="2026-02-17 16:53:26.547881022 +0000 UTC m=+3015.301969774" observedRunningTime="2026-02-17 16:53:27.707473242 +0000 UTC m=+3016.461561974" watchObservedRunningTime="2026-02-17 16:53:34.962891383 +0000 UTC m=+3023.716980125" Feb 17 16:53:36 crc kubenswrapper[4672]: E0217 16:53:36.085323 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:53:36 crc kubenswrapper[4672]: E0217 16:53:36.085393 4672 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:53:36 crc kubenswrapper[4672]: E0217 16:53:36.085543 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66h7h644h64ch5f8h565hfch5dh56chfdh8hfdh5b5h567h6dh665h557h74h665hcbh96h659h554h589h57fh5d9h55h564hcfh5dhffhfdq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-b
undle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tx4bs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(9e58ce9b-ddd5-42bb-8e07-08a22c8871a5): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 17 16:53:36 crc kubenswrapper[4672]: E0217 16:53:36.086739 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:53:46 crc kubenswrapper[4672]: I0217 16:53:46.945469 4672 scope.go:117] "RemoveContainer" containerID="9dea5ec410e293f594728e3d38216f730173d601a89a768840e0fb078db09fcc" Feb 17 16:53:46 crc kubenswrapper[4672]: E0217 16:53:46.946537 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:53:46 crc kubenswrapper[4672]: E0217 16:53:46.947765 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:53:47 crc kubenswrapper[4672]: E0217 16:53:47.945957 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:53:58 crc kubenswrapper[4672]: E0217 16:53:58.946914 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:53:58 crc kubenswrapper[4672]: E0217 16:53:58.947456 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:53:59 crc kubenswrapper[4672]: I0217 16:53:59.945015 4672 scope.go:117] "RemoveContainer" containerID="9dea5ec410e293f594728e3d38216f730173d601a89a768840e0fb078db09fcc" Feb 17 16:53:59 crc kubenswrapper[4672]: E0217 16:53:59.945434 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:54:11 crc kubenswrapper[4672]: E0217 16:54:11.954132 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" 
pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:54:12 crc kubenswrapper[4672]: I0217 16:54:12.945331 4672 scope.go:117] "RemoveContainer" containerID="9dea5ec410e293f594728e3d38216f730173d601a89a768840e0fb078db09fcc" Feb 17 16:54:12 crc kubenswrapper[4672]: E0217 16:54:12.945667 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:54:13 crc kubenswrapper[4672]: E0217 16:54:13.946501 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:54:24 crc kubenswrapper[4672]: E0217 16:54:24.949021 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:54:24 crc kubenswrapper[4672]: E0217 16:54:24.949021 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 
16:54:26 crc kubenswrapper[4672]: I0217 16:54:26.946267 4672 scope.go:117] "RemoveContainer" containerID="9dea5ec410e293f594728e3d38216f730173d601a89a768840e0fb078db09fcc" Feb 17 16:54:26 crc kubenswrapper[4672]: E0217 16:54:26.947044 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:54:36 crc kubenswrapper[4672]: E0217 16:54:36.951632 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:54:38 crc kubenswrapper[4672]: I0217 16:54:38.946011 4672 scope.go:117] "RemoveContainer" containerID="9dea5ec410e293f594728e3d38216f730173d601a89a768840e0fb078db09fcc" Feb 17 16:54:38 crc kubenswrapper[4672]: E0217 16:54:38.947379 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:54:38 crc kubenswrapper[4672]: E0217 16:54:38.947602 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:54:49 crc kubenswrapper[4672]: E0217 16:54:49.947090 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:54:50 crc kubenswrapper[4672]: I0217 16:54:50.945956 4672 scope.go:117] "RemoveContainer" containerID="9dea5ec410e293f594728e3d38216f730173d601a89a768840e0fb078db09fcc" Feb 17 16:54:50 crc kubenswrapper[4672]: E0217 16:54:50.946351 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:54:52 crc kubenswrapper[4672]: E0217 16:54:52.947951 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:55:01 crc kubenswrapper[4672]: I0217 16:55:01.977159 4672 scope.go:117] "RemoveContainer" containerID="9dea5ec410e293f594728e3d38216f730173d601a89a768840e0fb078db09fcc" Feb 17 16:55:01 crc kubenswrapper[4672]: E0217 16:55:01.996994 4672 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:55:02 crc kubenswrapper[4672]: E0217 16:55:02.005326 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:55:04 crc kubenswrapper[4672]: E0217 16:55:04.948303 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:55:12 crc kubenswrapper[4672]: E0217 16:55:12.950117 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:55:15 crc kubenswrapper[4672]: I0217 16:55:15.945166 4672 scope.go:117] "RemoveContainer" containerID="9dea5ec410e293f594728e3d38216f730173d601a89a768840e0fb078db09fcc" Feb 17 16:55:15 crc kubenswrapper[4672]: E0217 16:55:15.945791 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:55:15 crc kubenswrapper[4672]: E0217 16:55:15.949393 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:55:23 crc kubenswrapper[4672]: E0217 16:55:23.947181 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:55:26 crc kubenswrapper[4672]: E0217 16:55:26.948619 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:55:30 crc kubenswrapper[4672]: I0217 16:55:30.944854 4672 scope.go:117] "RemoveContainer" containerID="9dea5ec410e293f594728e3d38216f730173d601a89a768840e0fb078db09fcc" Feb 17 16:55:30 crc kubenswrapper[4672]: E0217 16:55:30.945850 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:55:35 crc kubenswrapper[4672]: E0217 16:55:35.947763 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:55:40 crc kubenswrapper[4672]: E0217 16:55:40.946151 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:55:44 crc kubenswrapper[4672]: I0217 16:55:44.945250 4672 scope.go:117] "RemoveContainer" containerID="9dea5ec410e293f594728e3d38216f730173d601a89a768840e0fb078db09fcc" Feb 17 16:55:44 crc kubenswrapper[4672]: E0217 16:55:44.946237 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:55:49 crc kubenswrapper[4672]: E0217 16:55:49.953287 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:55:52 crc kubenswrapper[4672]: E0217 16:55:52.956181 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:55:59 crc kubenswrapper[4672]: I0217 16:55:59.945062 4672 scope.go:117] "RemoveContainer" containerID="9dea5ec410e293f594728e3d38216f730173d601a89a768840e0fb078db09fcc" Feb 17 16:55:59 crc kubenswrapper[4672]: E0217 16:55:59.946086 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:56:02 crc kubenswrapper[4672]: E0217 16:56:02.946762 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:56:07 crc kubenswrapper[4672]: E0217 16:56:07.947379 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" 
pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:56:13 crc kubenswrapper[4672]: I0217 16:56:13.945488 4672 scope.go:117] "RemoveContainer" containerID="9dea5ec410e293f594728e3d38216f730173d601a89a768840e0fb078db09fcc" Feb 17 16:56:13 crc kubenswrapper[4672]: E0217 16:56:13.946744 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:56:16 crc kubenswrapper[4672]: E0217 16:56:16.947331 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:56:19 crc kubenswrapper[4672]: E0217 16:56:19.947776 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:56:27 crc kubenswrapper[4672]: I0217 16:56:27.945222 4672 scope.go:117] "RemoveContainer" containerID="9dea5ec410e293f594728e3d38216f730173d601a89a768840e0fb078db09fcc" Feb 17 16:56:27 crc kubenswrapper[4672]: E0217 16:56:27.947080 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:56:28 crc kubenswrapper[4672]: E0217 16:56:28.948142 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:56:33 crc kubenswrapper[4672]: E0217 16:56:33.947164 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:56:39 crc kubenswrapper[4672]: I0217 16:56:39.945077 4672 scope.go:117] "RemoveContainer" containerID="9dea5ec410e293f594728e3d38216f730173d601a89a768840e0fb078db09fcc" Feb 17 16:56:39 crc kubenswrapper[4672]: E0217 16:56:39.945828 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:56:40 crc kubenswrapper[4672]: E0217 16:56:40.947817 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:56:45 crc kubenswrapper[4672]: E0217 16:56:45.946971 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:56:51 crc kubenswrapper[4672]: E0217 16:56:51.962543 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:56:52 crc kubenswrapper[4672]: I0217 16:56:52.945754 4672 scope.go:117] "RemoveContainer" containerID="9dea5ec410e293f594728e3d38216f730173d601a89a768840e0fb078db09fcc" Feb 17 16:56:52 crc kubenswrapper[4672]: E0217 16:56:52.946072 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:56:57 crc kubenswrapper[4672]: E0217 16:56:57.948059 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" 
pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:57:02 crc kubenswrapper[4672]: E0217 16:57:02.946833 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:57:06 crc kubenswrapper[4672]: I0217 16:57:06.945229 4672 scope.go:117] "RemoveContainer" containerID="9dea5ec410e293f594728e3d38216f730173d601a89a768840e0fb078db09fcc" Feb 17 16:57:06 crc kubenswrapper[4672]: E0217 16:57:06.946097 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:57:08 crc kubenswrapper[4672]: E0217 16:57:08.946761 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:57:16 crc kubenswrapper[4672]: E0217 16:57:16.952547 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 
16:57:19 crc kubenswrapper[4672]: I0217 16:57:19.945120 4672 scope.go:117] "RemoveContainer" containerID="9dea5ec410e293f594728e3d38216f730173d601a89a768840e0fb078db09fcc" Feb 17 16:57:19 crc kubenswrapper[4672]: E0217 16:57:19.946036 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 16:57:22 crc kubenswrapper[4672]: E0217 16:57:22.947375 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:57:29 crc kubenswrapper[4672]: E0217 16:57:29.949445 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:57:31 crc kubenswrapper[4672]: I0217 16:57:31.953098 4672 scope.go:117] "RemoveContainer" containerID="9dea5ec410e293f594728e3d38216f730173d601a89a768840e0fb078db09fcc" Feb 17 16:57:32 crc kubenswrapper[4672]: I0217 16:57:32.468307 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" 
event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerStarted","Data":"399c1b28a73d295545a85ac9813544c6363f8e54412c109aba83e40a76db0358"} Feb 17 16:57:35 crc kubenswrapper[4672]: E0217 16:57:35.949483 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:57:40 crc kubenswrapper[4672]: E0217 16:57:40.946764 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:57:47 crc kubenswrapper[4672]: E0217 16:57:47.947813 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:57:54 crc kubenswrapper[4672]: E0217 16:57:54.947965 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:58:00 crc kubenswrapper[4672]: E0217 16:58:00.948452 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:58:07 crc kubenswrapper[4672]: E0217 16:58:07.947267 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:58:12 crc kubenswrapper[4672]: E0217 16:58:12.948230 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:58:15 crc kubenswrapper[4672]: I0217 16:58:15.761814 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qpfxv"] Feb 17 16:58:15 crc kubenswrapper[4672]: I0217 16:58:15.765399 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qpfxv" Feb 17 16:58:15 crc kubenswrapper[4672]: I0217 16:58:15.784493 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qpfxv"] Feb 17 16:58:15 crc kubenswrapper[4672]: I0217 16:58:15.937840 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2gt2\" (UniqueName: \"kubernetes.io/projected/df6c79ca-fcc8-41b7-9af9-977995644317-kube-api-access-k2gt2\") pod \"redhat-operators-qpfxv\" (UID: \"df6c79ca-fcc8-41b7-9af9-977995644317\") " pod="openshift-marketplace/redhat-operators-qpfxv" Feb 17 16:58:15 crc kubenswrapper[4672]: I0217 16:58:15.937968 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df6c79ca-fcc8-41b7-9af9-977995644317-catalog-content\") pod \"redhat-operators-qpfxv\" (UID: \"df6c79ca-fcc8-41b7-9af9-977995644317\") " pod="openshift-marketplace/redhat-operators-qpfxv" Feb 17 16:58:15 crc kubenswrapper[4672]: I0217 16:58:15.938085 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df6c79ca-fcc8-41b7-9af9-977995644317-utilities\") pod \"redhat-operators-qpfxv\" (UID: \"df6c79ca-fcc8-41b7-9af9-977995644317\") " pod="openshift-marketplace/redhat-operators-qpfxv" Feb 17 16:58:16 crc kubenswrapper[4672]: I0217 16:58:16.040324 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2gt2\" (UniqueName: \"kubernetes.io/projected/df6c79ca-fcc8-41b7-9af9-977995644317-kube-api-access-k2gt2\") pod \"redhat-operators-qpfxv\" (UID: \"df6c79ca-fcc8-41b7-9af9-977995644317\") " pod="openshift-marketplace/redhat-operators-qpfxv" Feb 17 16:58:16 crc kubenswrapper[4672]: I0217 16:58:16.040436 4672 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df6c79ca-fcc8-41b7-9af9-977995644317-catalog-content\") pod \"redhat-operators-qpfxv\" (UID: \"df6c79ca-fcc8-41b7-9af9-977995644317\") " pod="openshift-marketplace/redhat-operators-qpfxv" Feb 17 16:58:16 crc kubenswrapper[4672]: I0217 16:58:16.040584 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df6c79ca-fcc8-41b7-9af9-977995644317-utilities\") pod \"redhat-operators-qpfxv\" (UID: \"df6c79ca-fcc8-41b7-9af9-977995644317\") " pod="openshift-marketplace/redhat-operators-qpfxv" Feb 17 16:58:16 crc kubenswrapper[4672]: I0217 16:58:16.041706 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df6c79ca-fcc8-41b7-9af9-977995644317-catalog-content\") pod \"redhat-operators-qpfxv\" (UID: \"df6c79ca-fcc8-41b7-9af9-977995644317\") " pod="openshift-marketplace/redhat-operators-qpfxv" Feb 17 16:58:16 crc kubenswrapper[4672]: I0217 16:58:16.041917 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df6c79ca-fcc8-41b7-9af9-977995644317-utilities\") pod \"redhat-operators-qpfxv\" (UID: \"df6c79ca-fcc8-41b7-9af9-977995644317\") " pod="openshift-marketplace/redhat-operators-qpfxv" Feb 17 16:58:16 crc kubenswrapper[4672]: I0217 16:58:16.072602 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2gt2\" (UniqueName: \"kubernetes.io/projected/df6c79ca-fcc8-41b7-9af9-977995644317-kube-api-access-k2gt2\") pod \"redhat-operators-qpfxv\" (UID: \"df6c79ca-fcc8-41b7-9af9-977995644317\") " pod="openshift-marketplace/redhat-operators-qpfxv" Feb 17 16:58:16 crc kubenswrapper[4672]: I0217 16:58:16.092536 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qpfxv" Feb 17 16:58:16 crc kubenswrapper[4672]: I0217 16:58:16.592925 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qpfxv"] Feb 17 16:58:16 crc kubenswrapper[4672]: W0217 16:58:16.600842 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf6c79ca_fcc8_41b7_9af9_977995644317.slice/crio-f7e9fa6feecb3481e433fd0b4697707d03d8f037fe5fee2b0ff80ad9524d8720 WatchSource:0}: Error finding container f7e9fa6feecb3481e433fd0b4697707d03d8f037fe5fee2b0ff80ad9524d8720: Status 404 returned error can't find the container with id f7e9fa6feecb3481e433fd0b4697707d03d8f037fe5fee2b0ff80ad9524d8720 Feb 17 16:58:17 crc kubenswrapper[4672]: I0217 16:58:17.249480 4672 generic.go:334] "Generic (PLEG): container finished" podID="df6c79ca-fcc8-41b7-9af9-977995644317" containerID="15c577e1ede92de54a3785c10a421dc75ec3ec42221afc585edd6e8f6ea3b0b8" exitCode=0 Feb 17 16:58:17 crc kubenswrapper[4672]: I0217 16:58:17.249574 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpfxv" event={"ID":"df6c79ca-fcc8-41b7-9af9-977995644317","Type":"ContainerDied","Data":"15c577e1ede92de54a3785c10a421dc75ec3ec42221afc585edd6e8f6ea3b0b8"} Feb 17 16:58:17 crc kubenswrapper[4672]: I0217 16:58:17.249764 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpfxv" event={"ID":"df6c79ca-fcc8-41b7-9af9-977995644317","Type":"ContainerStarted","Data":"f7e9fa6feecb3481e433fd0b4697707d03d8f037fe5fee2b0ff80ad9524d8720"} Feb 17 16:58:18 crc kubenswrapper[4672]: I0217 16:58:18.274692 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpfxv" 
event={"ID":"df6c79ca-fcc8-41b7-9af9-977995644317","Type":"ContainerStarted","Data":"147bba47d22cd2c21120e67dbcfcfdaccae7189f961a58ccc92f8a11ef9a6ee2"} Feb 17 16:58:18 crc kubenswrapper[4672]: E0217 16:58:18.947043 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:58:22 crc kubenswrapper[4672]: I0217 16:58:22.312244 4672 generic.go:334] "Generic (PLEG): container finished" podID="df6c79ca-fcc8-41b7-9af9-977995644317" containerID="147bba47d22cd2c21120e67dbcfcfdaccae7189f961a58ccc92f8a11ef9a6ee2" exitCode=0 Feb 17 16:58:22 crc kubenswrapper[4672]: I0217 16:58:22.312306 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpfxv" event={"ID":"df6c79ca-fcc8-41b7-9af9-977995644317","Type":"ContainerDied","Data":"147bba47d22cd2c21120e67dbcfcfdaccae7189f961a58ccc92f8a11ef9a6ee2"} Feb 17 16:58:23 crc kubenswrapper[4672]: I0217 16:58:23.322360 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpfxv" event={"ID":"df6c79ca-fcc8-41b7-9af9-977995644317","Type":"ContainerStarted","Data":"e89be5bbdaee046568896d107229a544011c4e0c4fbf3c442e9f9215c6e3da34"} Feb 17 16:58:23 crc kubenswrapper[4672]: I0217 16:58:23.342686 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qpfxv" podStartSLOduration=2.8028691329999997 podStartE2EDuration="8.342667588s" podCreationTimestamp="2026-02-17 16:58:15 +0000 UTC" firstStartedPulling="2026-02-17 16:58:17.251269574 +0000 UTC m=+3306.005358306" lastFinishedPulling="2026-02-17 16:58:22.791068029 +0000 UTC m=+3311.545156761" observedRunningTime="2026-02-17 16:58:23.338485438 +0000 UTC 
m=+3312.092574170" watchObservedRunningTime="2026-02-17 16:58:23.342667588 +0000 UTC m=+3312.096756320" Feb 17 16:58:26 crc kubenswrapper[4672]: I0217 16:58:26.092622 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qpfxv" Feb 17 16:58:26 crc kubenswrapper[4672]: I0217 16:58:26.094158 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qpfxv" Feb 17 16:58:27 crc kubenswrapper[4672]: I0217 16:58:27.153355 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qpfxv" podUID="df6c79ca-fcc8-41b7-9af9-977995644317" containerName="registry-server" probeResult="failure" output=< Feb 17 16:58:27 crc kubenswrapper[4672]: timeout: failed to connect service ":50051" within 1s Feb 17 16:58:27 crc kubenswrapper[4672]: > Feb 17 16:58:27 crc kubenswrapper[4672]: I0217 16:58:27.947205 4672 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:58:28 crc kubenswrapper[4672]: E0217 16:58:28.096323 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:58:28 crc kubenswrapper[4672]: E0217 16:58:28.096407 4672 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:58:28 crc kubenswrapper[4672]: E0217 16:58:28.096577 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},Volume
Mount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nq9ps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-qrhj8_openstack(dc5471f5-2491-4841-be45-09c8f14b35c0): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 17 16:58:28 crc kubenswrapper[4672]: E0217 16:58:28.098473 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 16:58:30 crc kubenswrapper[4672]: E0217 16:58:30.947734 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 16:58:36 crc kubenswrapper[4672]: I0217 16:58:36.154197 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qpfxv" Feb 17 16:58:36 crc kubenswrapper[4672]: I0217 16:58:36.209014 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qpfxv" Feb 17 16:58:36 crc kubenswrapper[4672]: I0217 16:58:36.400504 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qpfxv"] Feb 17 16:58:37 crc kubenswrapper[4672]: I0217 16:58:37.457671 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qpfxv" podUID="df6c79ca-fcc8-41b7-9af9-977995644317" containerName="registry-server" containerID="cri-o://e89be5bbdaee046568896d107229a544011c4e0c4fbf3c442e9f9215c6e3da34" gracePeriod=2 Feb 17 16:58:38 
crc kubenswrapper[4672]: I0217 16:58:38.077023 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qpfxv" Feb 17 16:58:38 crc kubenswrapper[4672]: I0217 16:58:38.212284 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2gt2\" (UniqueName: \"kubernetes.io/projected/df6c79ca-fcc8-41b7-9af9-977995644317-kube-api-access-k2gt2\") pod \"df6c79ca-fcc8-41b7-9af9-977995644317\" (UID: \"df6c79ca-fcc8-41b7-9af9-977995644317\") " Feb 17 16:58:38 crc kubenswrapper[4672]: I0217 16:58:38.212620 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df6c79ca-fcc8-41b7-9af9-977995644317-utilities\") pod \"df6c79ca-fcc8-41b7-9af9-977995644317\" (UID: \"df6c79ca-fcc8-41b7-9af9-977995644317\") " Feb 17 16:58:38 crc kubenswrapper[4672]: I0217 16:58:38.212894 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df6c79ca-fcc8-41b7-9af9-977995644317-catalog-content\") pod \"df6c79ca-fcc8-41b7-9af9-977995644317\" (UID: \"df6c79ca-fcc8-41b7-9af9-977995644317\") " Feb 17 16:58:38 crc kubenswrapper[4672]: I0217 16:58:38.213900 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df6c79ca-fcc8-41b7-9af9-977995644317-utilities" (OuterVolumeSpecName: "utilities") pod "df6c79ca-fcc8-41b7-9af9-977995644317" (UID: "df6c79ca-fcc8-41b7-9af9-977995644317"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:58:38 crc kubenswrapper[4672]: I0217 16:58:38.219350 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df6c79ca-fcc8-41b7-9af9-977995644317-kube-api-access-k2gt2" (OuterVolumeSpecName: "kube-api-access-k2gt2") pod "df6c79ca-fcc8-41b7-9af9-977995644317" (UID: "df6c79ca-fcc8-41b7-9af9-977995644317"). InnerVolumeSpecName "kube-api-access-k2gt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:58:38 crc kubenswrapper[4672]: I0217 16:58:38.315646 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df6c79ca-fcc8-41b7-9af9-977995644317-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:58:38 crc kubenswrapper[4672]: I0217 16:58:38.316010 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2gt2\" (UniqueName: \"kubernetes.io/projected/df6c79ca-fcc8-41b7-9af9-977995644317-kube-api-access-k2gt2\") on node \"crc\" DevicePath \"\"" Feb 17 16:58:38 crc kubenswrapper[4672]: I0217 16:58:38.380754 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df6c79ca-fcc8-41b7-9af9-977995644317-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df6c79ca-fcc8-41b7-9af9-977995644317" (UID: "df6c79ca-fcc8-41b7-9af9-977995644317"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:58:38 crc kubenswrapper[4672]: I0217 16:58:38.418483 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df6c79ca-fcc8-41b7-9af9-977995644317-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 16:58:38 crc kubenswrapper[4672]: I0217 16:58:38.470991 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpfxv" event={"ID":"df6c79ca-fcc8-41b7-9af9-977995644317","Type":"ContainerDied","Data":"e89be5bbdaee046568896d107229a544011c4e0c4fbf3c442e9f9215c6e3da34"}
Feb 17 16:58:38 crc kubenswrapper[4672]: I0217 16:58:38.471060 4672 scope.go:117] "RemoveContainer" containerID="e89be5bbdaee046568896d107229a544011c4e0c4fbf3c442e9f9215c6e3da34"
Feb 17 16:58:38 crc kubenswrapper[4672]: I0217 16:58:38.471012 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qpfxv"
Feb 17 16:58:38 crc kubenswrapper[4672]: I0217 16:58:38.471450 4672 generic.go:334] "Generic (PLEG): container finished" podID="df6c79ca-fcc8-41b7-9af9-977995644317" containerID="e89be5bbdaee046568896d107229a544011c4e0c4fbf3c442e9f9215c6e3da34" exitCode=0
Feb 17 16:58:38 crc kubenswrapper[4672]: I0217 16:58:38.471541 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpfxv" event={"ID":"df6c79ca-fcc8-41b7-9af9-977995644317","Type":"ContainerDied","Data":"f7e9fa6feecb3481e433fd0b4697707d03d8f037fe5fee2b0ff80ad9524d8720"}
Feb 17 16:58:38 crc kubenswrapper[4672]: I0217 16:58:38.494069 4672 scope.go:117] "RemoveContainer" containerID="147bba47d22cd2c21120e67dbcfcfdaccae7189f961a58ccc92f8a11ef9a6ee2"
Feb 17 16:58:38 crc kubenswrapper[4672]: I0217 16:58:38.521879 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qpfxv"]
Feb 17 16:58:38 crc kubenswrapper[4672]: I0217 16:58:38.538217 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qpfxv"]
Feb 17 16:58:38 crc kubenswrapper[4672]: I0217 16:58:38.555886 4672 scope.go:117] "RemoveContainer" containerID="15c577e1ede92de54a3785c10a421dc75ec3ec42221afc585edd6e8f6ea3b0b8"
Feb 17 16:58:38 crc kubenswrapper[4672]: I0217 16:58:38.584644 4672 scope.go:117] "RemoveContainer" containerID="e89be5bbdaee046568896d107229a544011c4e0c4fbf3c442e9f9215c6e3da34"
Feb 17 16:58:38 crc kubenswrapper[4672]: E0217 16:58:38.585099 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e89be5bbdaee046568896d107229a544011c4e0c4fbf3c442e9f9215c6e3da34\": container with ID starting with e89be5bbdaee046568896d107229a544011c4e0c4fbf3c442e9f9215c6e3da34 not found: ID does not exist" containerID="e89be5bbdaee046568896d107229a544011c4e0c4fbf3c442e9f9215c6e3da34"
Feb 17 16:58:38 crc kubenswrapper[4672]: I0217 16:58:38.585143 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e89be5bbdaee046568896d107229a544011c4e0c4fbf3c442e9f9215c6e3da34"} err="failed to get container status \"e89be5bbdaee046568896d107229a544011c4e0c4fbf3c442e9f9215c6e3da34\": rpc error: code = NotFound desc = could not find container \"e89be5bbdaee046568896d107229a544011c4e0c4fbf3c442e9f9215c6e3da34\": container with ID starting with e89be5bbdaee046568896d107229a544011c4e0c4fbf3c442e9f9215c6e3da34 not found: ID does not exist"
Feb 17 16:58:38 crc kubenswrapper[4672]: I0217 16:58:38.585174 4672 scope.go:117] "RemoveContainer" containerID="147bba47d22cd2c21120e67dbcfcfdaccae7189f961a58ccc92f8a11ef9a6ee2"
Feb 17 16:58:38 crc kubenswrapper[4672]: E0217 16:58:38.585681 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"147bba47d22cd2c21120e67dbcfcfdaccae7189f961a58ccc92f8a11ef9a6ee2\": container with ID starting with 147bba47d22cd2c21120e67dbcfcfdaccae7189f961a58ccc92f8a11ef9a6ee2 not found: ID does not exist" containerID="147bba47d22cd2c21120e67dbcfcfdaccae7189f961a58ccc92f8a11ef9a6ee2"
Feb 17 16:58:38 crc kubenswrapper[4672]: I0217 16:58:38.585719 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"147bba47d22cd2c21120e67dbcfcfdaccae7189f961a58ccc92f8a11ef9a6ee2"} err="failed to get container status \"147bba47d22cd2c21120e67dbcfcfdaccae7189f961a58ccc92f8a11ef9a6ee2\": rpc error: code = NotFound desc = could not find container \"147bba47d22cd2c21120e67dbcfcfdaccae7189f961a58ccc92f8a11ef9a6ee2\": container with ID starting with 147bba47d22cd2c21120e67dbcfcfdaccae7189f961a58ccc92f8a11ef9a6ee2 not found: ID does not exist"
Feb 17 16:58:38 crc kubenswrapper[4672]: I0217 16:58:38.585746 4672 scope.go:117] "RemoveContainer" containerID="15c577e1ede92de54a3785c10a421dc75ec3ec42221afc585edd6e8f6ea3b0b8"
Feb 17 16:58:38 crc kubenswrapper[4672]: E0217 16:58:38.586358 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15c577e1ede92de54a3785c10a421dc75ec3ec42221afc585edd6e8f6ea3b0b8\": container with ID starting with 15c577e1ede92de54a3785c10a421dc75ec3ec42221afc585edd6e8f6ea3b0b8 not found: ID does not exist" containerID="15c577e1ede92de54a3785c10a421dc75ec3ec42221afc585edd6e8f6ea3b0b8"
Feb 17 16:58:38 crc kubenswrapper[4672]: I0217 16:58:38.586390 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15c577e1ede92de54a3785c10a421dc75ec3ec42221afc585edd6e8f6ea3b0b8"} err="failed to get container status \"15c577e1ede92de54a3785c10a421dc75ec3ec42221afc585edd6e8f6ea3b0b8\": rpc error: code = NotFound desc = could not find container \"15c577e1ede92de54a3785c10a421dc75ec3ec42221afc585edd6e8f6ea3b0b8\": container with ID starting with 15c577e1ede92de54a3785c10a421dc75ec3ec42221afc585edd6e8f6ea3b0b8 not found: ID does not exist"
Feb 17 16:58:38 crc kubenswrapper[4672]: E0217 16:58:38.946987 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:58:39 crc kubenswrapper[4672]: I0217 16:58:39.958647 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df6c79ca-fcc8-41b7-9af9-977995644317" path="/var/lib/kubelet/pods/df6c79ca-fcc8-41b7-9af9-977995644317/volumes"
Feb 17 16:58:44 crc kubenswrapper[4672]: E0217 16:58:44.074614 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 17 16:58:44 crc kubenswrapper[4672]: E0217 16:58:44.075222 4672 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 17 16:58:44 crc kubenswrapper[4672]: E0217 16:58:44.075364 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66h7h644h64ch5f8h565hfch5dh56chfdh8hfdh5b5h567h6dh665h557h74h665hcbh96h659h554h589h57fh5d9h55h564hcfh5dhffhfdq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tx4bs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(9e58ce9b-ddd5-42bb-8e07-08a22c8871a5): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 17 16:58:44 crc kubenswrapper[4672]: E0217 16:58:44.076589 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:58:52 crc kubenswrapper[4672]: E0217 16:58:52.947216 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:58:58 crc kubenswrapper[4672]: E0217 16:58:58.947992 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:59:05 crc kubenswrapper[4672]: E0217 16:59:05.948800 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:59:12 crc kubenswrapper[4672]: E0217 16:59:12.948364 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:59:18 crc kubenswrapper[4672]: E0217 16:59:18.947641 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:59:23 crc kubenswrapper[4672]: E0217 16:59:23.947833 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:59:33 crc kubenswrapper[4672]: E0217 16:59:33.946907 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:59:34 crc kubenswrapper[4672]: E0217 16:59:34.946490 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:59:43 crc kubenswrapper[4672]: I0217 16:59:43.142700 4672 generic.go:334] "Generic (PLEG): container finished" podID="58598c29-6a4f-43a2-87b4-3247b3144660" containerID="a8bbea965613bc64d840dd07b7b860bab526717decc7fbe312ce11a359f4328c" exitCode=2
Feb 17 16:59:43 crc kubenswrapper[4672]: I0217 16:59:43.142850 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8gpjs" event={"ID":"58598c29-6a4f-43a2-87b4-3247b3144660","Type":"ContainerDied","Data":"a8bbea965613bc64d840dd07b7b860bab526717decc7fbe312ce11a359f4328c"}
Feb 17 16:59:44 crc kubenswrapper[4672]: I0217 16:59:44.822362 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8gpjs"
Feb 17 16:59:44 crc kubenswrapper[4672]: I0217 16:59:44.926217 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/58598c29-6a4f-43a2-87b4-3247b3144660-ssh-key-openstack-edpm-ipam\") pod \"58598c29-6a4f-43a2-87b4-3247b3144660\" (UID: \"58598c29-6a4f-43a2-87b4-3247b3144660\") "
Feb 17 16:59:44 crc kubenswrapper[4672]: I0217 16:59:44.926317 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58598c29-6a4f-43a2-87b4-3247b3144660-inventory\") pod \"58598c29-6a4f-43a2-87b4-3247b3144660\" (UID: \"58598c29-6a4f-43a2-87b4-3247b3144660\") "
Feb 17 16:59:44 crc kubenswrapper[4672]: I0217 16:59:44.926434 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tkx7\" (UniqueName: \"kubernetes.io/projected/58598c29-6a4f-43a2-87b4-3247b3144660-kube-api-access-9tkx7\") pod \"58598c29-6a4f-43a2-87b4-3247b3144660\" (UID: \"58598c29-6a4f-43a2-87b4-3247b3144660\") "
Feb 17 16:59:44 crc kubenswrapper[4672]: I0217 16:59:44.933312 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58598c29-6a4f-43a2-87b4-3247b3144660-kube-api-access-9tkx7" (OuterVolumeSpecName: "kube-api-access-9tkx7") pod "58598c29-6a4f-43a2-87b4-3247b3144660" (UID: "58598c29-6a4f-43a2-87b4-3247b3144660"). InnerVolumeSpecName "kube-api-access-9tkx7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:59:44 crc kubenswrapper[4672]: I0217 16:59:44.966092 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58598c29-6a4f-43a2-87b4-3247b3144660-inventory" (OuterVolumeSpecName: "inventory") pod "58598c29-6a4f-43a2-87b4-3247b3144660" (UID: "58598c29-6a4f-43a2-87b4-3247b3144660"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:59:44 crc kubenswrapper[4672]: I0217 16:59:44.973472 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58598c29-6a4f-43a2-87b4-3247b3144660-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "58598c29-6a4f-43a2-87b4-3247b3144660" (UID: "58598c29-6a4f-43a2-87b4-3247b3144660"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:59:45 crc kubenswrapper[4672]: I0217 16:59:45.029859 4672 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/58598c29-6a4f-43a2-87b4-3247b3144660-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 17 16:59:45 crc kubenswrapper[4672]: I0217 16:59:45.029904 4672 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58598c29-6a4f-43a2-87b4-3247b3144660-inventory\") on node \"crc\" DevicePath \"\""
Feb 17 16:59:45 crc kubenswrapper[4672]: I0217 16:59:45.029915 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tkx7\" (UniqueName: \"kubernetes.io/projected/58598c29-6a4f-43a2-87b4-3247b3144660-kube-api-access-9tkx7\") on node \"crc\" DevicePath \"\""
Feb 17 16:59:45 crc kubenswrapper[4672]: I0217 16:59:45.172309 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8gpjs" event={"ID":"58598c29-6a4f-43a2-87b4-3247b3144660","Type":"ContainerDied","Data":"27972ccafda1b8482813a63a984eb3113b87f35cce08dfd73c2be95a6ae50203"}
Feb 17 16:59:45 crc kubenswrapper[4672]: I0217 16:59:45.172352 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27972ccafda1b8482813a63a984eb3113b87f35cce08dfd73c2be95a6ae50203"
Feb 17 16:59:45 crc kubenswrapper[4672]: I0217 16:59:45.172385 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8gpjs"
Feb 17 16:59:46 crc kubenswrapper[4672]: E0217 16:59:46.947226 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 16:59:48 crc kubenswrapper[4672]: E0217 16:59:48.947144 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 16:59:57 crc kubenswrapper[4672]: I0217 16:59:57.565884 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 16:59:57 crc kubenswrapper[4672]: I0217 16:59:57.566600 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 16:59:57 crc kubenswrapper[4672]: E0217 16:59:57.948023 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:00:00 crc kubenswrapper[4672]: I0217 17:00:00.148497 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522460-p2p4h"]
Feb 17 17:00:00 crc kubenswrapper[4672]: E0217 17:00:00.149167 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df6c79ca-fcc8-41b7-9af9-977995644317" containerName="extract-utilities"
Feb 17 17:00:00 crc kubenswrapper[4672]: I0217 17:00:00.149180 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="df6c79ca-fcc8-41b7-9af9-977995644317" containerName="extract-utilities"
Feb 17 17:00:00 crc kubenswrapper[4672]: E0217 17:00:00.149198 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58598c29-6a4f-43a2-87b4-3247b3144660" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 17 17:00:00 crc kubenswrapper[4672]: I0217 17:00:00.149205 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="58598c29-6a4f-43a2-87b4-3247b3144660" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 17 17:00:00 crc kubenswrapper[4672]: E0217 17:00:00.149217 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df6c79ca-fcc8-41b7-9af9-977995644317" containerName="registry-server"
Feb 17 17:00:00 crc kubenswrapper[4672]: I0217 17:00:00.149223 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="df6c79ca-fcc8-41b7-9af9-977995644317" containerName="registry-server"
Feb 17 17:00:00 crc kubenswrapper[4672]: E0217 17:00:00.149242 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df6c79ca-fcc8-41b7-9af9-977995644317" containerName="extract-content"
Feb 17 17:00:00 crc kubenswrapper[4672]: I0217 17:00:00.149249 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="df6c79ca-fcc8-41b7-9af9-977995644317" containerName="extract-content"
Feb 17 17:00:00 crc kubenswrapper[4672]: I0217 17:00:00.149416 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="58598c29-6a4f-43a2-87b4-3247b3144660" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 17 17:00:00 crc kubenswrapper[4672]: I0217 17:00:00.149431 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="df6c79ca-fcc8-41b7-9af9-977995644317" containerName="registry-server"
Feb 17 17:00:00 crc kubenswrapper[4672]: I0217 17:00:00.150165 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-p2p4h"
Feb 17 17:00:00 crc kubenswrapper[4672]: I0217 17:00:00.152464 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 17 17:00:00 crc kubenswrapper[4672]: I0217 17:00:00.152694 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 17 17:00:00 crc kubenswrapper[4672]: I0217 17:00:00.174245 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522460-p2p4h"]
Feb 17 17:00:00 crc kubenswrapper[4672]: I0217 17:00:00.255018 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p8wx\" (UniqueName: \"kubernetes.io/projected/409c876b-73ac-42a3-8d51-84a3d5b13476-kube-api-access-5p8wx\") pod \"collect-profiles-29522460-p2p4h\" (UID: \"409c876b-73ac-42a3-8d51-84a3d5b13476\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-p2p4h"
Feb 17 17:00:00 crc kubenswrapper[4672]: I0217 17:00:00.255620 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/409c876b-73ac-42a3-8d51-84a3d5b13476-config-volume\") pod \"collect-profiles-29522460-p2p4h\" (UID: \"409c876b-73ac-42a3-8d51-84a3d5b13476\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-p2p4h"
Feb 17 17:00:00 crc kubenswrapper[4672]: I0217 17:00:00.255741 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/409c876b-73ac-42a3-8d51-84a3d5b13476-secret-volume\") pod \"collect-profiles-29522460-p2p4h\" (UID: \"409c876b-73ac-42a3-8d51-84a3d5b13476\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-p2p4h"
Feb 17 17:00:00 crc kubenswrapper[4672]: I0217 17:00:00.358040 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/409c876b-73ac-42a3-8d51-84a3d5b13476-config-volume\") pod \"collect-profiles-29522460-p2p4h\" (UID: \"409c876b-73ac-42a3-8d51-84a3d5b13476\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-p2p4h"
Feb 17 17:00:00 crc kubenswrapper[4672]: I0217 17:00:00.358344 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/409c876b-73ac-42a3-8d51-84a3d5b13476-secret-volume\") pod \"collect-profiles-29522460-p2p4h\" (UID: \"409c876b-73ac-42a3-8d51-84a3d5b13476\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-p2p4h"
Feb 17 17:00:00 crc kubenswrapper[4672]: I0217 17:00:00.358479 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5p8wx\" (UniqueName: \"kubernetes.io/projected/409c876b-73ac-42a3-8d51-84a3d5b13476-kube-api-access-5p8wx\") pod \"collect-profiles-29522460-p2p4h\" (UID: \"409c876b-73ac-42a3-8d51-84a3d5b13476\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-p2p4h"
Feb 17 17:00:00 crc kubenswrapper[4672]: I0217 17:00:00.359037 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/409c876b-73ac-42a3-8d51-84a3d5b13476-config-volume\") pod \"collect-profiles-29522460-p2p4h\" (UID: \"409c876b-73ac-42a3-8d51-84a3d5b13476\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-p2p4h"
Feb 17 17:00:00 crc kubenswrapper[4672]: I0217 17:00:00.378213 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/409c876b-73ac-42a3-8d51-84a3d5b13476-secret-volume\") pod \"collect-profiles-29522460-p2p4h\" (UID: \"409c876b-73ac-42a3-8d51-84a3d5b13476\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-p2p4h"
Feb 17 17:00:00 crc kubenswrapper[4672]: I0217 17:00:00.387051 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5p8wx\" (UniqueName: \"kubernetes.io/projected/409c876b-73ac-42a3-8d51-84a3d5b13476-kube-api-access-5p8wx\") pod \"collect-profiles-29522460-p2p4h\" (UID: \"409c876b-73ac-42a3-8d51-84a3d5b13476\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-p2p4h"
Feb 17 17:00:00 crc kubenswrapper[4672]: I0217 17:00:00.473668 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-p2p4h"
Feb 17 17:00:00 crc kubenswrapper[4672]: I0217 17:00:00.931014 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522460-p2p4h"]
Feb 17 17:00:00 crc kubenswrapper[4672]: E0217 17:00:00.958948 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:00:01 crc kubenswrapper[4672]: I0217 17:00:01.044737 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9jz46"]
Feb 17 17:00:01 crc kubenswrapper[4672]: I0217 17:00:01.046833 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9jz46"
Feb 17 17:00:01 crc kubenswrapper[4672]: I0217 17:00:01.075064 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkmcr\" (UniqueName: \"kubernetes.io/projected/49271048-ee8a-4d8e-a915-ce6001a18924-kube-api-access-fkmcr\") pod \"certified-operators-9jz46\" (UID: \"49271048-ee8a-4d8e-a915-ce6001a18924\") " pod="openshift-marketplace/certified-operators-9jz46"
Feb 17 17:00:01 crc kubenswrapper[4672]: I0217 17:00:01.075200 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49271048-ee8a-4d8e-a915-ce6001a18924-utilities\") pod \"certified-operators-9jz46\" (UID: \"49271048-ee8a-4d8e-a915-ce6001a18924\") " pod="openshift-marketplace/certified-operators-9jz46"
Feb 17 17:00:01 crc kubenswrapper[4672]: I0217 17:00:01.075235 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49271048-ee8a-4d8e-a915-ce6001a18924-catalog-content\") pod \"certified-operators-9jz46\" (UID: \"49271048-ee8a-4d8e-a915-ce6001a18924\") " pod="openshift-marketplace/certified-operators-9jz46"
Feb 17 17:00:01 crc kubenswrapper[4672]: I0217 17:00:01.076376 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9jz46"]
Feb 17 17:00:01 crc kubenswrapper[4672]: I0217 17:00:01.176820 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49271048-ee8a-4d8e-a915-ce6001a18924-catalog-content\") pod \"certified-operators-9jz46\" (UID: \"49271048-ee8a-4d8e-a915-ce6001a18924\") " pod="openshift-marketplace/certified-operators-9jz46"
Feb 17 17:00:01 crc kubenswrapper[4672]: I0217 17:00:01.177202 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkmcr\" (UniqueName: \"kubernetes.io/projected/49271048-ee8a-4d8e-a915-ce6001a18924-kube-api-access-fkmcr\") pod \"certified-operators-9jz46\" (UID: \"49271048-ee8a-4d8e-a915-ce6001a18924\") " pod="openshift-marketplace/certified-operators-9jz46"
Feb 17 17:00:01 crc kubenswrapper[4672]: I0217 17:00:01.177278 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49271048-ee8a-4d8e-a915-ce6001a18924-utilities\") pod \"certified-operators-9jz46\" (UID: \"49271048-ee8a-4d8e-a915-ce6001a18924\") " pod="openshift-marketplace/certified-operators-9jz46"
Feb 17 17:00:01 crc kubenswrapper[4672]: I0217 17:00:01.177677 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49271048-ee8a-4d8e-a915-ce6001a18924-utilities\") pod \"certified-operators-9jz46\" (UID: \"49271048-ee8a-4d8e-a915-ce6001a18924\") " pod="openshift-marketplace/certified-operators-9jz46"
Feb 17 17:00:01 crc kubenswrapper[4672]: I0217 17:00:01.178163 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49271048-ee8a-4d8e-a915-ce6001a18924-catalog-content\") pod \"certified-operators-9jz46\" (UID: \"49271048-ee8a-4d8e-a915-ce6001a18924\") " pod="openshift-marketplace/certified-operators-9jz46"
Feb 17 17:00:01 crc kubenswrapper[4672]: I0217 17:00:01.198607 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkmcr\" (UniqueName: \"kubernetes.io/projected/49271048-ee8a-4d8e-a915-ce6001a18924-kube-api-access-fkmcr\") pod \"certified-operators-9jz46\" (UID: \"49271048-ee8a-4d8e-a915-ce6001a18924\") " pod="openshift-marketplace/certified-operators-9jz46"
Feb 17 17:00:01 crc kubenswrapper[4672]: I0217 17:00:01.320287 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-p2p4h" event={"ID":"409c876b-73ac-42a3-8d51-84a3d5b13476","Type":"ContainerStarted","Data":"911c63877b641b431ff3078b67b53fe6f9cebff3ea2a2b5171cbf7875a405b50"}
Feb 17 17:00:01 crc kubenswrapper[4672]: I0217 17:00:01.320328 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-p2p4h" event={"ID":"409c876b-73ac-42a3-8d51-84a3d5b13476","Type":"ContainerStarted","Data":"41aa823d7f95a25d458045b6e135f1f36ff60a7ad1bb4dff636be54ad5a4312e"}
Feb 17 17:00:01 crc kubenswrapper[4672]: I0217 17:00:01.344952 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-p2p4h" podStartSLOduration=1.344931217 podStartE2EDuration="1.344931217s" podCreationTimestamp="2026-02-17 17:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:00:01.33670371 +0000 UTC m=+3410.090792442" watchObservedRunningTime="2026-02-17 17:00:01.344931217 +0000 UTC m=+3410.099019949"
Feb 17 17:00:01 crc kubenswrapper[4672]: I0217 17:00:01.404997 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9jz46"
Feb 17 17:00:01 crc kubenswrapper[4672]: I0217 17:00:01.959336 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9jz46"]
Feb 17 17:00:02 crc kubenswrapper[4672]: I0217 17:00:02.333290 4672 generic.go:334] "Generic (PLEG): container finished" podID="49271048-ee8a-4d8e-a915-ce6001a18924" containerID="2f974cf8dbc347ca6388ad3923766a6f7ec05ea96d5ef3a5b1e7b2b7864d5818" exitCode=0
Feb 17 17:00:02 crc kubenswrapper[4672]: I0217 17:00:02.333390 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jz46" event={"ID":"49271048-ee8a-4d8e-a915-ce6001a18924","Type":"ContainerDied","Data":"2f974cf8dbc347ca6388ad3923766a6f7ec05ea96d5ef3a5b1e7b2b7864d5818"}
Feb 17 17:00:02 crc kubenswrapper[4672]: I0217 17:00:02.333648 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jz46" event={"ID":"49271048-ee8a-4d8e-a915-ce6001a18924","Type":"ContainerStarted","Data":"5f1e0e8346ff3a08731367bdf33195de06314100f8b5bdf687ae42202e86107e"}
Feb 17 17:00:02 crc kubenswrapper[4672]: I0217 17:00:02.336239 4672 generic.go:334] "Generic (PLEG): container finished" podID="409c876b-73ac-42a3-8d51-84a3d5b13476" containerID="911c63877b641b431ff3078b67b53fe6f9cebff3ea2a2b5171cbf7875a405b50" exitCode=0
Feb 17 17:00:02 crc kubenswrapper[4672]: I0217 17:00:02.336279 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-p2p4h" event={"ID":"409c876b-73ac-42a3-8d51-84a3d5b13476","Type":"ContainerDied","Data":"911c63877b641b431ff3078b67b53fe6f9cebff3ea2a2b5171cbf7875a405b50"}
Feb 17 17:00:03 crc kubenswrapper[4672]: I0217 17:00:03.775619 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-p2p4h"
Feb 17 17:00:03 crc kubenswrapper[4672]: I0217 17:00:03.833435 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/409c876b-73ac-42a3-8d51-84a3d5b13476-secret-volume\") pod \"409c876b-73ac-42a3-8d51-84a3d5b13476\" (UID: \"409c876b-73ac-42a3-8d51-84a3d5b13476\") "
Feb 17 17:00:03 crc kubenswrapper[4672]: I0217 17:00:03.833743 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/409c876b-73ac-42a3-8d51-84a3d5b13476-config-volume\") pod \"409c876b-73ac-42a3-8d51-84a3d5b13476\" (UID: \"409c876b-73ac-42a3-8d51-84a3d5b13476\") "
Feb 17 17:00:03 crc kubenswrapper[4672]: I0217 17:00:03.833837 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5p8wx\" (UniqueName: \"kubernetes.io/projected/409c876b-73ac-42a3-8d51-84a3d5b13476-kube-api-access-5p8wx\") pod \"409c876b-73ac-42a3-8d51-84a3d5b13476\" (UID: \"409c876b-73ac-42a3-8d51-84a3d5b13476\") "
Feb 17 17:00:03 crc kubenswrapper[4672]: I0217 17:00:03.835354 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/409c876b-73ac-42a3-8d51-84a3d5b13476-config-volume" (OuterVolumeSpecName: "config-volume") pod "409c876b-73ac-42a3-8d51-84a3d5b13476" (UID: "409c876b-73ac-42a3-8d51-84a3d5b13476"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 17:00:03 crc kubenswrapper[4672]: I0217 17:00:03.839654 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/409c876b-73ac-42a3-8d51-84a3d5b13476-kube-api-access-5p8wx" (OuterVolumeSpecName: "kube-api-access-5p8wx") pod "409c876b-73ac-42a3-8d51-84a3d5b13476" (UID: "409c876b-73ac-42a3-8d51-84a3d5b13476"). InnerVolumeSpecName "kube-api-access-5p8wx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 17:00:03 crc kubenswrapper[4672]: I0217 17:00:03.842657 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/409c876b-73ac-42a3-8d51-84a3d5b13476-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "409c876b-73ac-42a3-8d51-84a3d5b13476" (UID: "409c876b-73ac-42a3-8d51-84a3d5b13476"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 17:00:03 crc kubenswrapper[4672]: I0217 17:00:03.936389 4672 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/409c876b-73ac-42a3-8d51-84a3d5b13476-config-volume\") on node \"crc\" DevicePath \"\""
Feb 17 17:00:03 crc kubenswrapper[4672]: I0217 17:00:03.936467 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5p8wx\" (UniqueName: \"kubernetes.io/projected/409c876b-73ac-42a3-8d51-84a3d5b13476-kube-api-access-5p8wx\") on node \"crc\" DevicePath \"\""
Feb 17 17:00:03 crc kubenswrapper[4672]: I0217 17:00:03.936487 4672 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/409c876b-73ac-42a3-8d51-84a3d5b13476-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 17 17:00:04 crc kubenswrapper[4672]: I0217 17:00:04.358569 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-p2p4h"
event={"ID":"409c876b-73ac-42a3-8d51-84a3d5b13476","Type":"ContainerDied","Data":"41aa823d7f95a25d458045b6e135f1f36ff60a7ad1bb4dff636be54ad5a4312e"} Feb 17 17:00:04 crc kubenswrapper[4672]: I0217 17:00:04.358626 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41aa823d7f95a25d458045b6e135f1f36ff60a7ad1bb4dff636be54ad5a4312e" Feb 17 17:00:04 crc kubenswrapper[4672]: I0217 17:00:04.358687 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-p2p4h" Feb 17 17:00:04 crc kubenswrapper[4672]: I0217 17:00:04.437037 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522415-k4s4t"] Feb 17 17:00:04 crc kubenswrapper[4672]: I0217 17:00:04.445458 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522415-k4s4t"] Feb 17 17:00:05 crc kubenswrapper[4672]: I0217 17:00:05.602704 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jz46" event={"ID":"49271048-ee8a-4d8e-a915-ce6001a18924","Type":"ContainerStarted","Data":"06f38511628410888458772a1949ba28a6df43b37fcc3f94d0cb8c59dc61e124"} Feb 17 17:00:05 crc kubenswrapper[4672]: I0217 17:00:05.957406 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64bc792d-4f6e-45f7-948d-5b879a249534" path="/var/lib/kubelet/pods/64bc792d-4f6e-45f7-948d-5b879a249534/volumes" Feb 17 17:00:07 crc kubenswrapper[4672]: I0217 17:00:07.635541 4672 generic.go:334] "Generic (PLEG): container finished" podID="49271048-ee8a-4d8e-a915-ce6001a18924" containerID="06f38511628410888458772a1949ba28a6df43b37fcc3f94d0cb8c59dc61e124" exitCode=0 Feb 17 17:00:07 crc kubenswrapper[4672]: I0217 17:00:07.635623 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jz46" 
event={"ID":"49271048-ee8a-4d8e-a915-ce6001a18924","Type":"ContainerDied","Data":"06f38511628410888458772a1949ba28a6df43b37fcc3f94d0cb8c59dc61e124"} Feb 17 17:00:08 crc kubenswrapper[4672]: I0217 17:00:08.648475 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jz46" event={"ID":"49271048-ee8a-4d8e-a915-ce6001a18924","Type":"ContainerStarted","Data":"36259afaf7dbda69e8e4b517af01c424233262673b0ce360500ddf7bcecf26bd"} Feb 17 17:00:08 crc kubenswrapper[4672]: I0217 17:00:08.685588 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9jz46" podStartSLOduration=1.8841687889999998 podStartE2EDuration="7.685563654s" podCreationTimestamp="2026-02-17 17:00:01 +0000 UTC" firstStartedPulling="2026-02-17 17:00:02.336211447 +0000 UTC m=+3411.090300179" lastFinishedPulling="2026-02-17 17:00:08.137606312 +0000 UTC m=+3416.891695044" observedRunningTime="2026-02-17 17:00:08.66758451 +0000 UTC m=+3417.421673282" watchObservedRunningTime="2026-02-17 17:00:08.685563654 +0000 UTC m=+3417.439652396" Feb 17 17:00:10 crc kubenswrapper[4672]: E0217 17:00:10.946909 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:00:11 crc kubenswrapper[4672]: I0217 17:00:11.405787 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9jz46" Feb 17 17:00:11 crc kubenswrapper[4672]: I0217 17:00:11.405855 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9jz46" Feb 17 17:00:11 crc kubenswrapper[4672]: I0217 17:00:11.490691 4672 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9jz46" Feb 17 17:00:15 crc kubenswrapper[4672]: E0217 17:00:15.948601 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:00:19 crc kubenswrapper[4672]: I0217 17:00:19.160048 4672 scope.go:117] "RemoveContainer" containerID="966bd3b083f2fe4aa3a60d5243e3ae215223140b40519d3eb2d4b9d06efbf9f4" Feb 17 17:00:21 crc kubenswrapper[4672]: I0217 17:00:21.479977 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9jz46" Feb 17 17:00:21 crc kubenswrapper[4672]: I0217 17:00:21.542465 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9jz46"] Feb 17 17:00:21 crc kubenswrapper[4672]: I0217 17:00:21.784067 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9jz46" podUID="49271048-ee8a-4d8e-a915-ce6001a18924" containerName="registry-server" containerID="cri-o://36259afaf7dbda69e8e4b517af01c424233262673b0ce360500ddf7bcecf26bd" gracePeriod=2 Feb 17 17:00:22 crc kubenswrapper[4672]: I0217 17:00:22.331726 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9jz46" Feb 17 17:00:22 crc kubenswrapper[4672]: I0217 17:00:22.468795 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49271048-ee8a-4d8e-a915-ce6001a18924-utilities\") pod \"49271048-ee8a-4d8e-a915-ce6001a18924\" (UID: \"49271048-ee8a-4d8e-a915-ce6001a18924\") " Feb 17 17:00:22 crc kubenswrapper[4672]: I0217 17:00:22.469149 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkmcr\" (UniqueName: \"kubernetes.io/projected/49271048-ee8a-4d8e-a915-ce6001a18924-kube-api-access-fkmcr\") pod \"49271048-ee8a-4d8e-a915-ce6001a18924\" (UID: \"49271048-ee8a-4d8e-a915-ce6001a18924\") " Feb 17 17:00:22 crc kubenswrapper[4672]: I0217 17:00:22.469382 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49271048-ee8a-4d8e-a915-ce6001a18924-catalog-content\") pod \"49271048-ee8a-4d8e-a915-ce6001a18924\" (UID: \"49271048-ee8a-4d8e-a915-ce6001a18924\") " Feb 17 17:00:22 crc kubenswrapper[4672]: I0217 17:00:22.470372 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49271048-ee8a-4d8e-a915-ce6001a18924-utilities" (OuterVolumeSpecName: "utilities") pod "49271048-ee8a-4d8e-a915-ce6001a18924" (UID: "49271048-ee8a-4d8e-a915-ce6001a18924"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:00:22 crc kubenswrapper[4672]: I0217 17:00:22.474918 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49271048-ee8a-4d8e-a915-ce6001a18924-kube-api-access-fkmcr" (OuterVolumeSpecName: "kube-api-access-fkmcr") pod "49271048-ee8a-4d8e-a915-ce6001a18924" (UID: "49271048-ee8a-4d8e-a915-ce6001a18924"). InnerVolumeSpecName "kube-api-access-fkmcr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:00:22 crc kubenswrapper[4672]: I0217 17:00:22.536010 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49271048-ee8a-4d8e-a915-ce6001a18924-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "49271048-ee8a-4d8e-a915-ce6001a18924" (UID: "49271048-ee8a-4d8e-a915-ce6001a18924"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:00:22 crc kubenswrapper[4672]: I0217 17:00:22.572356 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49271048-ee8a-4d8e-a915-ce6001a18924-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:00:22 crc kubenswrapper[4672]: I0217 17:00:22.572434 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49271048-ee8a-4d8e-a915-ce6001a18924-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:00:22 crc kubenswrapper[4672]: I0217 17:00:22.572468 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkmcr\" (UniqueName: \"kubernetes.io/projected/49271048-ee8a-4d8e-a915-ce6001a18924-kube-api-access-fkmcr\") on node \"crc\" DevicePath \"\"" Feb 17 17:00:22 crc kubenswrapper[4672]: I0217 17:00:22.797461 4672 generic.go:334] "Generic (PLEG): container finished" podID="49271048-ee8a-4d8e-a915-ce6001a18924" containerID="36259afaf7dbda69e8e4b517af01c424233262673b0ce360500ddf7bcecf26bd" exitCode=0 Feb 17 17:00:22 crc kubenswrapper[4672]: I0217 17:00:22.797594 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jz46" event={"ID":"49271048-ee8a-4d8e-a915-ce6001a18924","Type":"ContainerDied","Data":"36259afaf7dbda69e8e4b517af01c424233262673b0ce360500ddf7bcecf26bd"} Feb 17 17:00:22 crc kubenswrapper[4672]: I0217 17:00:22.797636 4672 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-9jz46" event={"ID":"49271048-ee8a-4d8e-a915-ce6001a18924","Type":"ContainerDied","Data":"5f1e0e8346ff3a08731367bdf33195de06314100f8b5bdf687ae42202e86107e"} Feb 17 17:00:22 crc kubenswrapper[4672]: I0217 17:00:22.797658 4672 scope.go:117] "RemoveContainer" containerID="36259afaf7dbda69e8e4b517af01c424233262673b0ce360500ddf7bcecf26bd" Feb 17 17:00:22 crc kubenswrapper[4672]: I0217 17:00:22.797754 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9jz46" Feb 17 17:00:22 crc kubenswrapper[4672]: I0217 17:00:22.824712 4672 scope.go:117] "RemoveContainer" containerID="06f38511628410888458772a1949ba28a6df43b37fcc3f94d0cb8c59dc61e124" Feb 17 17:00:22 crc kubenswrapper[4672]: I0217 17:00:22.848914 4672 scope.go:117] "RemoveContainer" containerID="2f974cf8dbc347ca6388ad3923766a6f7ec05ea96d5ef3a5b1e7b2b7864d5818" Feb 17 17:00:22 crc kubenswrapper[4672]: I0217 17:00:22.863117 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9jz46"] Feb 17 17:00:22 crc kubenswrapper[4672]: I0217 17:00:22.874804 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9jz46"] Feb 17 17:00:22 crc kubenswrapper[4672]: I0217 17:00:22.897268 4672 scope.go:117] "RemoveContainer" containerID="36259afaf7dbda69e8e4b517af01c424233262673b0ce360500ddf7bcecf26bd" Feb 17 17:00:22 crc kubenswrapper[4672]: E0217 17:00:22.897797 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36259afaf7dbda69e8e4b517af01c424233262673b0ce360500ddf7bcecf26bd\": container with ID starting with 36259afaf7dbda69e8e4b517af01c424233262673b0ce360500ddf7bcecf26bd not found: ID does not exist" containerID="36259afaf7dbda69e8e4b517af01c424233262673b0ce360500ddf7bcecf26bd" Feb 17 17:00:22 crc kubenswrapper[4672]: I0217 
17:00:22.897841 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36259afaf7dbda69e8e4b517af01c424233262673b0ce360500ddf7bcecf26bd"} err="failed to get container status \"36259afaf7dbda69e8e4b517af01c424233262673b0ce360500ddf7bcecf26bd\": rpc error: code = NotFound desc = could not find container \"36259afaf7dbda69e8e4b517af01c424233262673b0ce360500ddf7bcecf26bd\": container with ID starting with 36259afaf7dbda69e8e4b517af01c424233262673b0ce360500ddf7bcecf26bd not found: ID does not exist" Feb 17 17:00:22 crc kubenswrapper[4672]: I0217 17:00:22.897870 4672 scope.go:117] "RemoveContainer" containerID="06f38511628410888458772a1949ba28a6df43b37fcc3f94d0cb8c59dc61e124" Feb 17 17:00:22 crc kubenswrapper[4672]: E0217 17:00:22.898152 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06f38511628410888458772a1949ba28a6df43b37fcc3f94d0cb8c59dc61e124\": container with ID starting with 06f38511628410888458772a1949ba28a6df43b37fcc3f94d0cb8c59dc61e124 not found: ID does not exist" containerID="06f38511628410888458772a1949ba28a6df43b37fcc3f94d0cb8c59dc61e124" Feb 17 17:00:22 crc kubenswrapper[4672]: I0217 17:00:22.898177 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06f38511628410888458772a1949ba28a6df43b37fcc3f94d0cb8c59dc61e124"} err="failed to get container status \"06f38511628410888458772a1949ba28a6df43b37fcc3f94d0cb8c59dc61e124\": rpc error: code = NotFound desc = could not find container \"06f38511628410888458772a1949ba28a6df43b37fcc3f94d0cb8c59dc61e124\": container with ID starting with 06f38511628410888458772a1949ba28a6df43b37fcc3f94d0cb8c59dc61e124 not found: ID does not exist" Feb 17 17:00:22 crc kubenswrapper[4672]: I0217 17:00:22.898191 4672 scope.go:117] "RemoveContainer" containerID="2f974cf8dbc347ca6388ad3923766a6f7ec05ea96d5ef3a5b1e7b2b7864d5818" Feb 17 17:00:22 crc 
kubenswrapper[4672]: E0217 17:00:22.898435 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f974cf8dbc347ca6388ad3923766a6f7ec05ea96d5ef3a5b1e7b2b7864d5818\": container with ID starting with 2f974cf8dbc347ca6388ad3923766a6f7ec05ea96d5ef3a5b1e7b2b7864d5818 not found: ID does not exist" containerID="2f974cf8dbc347ca6388ad3923766a6f7ec05ea96d5ef3a5b1e7b2b7864d5818" Feb 17 17:00:22 crc kubenswrapper[4672]: I0217 17:00:22.898462 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f974cf8dbc347ca6388ad3923766a6f7ec05ea96d5ef3a5b1e7b2b7864d5818"} err="failed to get container status \"2f974cf8dbc347ca6388ad3923766a6f7ec05ea96d5ef3a5b1e7b2b7864d5818\": rpc error: code = NotFound desc = could not find container \"2f974cf8dbc347ca6388ad3923766a6f7ec05ea96d5ef3a5b1e7b2b7864d5818\": container with ID starting with 2f974cf8dbc347ca6388ad3923766a6f7ec05ea96d5ef3a5b1e7b2b7864d5818 not found: ID does not exist" Feb 17 17:00:23 crc kubenswrapper[4672]: E0217 17:00:23.947825 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:00:23 crc kubenswrapper[4672]: I0217 17:00:23.957589 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49271048-ee8a-4d8e-a915-ce6001a18924" path="/var/lib/kubelet/pods/49271048-ee8a-4d8e-a915-ce6001a18924/volumes" Feb 17 17:00:26 crc kubenswrapper[4672]: E0217 17:00:26.948854 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:00:27 crc kubenswrapper[4672]: I0217 17:00:27.566328 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:00:27 crc kubenswrapper[4672]: I0217 17:00:27.566427 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:00:38 crc kubenswrapper[4672]: E0217 17:00:38.947497 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:00:38 crc kubenswrapper[4672]: E0217 17:00:38.947501 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:00:53 crc kubenswrapper[4672]: E0217 17:00:53.946863 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:00:53 crc kubenswrapper[4672]: E0217 17:00:53.946944 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:00:57 crc kubenswrapper[4672]: I0217 17:00:57.565970 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:00:57 crc kubenswrapper[4672]: I0217 17:00:57.566499 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:00:57 crc kubenswrapper[4672]: I0217 17:00:57.566630 4672 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" Feb 17 17:00:57 crc kubenswrapper[4672]: I0217 17:00:57.567997 4672 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"399c1b28a73d295545a85ac9813544c6363f8e54412c109aba83e40a76db0358"} pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 17:00:57 crc 
kubenswrapper[4672]: I0217 17:00:57.568155 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" containerID="cri-o://399c1b28a73d295545a85ac9813544c6363f8e54412c109aba83e40a76db0358" gracePeriod=600 Feb 17 17:00:58 crc kubenswrapper[4672]: I0217 17:00:58.227048 4672 generic.go:334] "Generic (PLEG): container finished" podID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerID="399c1b28a73d295545a85ac9813544c6363f8e54412c109aba83e40a76db0358" exitCode=0 Feb 17 17:00:58 crc kubenswrapper[4672]: I0217 17:00:58.227152 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerDied","Data":"399c1b28a73d295545a85ac9813544c6363f8e54412c109aba83e40a76db0358"} Feb 17 17:00:58 crc kubenswrapper[4672]: I0217 17:00:58.227407 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerStarted","Data":"89ca90c0b062cf33d871d85b80e45467b2dbcf33865b48786892cf4297ab65bd"} Feb 17 17:00:58 crc kubenswrapper[4672]: I0217 17:00:58.227435 4672 scope.go:117] "RemoveContainer" containerID="9dea5ec410e293f594728e3d38216f730173d601a89a768840e0fb078db09fcc" Feb 17 17:01:00 crc kubenswrapper[4672]: I0217 17:01:00.157251 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29522461-d5dt4"] Feb 17 17:01:00 crc kubenswrapper[4672]: E0217 17:01:00.158338 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49271048-ee8a-4d8e-a915-ce6001a18924" containerName="extract-utilities" Feb 17 17:01:00 crc kubenswrapper[4672]: I0217 17:01:00.158356 4672 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="49271048-ee8a-4d8e-a915-ce6001a18924" containerName="extract-utilities" Feb 17 17:01:00 crc kubenswrapper[4672]: E0217 17:01:00.158387 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49271048-ee8a-4d8e-a915-ce6001a18924" containerName="extract-content" Feb 17 17:01:00 crc kubenswrapper[4672]: I0217 17:01:00.158393 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="49271048-ee8a-4d8e-a915-ce6001a18924" containerName="extract-content" Feb 17 17:01:00 crc kubenswrapper[4672]: E0217 17:01:00.158418 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49271048-ee8a-4d8e-a915-ce6001a18924" containerName="registry-server" Feb 17 17:01:00 crc kubenswrapper[4672]: I0217 17:01:00.158424 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="49271048-ee8a-4d8e-a915-ce6001a18924" containerName="registry-server" Feb 17 17:01:00 crc kubenswrapper[4672]: E0217 17:01:00.158433 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="409c876b-73ac-42a3-8d51-84a3d5b13476" containerName="collect-profiles" Feb 17 17:01:00 crc kubenswrapper[4672]: I0217 17:01:00.158439 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="409c876b-73ac-42a3-8d51-84a3d5b13476" containerName="collect-profiles" Feb 17 17:01:00 crc kubenswrapper[4672]: I0217 17:01:00.158636 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="409c876b-73ac-42a3-8d51-84a3d5b13476" containerName="collect-profiles" Feb 17 17:01:00 crc kubenswrapper[4672]: I0217 17:01:00.158649 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="49271048-ee8a-4d8e-a915-ce6001a18924" containerName="registry-server" Feb 17 17:01:00 crc kubenswrapper[4672]: I0217 17:01:00.159373 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29522461-d5dt4" Feb 17 17:01:00 crc kubenswrapper[4672]: I0217 17:01:00.174616 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29522461-d5dt4"] Feb 17 17:01:00 crc kubenswrapper[4672]: I0217 17:01:00.269588 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb6wn\" (UniqueName: \"kubernetes.io/projected/73c49726-b590-48ff-a8a1-bdaa0683a643-kube-api-access-jb6wn\") pod \"keystone-cron-29522461-d5dt4\" (UID: \"73c49726-b590-48ff-a8a1-bdaa0683a643\") " pod="openstack/keystone-cron-29522461-d5dt4" Feb 17 17:01:00 crc kubenswrapper[4672]: I0217 17:01:00.269738 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73c49726-b590-48ff-a8a1-bdaa0683a643-config-data\") pod \"keystone-cron-29522461-d5dt4\" (UID: \"73c49726-b590-48ff-a8a1-bdaa0683a643\") " pod="openstack/keystone-cron-29522461-d5dt4" Feb 17 17:01:00 crc kubenswrapper[4672]: I0217 17:01:00.269801 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c49726-b590-48ff-a8a1-bdaa0683a643-combined-ca-bundle\") pod \"keystone-cron-29522461-d5dt4\" (UID: \"73c49726-b590-48ff-a8a1-bdaa0683a643\") " pod="openstack/keystone-cron-29522461-d5dt4" Feb 17 17:01:00 crc kubenswrapper[4672]: I0217 17:01:00.269826 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/73c49726-b590-48ff-a8a1-bdaa0683a643-fernet-keys\") pod \"keystone-cron-29522461-d5dt4\" (UID: \"73c49726-b590-48ff-a8a1-bdaa0683a643\") " pod="openstack/keystone-cron-29522461-d5dt4" Feb 17 17:01:00 crc kubenswrapper[4672]: I0217 17:01:00.371970 4672 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73c49726-b590-48ff-a8a1-bdaa0683a643-config-data\") pod \"keystone-cron-29522461-d5dt4\" (UID: \"73c49726-b590-48ff-a8a1-bdaa0683a643\") " pod="openstack/keystone-cron-29522461-d5dt4" Feb 17 17:01:00 crc kubenswrapper[4672]: I0217 17:01:00.372041 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c49726-b590-48ff-a8a1-bdaa0683a643-combined-ca-bundle\") pod \"keystone-cron-29522461-d5dt4\" (UID: \"73c49726-b590-48ff-a8a1-bdaa0683a643\") " pod="openstack/keystone-cron-29522461-d5dt4" Feb 17 17:01:00 crc kubenswrapper[4672]: I0217 17:01:00.372063 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/73c49726-b590-48ff-a8a1-bdaa0683a643-fernet-keys\") pod \"keystone-cron-29522461-d5dt4\" (UID: \"73c49726-b590-48ff-a8a1-bdaa0683a643\") " pod="openstack/keystone-cron-29522461-d5dt4" Feb 17 17:01:00 crc kubenswrapper[4672]: I0217 17:01:00.372206 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb6wn\" (UniqueName: \"kubernetes.io/projected/73c49726-b590-48ff-a8a1-bdaa0683a643-kube-api-access-jb6wn\") pod \"keystone-cron-29522461-d5dt4\" (UID: \"73c49726-b590-48ff-a8a1-bdaa0683a643\") " pod="openstack/keystone-cron-29522461-d5dt4" Feb 17 17:01:00 crc kubenswrapper[4672]: I0217 17:01:00.377942 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/73c49726-b590-48ff-a8a1-bdaa0683a643-fernet-keys\") pod \"keystone-cron-29522461-d5dt4\" (UID: \"73c49726-b590-48ff-a8a1-bdaa0683a643\") " pod="openstack/keystone-cron-29522461-d5dt4" Feb 17 17:01:00 crc kubenswrapper[4672]: I0217 17:01:00.379189 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/73c49726-b590-48ff-a8a1-bdaa0683a643-config-data\") pod \"keystone-cron-29522461-d5dt4\" (UID: \"73c49726-b590-48ff-a8a1-bdaa0683a643\") " pod="openstack/keystone-cron-29522461-d5dt4" Feb 17 17:01:00 crc kubenswrapper[4672]: I0217 17:01:00.381159 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c49726-b590-48ff-a8a1-bdaa0683a643-combined-ca-bundle\") pod \"keystone-cron-29522461-d5dt4\" (UID: \"73c49726-b590-48ff-a8a1-bdaa0683a643\") " pod="openstack/keystone-cron-29522461-d5dt4" Feb 17 17:01:00 crc kubenswrapper[4672]: I0217 17:01:00.389158 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb6wn\" (UniqueName: \"kubernetes.io/projected/73c49726-b590-48ff-a8a1-bdaa0683a643-kube-api-access-jb6wn\") pod \"keystone-cron-29522461-d5dt4\" (UID: \"73c49726-b590-48ff-a8a1-bdaa0683a643\") " pod="openstack/keystone-cron-29522461-d5dt4" Feb 17 17:01:00 crc kubenswrapper[4672]: I0217 17:01:00.481174 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29522461-d5dt4" Feb 17 17:01:00 crc kubenswrapper[4672]: W0217 17:01:00.957837 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73c49726_b590_48ff_a8a1_bdaa0683a643.slice/crio-6f4d56ebca2704ec6a452d1351523facf753cc1dd75be3acbfda70149efc78ed WatchSource:0}: Error finding container 6f4d56ebca2704ec6a452d1351523facf753cc1dd75be3acbfda70149efc78ed: Status 404 returned error can't find the container with id 6f4d56ebca2704ec6a452d1351523facf753cc1dd75be3acbfda70149efc78ed Feb 17 17:01:00 crc kubenswrapper[4672]: I0217 17:01:00.960653 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29522461-d5dt4"] Feb 17 17:01:01 crc kubenswrapper[4672]: I0217 17:01:01.265018 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522461-d5dt4" event={"ID":"73c49726-b590-48ff-a8a1-bdaa0683a643","Type":"ContainerStarted","Data":"6f4d56ebca2704ec6a452d1351523facf753cc1dd75be3acbfda70149efc78ed"} Feb 17 17:01:02 crc kubenswrapper[4672]: I0217 17:01:02.026377 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn8bm"] Feb 17 17:01:02 crc kubenswrapper[4672]: I0217 17:01:02.029135 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn8bm" Feb 17 17:01:02 crc kubenswrapper[4672]: I0217 17:01:02.032914 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 17:01:02 crc kubenswrapper[4672]: I0217 17:01:02.032926 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 17:01:02 crc kubenswrapper[4672]: I0217 17:01:02.033001 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 17:01:02 crc kubenswrapper[4672]: I0217 17:01:02.033031 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z6sng" Feb 17 17:01:02 crc kubenswrapper[4672]: I0217 17:01:02.040103 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn8bm"] Feb 17 17:01:02 crc kubenswrapper[4672]: I0217 17:01:02.111050 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/015a71e3-cfc6-4bd6-bc90-2efce2db5885-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fn8bm\" (UID: \"015a71e3-cfc6-4bd6-bc90-2efce2db5885\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn8bm" Feb 17 17:01:02 crc kubenswrapper[4672]: I0217 17:01:02.111130 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/015a71e3-cfc6-4bd6-bc90-2efce2db5885-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fn8bm\" (UID: \"015a71e3-cfc6-4bd6-bc90-2efce2db5885\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn8bm" Feb 17 17:01:02 crc kubenswrapper[4672]: I0217 
17:01:02.111220 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8zpp\" (UniqueName: \"kubernetes.io/projected/015a71e3-cfc6-4bd6-bc90-2efce2db5885-kube-api-access-p8zpp\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fn8bm\" (UID: \"015a71e3-cfc6-4bd6-bc90-2efce2db5885\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn8bm" Feb 17 17:01:02 crc kubenswrapper[4672]: I0217 17:01:02.213622 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/015a71e3-cfc6-4bd6-bc90-2efce2db5885-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fn8bm\" (UID: \"015a71e3-cfc6-4bd6-bc90-2efce2db5885\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn8bm" Feb 17 17:01:02 crc kubenswrapper[4672]: I0217 17:01:02.213682 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/015a71e3-cfc6-4bd6-bc90-2efce2db5885-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fn8bm\" (UID: \"015a71e3-cfc6-4bd6-bc90-2efce2db5885\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn8bm" Feb 17 17:01:02 crc kubenswrapper[4672]: I0217 17:01:02.213702 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8zpp\" (UniqueName: \"kubernetes.io/projected/015a71e3-cfc6-4bd6-bc90-2efce2db5885-kube-api-access-p8zpp\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fn8bm\" (UID: \"015a71e3-cfc6-4bd6-bc90-2efce2db5885\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn8bm" Feb 17 17:01:02 crc kubenswrapper[4672]: I0217 17:01:02.220634 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/015a71e3-cfc6-4bd6-bc90-2efce2db5885-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fn8bm\" (UID: \"015a71e3-cfc6-4bd6-bc90-2efce2db5885\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn8bm" Feb 17 17:01:02 crc kubenswrapper[4672]: I0217 17:01:02.229098 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/015a71e3-cfc6-4bd6-bc90-2efce2db5885-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fn8bm\" (UID: \"015a71e3-cfc6-4bd6-bc90-2efce2db5885\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn8bm" Feb 17 17:01:02 crc kubenswrapper[4672]: I0217 17:01:02.230671 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8zpp\" (UniqueName: \"kubernetes.io/projected/015a71e3-cfc6-4bd6-bc90-2efce2db5885-kube-api-access-p8zpp\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fn8bm\" (UID: \"015a71e3-cfc6-4bd6-bc90-2efce2db5885\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn8bm" Feb 17 17:01:02 crc kubenswrapper[4672]: I0217 17:01:02.276284 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522461-d5dt4" event={"ID":"73c49726-b590-48ff-a8a1-bdaa0683a643","Type":"ContainerStarted","Data":"7f78ba6b7897436eba7fc66e412a3080c5638a4da809dd18349339d356864d69"} Feb 17 17:01:02 crc kubenswrapper[4672]: I0217 17:01:02.361971 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn8bm" Feb 17 17:01:02 crc kubenswrapper[4672]: I0217 17:01:02.918414 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29522461-d5dt4" podStartSLOduration=2.9183971680000003 podStartE2EDuration="2.918397168s" podCreationTimestamp="2026-02-17 17:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:01:02.295221224 +0000 UTC m=+3471.049309976" watchObservedRunningTime="2026-02-17 17:01:02.918397168 +0000 UTC m=+3471.672485900" Feb 17 17:01:02 crc kubenswrapper[4672]: I0217 17:01:02.919425 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn8bm"] Feb 17 17:01:03 crc kubenswrapper[4672]: I0217 17:01:03.286898 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn8bm" event={"ID":"015a71e3-cfc6-4bd6-bc90-2efce2db5885","Type":"ContainerStarted","Data":"9fd67c7655a02379894785efc95986b3c8f8b638586ec952cf2cd4d2c7023423"} Feb 17 17:01:04 crc kubenswrapper[4672]: I0217 17:01:04.299272 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn8bm" event={"ID":"015a71e3-cfc6-4bd6-bc90-2efce2db5885","Type":"ContainerStarted","Data":"9853cfac8c888c5fa3e5c4b951d1c0a434a30a29d0fb6afa082e36c27f67ab37"} Feb 17 17:01:04 crc kubenswrapper[4672]: I0217 17:01:04.321776 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn8bm" podStartSLOduration=1.5035893420000002 podStartE2EDuration="2.321752662s" podCreationTimestamp="2026-02-17 17:01:02 +0000 UTC" firstStartedPulling="2026-02-17 17:01:02.919195609 +0000 UTC m=+3471.673284331" 
lastFinishedPulling="2026-02-17 17:01:03.737358909 +0000 UTC m=+3472.491447651" observedRunningTime="2026-02-17 17:01:04.319966965 +0000 UTC m=+3473.074055737" watchObservedRunningTime="2026-02-17 17:01:04.321752662 +0000 UTC m=+3473.075841404" Feb 17 17:01:04 crc kubenswrapper[4672]: E0217 17:01:04.947659 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:01:06 crc kubenswrapper[4672]: E0217 17:01:06.946127 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:01:07 crc kubenswrapper[4672]: I0217 17:01:07.341296 4672 generic.go:334] "Generic (PLEG): container finished" podID="73c49726-b590-48ff-a8a1-bdaa0683a643" containerID="7f78ba6b7897436eba7fc66e412a3080c5638a4da809dd18349339d356864d69" exitCode=0 Feb 17 17:01:07 crc kubenswrapper[4672]: I0217 17:01:07.341344 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522461-d5dt4" event={"ID":"73c49726-b590-48ff-a8a1-bdaa0683a643","Type":"ContainerDied","Data":"7f78ba6b7897436eba7fc66e412a3080c5638a4da809dd18349339d356864d69"} Feb 17 17:01:08 crc kubenswrapper[4672]: I0217 17:01:08.779610 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29522461-d5dt4" Feb 17 17:01:08 crc kubenswrapper[4672]: I0217 17:01:08.859380 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c49726-b590-48ff-a8a1-bdaa0683a643-combined-ca-bundle\") pod \"73c49726-b590-48ff-a8a1-bdaa0683a643\" (UID: \"73c49726-b590-48ff-a8a1-bdaa0683a643\") " Feb 17 17:01:08 crc kubenswrapper[4672]: I0217 17:01:08.859589 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jb6wn\" (UniqueName: \"kubernetes.io/projected/73c49726-b590-48ff-a8a1-bdaa0683a643-kube-api-access-jb6wn\") pod \"73c49726-b590-48ff-a8a1-bdaa0683a643\" (UID: \"73c49726-b590-48ff-a8a1-bdaa0683a643\") " Feb 17 17:01:08 crc kubenswrapper[4672]: I0217 17:01:08.859623 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73c49726-b590-48ff-a8a1-bdaa0683a643-config-data\") pod \"73c49726-b590-48ff-a8a1-bdaa0683a643\" (UID: \"73c49726-b590-48ff-a8a1-bdaa0683a643\") " Feb 17 17:01:08 crc kubenswrapper[4672]: I0217 17:01:08.859689 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/73c49726-b590-48ff-a8a1-bdaa0683a643-fernet-keys\") pod \"73c49726-b590-48ff-a8a1-bdaa0683a643\" (UID: \"73c49726-b590-48ff-a8a1-bdaa0683a643\") " Feb 17 17:01:08 crc kubenswrapper[4672]: I0217 17:01:08.865900 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73c49726-b590-48ff-a8a1-bdaa0683a643-kube-api-access-jb6wn" (OuterVolumeSpecName: "kube-api-access-jb6wn") pod "73c49726-b590-48ff-a8a1-bdaa0683a643" (UID: "73c49726-b590-48ff-a8a1-bdaa0683a643"). InnerVolumeSpecName "kube-api-access-jb6wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:01:08 crc kubenswrapper[4672]: I0217 17:01:08.869349 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73c49726-b590-48ff-a8a1-bdaa0683a643-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "73c49726-b590-48ff-a8a1-bdaa0683a643" (UID: "73c49726-b590-48ff-a8a1-bdaa0683a643"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:01:08 crc kubenswrapper[4672]: I0217 17:01:08.897753 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73c49726-b590-48ff-a8a1-bdaa0683a643-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "73c49726-b590-48ff-a8a1-bdaa0683a643" (UID: "73c49726-b590-48ff-a8a1-bdaa0683a643"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:01:08 crc kubenswrapper[4672]: I0217 17:01:08.934658 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73c49726-b590-48ff-a8a1-bdaa0683a643-config-data" (OuterVolumeSpecName: "config-data") pod "73c49726-b590-48ff-a8a1-bdaa0683a643" (UID: "73c49726-b590-48ff-a8a1-bdaa0683a643"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:01:08 crc kubenswrapper[4672]: I0217 17:01:08.962242 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jb6wn\" (UniqueName: \"kubernetes.io/projected/73c49726-b590-48ff-a8a1-bdaa0683a643-kube-api-access-jb6wn\") on node \"crc\" DevicePath \"\"" Feb 17 17:01:08 crc kubenswrapper[4672]: I0217 17:01:08.962284 4672 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73c49726-b590-48ff-a8a1-bdaa0683a643-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 17:01:08 crc kubenswrapper[4672]: I0217 17:01:08.962297 4672 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/73c49726-b590-48ff-a8a1-bdaa0683a643-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 17 17:01:08 crc kubenswrapper[4672]: I0217 17:01:08.962308 4672 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c49726-b590-48ff-a8a1-bdaa0683a643-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 17:01:09 crc kubenswrapper[4672]: I0217 17:01:09.363730 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522461-d5dt4" event={"ID":"73c49726-b590-48ff-a8a1-bdaa0683a643","Type":"ContainerDied","Data":"6f4d56ebca2704ec6a452d1351523facf753cc1dd75be3acbfda70149efc78ed"} Feb 17 17:01:09 crc kubenswrapper[4672]: I0217 17:01:09.363773 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f4d56ebca2704ec6a452d1351523facf753cc1dd75be3acbfda70149efc78ed" Feb 17 17:01:09 crc kubenswrapper[4672]: I0217 17:01:09.363793 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29522461-d5dt4" Feb 17 17:01:16 crc kubenswrapper[4672]: E0217 17:01:16.947174 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:01:18 crc kubenswrapper[4672]: E0217 17:01:18.947195 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:01:31 crc kubenswrapper[4672]: E0217 17:01:31.953089 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:01:31 crc kubenswrapper[4672]: E0217 17:01:31.953129 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:01:44 crc kubenswrapper[4672]: E0217 17:01:44.946803 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:01:46 crc kubenswrapper[4672]: E0217 17:01:46.947616 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:01:55 crc kubenswrapper[4672]: E0217 17:01:55.948881 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:01:58 crc kubenswrapper[4672]: E0217 17:01:58.947951 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:02:03 crc kubenswrapper[4672]: I0217 17:02:03.429445 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wzcp4"] Feb 17 17:02:03 crc kubenswrapper[4672]: E0217 17:02:03.430984 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73c49726-b590-48ff-a8a1-bdaa0683a643" containerName="keystone-cron" Feb 17 17:02:03 crc kubenswrapper[4672]: I0217 17:02:03.431009 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="73c49726-b590-48ff-a8a1-bdaa0683a643" containerName="keystone-cron" Feb 17 17:02:03 crc 
kubenswrapper[4672]: I0217 17:02:03.431398 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="73c49726-b590-48ff-a8a1-bdaa0683a643" containerName="keystone-cron" Feb 17 17:02:03 crc kubenswrapper[4672]: I0217 17:02:03.434239 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wzcp4" Feb 17 17:02:03 crc kubenswrapper[4672]: I0217 17:02:03.468611 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wzcp4"] Feb 17 17:02:03 crc kubenswrapper[4672]: I0217 17:02:03.602315 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl7mj\" (UniqueName: \"kubernetes.io/projected/c29b3d53-bb2f-4fbe-a509-1ea28306ea66-kube-api-access-fl7mj\") pod \"community-operators-wzcp4\" (UID: \"c29b3d53-bb2f-4fbe-a509-1ea28306ea66\") " pod="openshift-marketplace/community-operators-wzcp4" Feb 17 17:02:03 crc kubenswrapper[4672]: I0217 17:02:03.602908 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c29b3d53-bb2f-4fbe-a509-1ea28306ea66-catalog-content\") pod \"community-operators-wzcp4\" (UID: \"c29b3d53-bb2f-4fbe-a509-1ea28306ea66\") " pod="openshift-marketplace/community-operators-wzcp4" Feb 17 17:02:03 crc kubenswrapper[4672]: I0217 17:02:03.602959 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c29b3d53-bb2f-4fbe-a509-1ea28306ea66-utilities\") pod \"community-operators-wzcp4\" (UID: \"c29b3d53-bb2f-4fbe-a509-1ea28306ea66\") " pod="openshift-marketplace/community-operators-wzcp4" Feb 17 17:02:03 crc kubenswrapper[4672]: I0217 17:02:03.705590 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl7mj\" (UniqueName: 
\"kubernetes.io/projected/c29b3d53-bb2f-4fbe-a509-1ea28306ea66-kube-api-access-fl7mj\") pod \"community-operators-wzcp4\" (UID: \"c29b3d53-bb2f-4fbe-a509-1ea28306ea66\") " pod="openshift-marketplace/community-operators-wzcp4" Feb 17 17:02:03 crc kubenswrapper[4672]: I0217 17:02:03.705645 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c29b3d53-bb2f-4fbe-a509-1ea28306ea66-catalog-content\") pod \"community-operators-wzcp4\" (UID: \"c29b3d53-bb2f-4fbe-a509-1ea28306ea66\") " pod="openshift-marketplace/community-operators-wzcp4" Feb 17 17:02:03 crc kubenswrapper[4672]: I0217 17:02:03.705680 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c29b3d53-bb2f-4fbe-a509-1ea28306ea66-utilities\") pod \"community-operators-wzcp4\" (UID: \"c29b3d53-bb2f-4fbe-a509-1ea28306ea66\") " pod="openshift-marketplace/community-operators-wzcp4" Feb 17 17:02:03 crc kubenswrapper[4672]: I0217 17:02:03.706208 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c29b3d53-bb2f-4fbe-a509-1ea28306ea66-catalog-content\") pod \"community-operators-wzcp4\" (UID: \"c29b3d53-bb2f-4fbe-a509-1ea28306ea66\") " pod="openshift-marketplace/community-operators-wzcp4" Feb 17 17:02:03 crc kubenswrapper[4672]: I0217 17:02:03.706330 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c29b3d53-bb2f-4fbe-a509-1ea28306ea66-utilities\") pod \"community-operators-wzcp4\" (UID: \"c29b3d53-bb2f-4fbe-a509-1ea28306ea66\") " pod="openshift-marketplace/community-operators-wzcp4" Feb 17 17:02:03 crc kubenswrapper[4672]: I0217 17:02:03.747030 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl7mj\" (UniqueName: 
\"kubernetes.io/projected/c29b3d53-bb2f-4fbe-a509-1ea28306ea66-kube-api-access-fl7mj\") pod \"community-operators-wzcp4\" (UID: \"c29b3d53-bb2f-4fbe-a509-1ea28306ea66\") " pod="openshift-marketplace/community-operators-wzcp4" Feb 17 17:02:03 crc kubenswrapper[4672]: I0217 17:02:03.782015 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wzcp4" Feb 17 17:02:04 crc kubenswrapper[4672]: I0217 17:02:04.331979 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wzcp4"] Feb 17 17:02:04 crc kubenswrapper[4672]: I0217 17:02:04.411289 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bk75p"] Feb 17 17:02:04 crc kubenswrapper[4672]: I0217 17:02:04.413794 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bk75p" Feb 17 17:02:04 crc kubenswrapper[4672]: I0217 17:02:04.454957 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bk75p"] Feb 17 17:02:04 crc kubenswrapper[4672]: I0217 17:02:04.545990 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51029833-b38a-4970-9324-020ad69edeb1-catalog-content\") pod \"redhat-marketplace-bk75p\" (UID: \"51029833-b38a-4970-9324-020ad69edeb1\") " pod="openshift-marketplace/redhat-marketplace-bk75p" Feb 17 17:02:04 crc kubenswrapper[4672]: I0217 17:02:04.546081 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggccn\" (UniqueName: \"kubernetes.io/projected/51029833-b38a-4970-9324-020ad69edeb1-kube-api-access-ggccn\") pod \"redhat-marketplace-bk75p\" (UID: \"51029833-b38a-4970-9324-020ad69edeb1\") " pod="openshift-marketplace/redhat-marketplace-bk75p" Feb 17 17:02:04 crc 
kubenswrapper[4672]: I0217 17:02:04.546259 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51029833-b38a-4970-9324-020ad69edeb1-utilities\") pod \"redhat-marketplace-bk75p\" (UID: \"51029833-b38a-4970-9324-020ad69edeb1\") " pod="openshift-marketplace/redhat-marketplace-bk75p" Feb 17 17:02:04 crc kubenswrapper[4672]: I0217 17:02:04.648378 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51029833-b38a-4970-9324-020ad69edeb1-catalog-content\") pod \"redhat-marketplace-bk75p\" (UID: \"51029833-b38a-4970-9324-020ad69edeb1\") " pod="openshift-marketplace/redhat-marketplace-bk75p" Feb 17 17:02:04 crc kubenswrapper[4672]: I0217 17:02:04.648681 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggccn\" (UniqueName: \"kubernetes.io/projected/51029833-b38a-4970-9324-020ad69edeb1-kube-api-access-ggccn\") pod \"redhat-marketplace-bk75p\" (UID: \"51029833-b38a-4970-9324-020ad69edeb1\") " pod="openshift-marketplace/redhat-marketplace-bk75p" Feb 17 17:02:04 crc kubenswrapper[4672]: I0217 17:02:04.648830 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51029833-b38a-4970-9324-020ad69edeb1-utilities\") pod \"redhat-marketplace-bk75p\" (UID: \"51029833-b38a-4970-9324-020ad69edeb1\") " pod="openshift-marketplace/redhat-marketplace-bk75p" Feb 17 17:02:04 crc kubenswrapper[4672]: I0217 17:02:04.649004 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51029833-b38a-4970-9324-020ad69edeb1-catalog-content\") pod \"redhat-marketplace-bk75p\" (UID: \"51029833-b38a-4970-9324-020ad69edeb1\") " pod="openshift-marketplace/redhat-marketplace-bk75p" Feb 17 17:02:04 crc kubenswrapper[4672]: 
I0217 17:02:04.649276 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51029833-b38a-4970-9324-020ad69edeb1-utilities\") pod \"redhat-marketplace-bk75p\" (UID: \"51029833-b38a-4970-9324-020ad69edeb1\") " pod="openshift-marketplace/redhat-marketplace-bk75p" Feb 17 17:02:04 crc kubenswrapper[4672]: I0217 17:02:04.671119 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggccn\" (UniqueName: \"kubernetes.io/projected/51029833-b38a-4970-9324-020ad69edeb1-kube-api-access-ggccn\") pod \"redhat-marketplace-bk75p\" (UID: \"51029833-b38a-4970-9324-020ad69edeb1\") " pod="openshift-marketplace/redhat-marketplace-bk75p" Feb 17 17:02:04 crc kubenswrapper[4672]: I0217 17:02:04.836438 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bk75p" Feb 17 17:02:04 crc kubenswrapper[4672]: I0217 17:02:04.898530 4672 generic.go:334] "Generic (PLEG): container finished" podID="c29b3d53-bb2f-4fbe-a509-1ea28306ea66" containerID="8f0c29e8cf23be12c803e6c3aa00c17e333982617fd824a0216fccc313e3b31d" exitCode=0 Feb 17 17:02:04 crc kubenswrapper[4672]: I0217 17:02:04.898775 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wzcp4" event={"ID":"c29b3d53-bb2f-4fbe-a509-1ea28306ea66","Type":"ContainerDied","Data":"8f0c29e8cf23be12c803e6c3aa00c17e333982617fd824a0216fccc313e3b31d"} Feb 17 17:02:04 crc kubenswrapper[4672]: I0217 17:02:04.898884 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wzcp4" event={"ID":"c29b3d53-bb2f-4fbe-a509-1ea28306ea66","Type":"ContainerStarted","Data":"65d49d66f894cd6a1393d7d281d34465fdfe45113345982a01bd7ce85fb1cfc7"} Feb 17 17:02:05 crc kubenswrapper[4672]: W0217 17:02:05.373100 4672 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51029833_b38a_4970_9324_020ad69edeb1.slice/crio-3d7af328afe26a18570131f9ee29c87be495cf027b0525973eb3220837f6d261 WatchSource:0}: Error finding container 3d7af328afe26a18570131f9ee29c87be495cf027b0525973eb3220837f6d261: Status 404 returned error can't find the container with id 3d7af328afe26a18570131f9ee29c87be495cf027b0525973eb3220837f6d261 Feb 17 17:02:05 crc kubenswrapper[4672]: I0217 17:02:05.379230 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bk75p"] Feb 17 17:02:05 crc kubenswrapper[4672]: I0217 17:02:05.908456 4672 generic.go:334] "Generic (PLEG): container finished" podID="51029833-b38a-4970-9324-020ad69edeb1" containerID="959e723ebde43c24f08bbee180919e39b62127fe18fa0d329c5afc5362ba22b2" exitCode=0 Feb 17 17:02:05 crc kubenswrapper[4672]: I0217 17:02:05.908502 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bk75p" event={"ID":"51029833-b38a-4970-9324-020ad69edeb1","Type":"ContainerDied","Data":"959e723ebde43c24f08bbee180919e39b62127fe18fa0d329c5afc5362ba22b2"} Feb 17 17:02:05 crc kubenswrapper[4672]: I0217 17:02:05.908565 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bk75p" event={"ID":"51029833-b38a-4970-9324-020ad69edeb1","Type":"ContainerStarted","Data":"3d7af328afe26a18570131f9ee29c87be495cf027b0525973eb3220837f6d261"} Feb 17 17:02:05 crc kubenswrapper[4672]: I0217 17:02:05.910465 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wzcp4" event={"ID":"c29b3d53-bb2f-4fbe-a509-1ea28306ea66","Type":"ContainerStarted","Data":"430dd5462ea101d36c908bb8f0addba67023715db79082d52f0b4dfd0e68a87a"} Feb 17 17:02:07 crc kubenswrapper[4672]: I0217 17:02:07.932352 4672 generic.go:334] "Generic (PLEG): container finished" podID="c29b3d53-bb2f-4fbe-a509-1ea28306ea66" 
containerID="430dd5462ea101d36c908bb8f0addba67023715db79082d52f0b4dfd0e68a87a" exitCode=0
Feb 17 17:02:07 crc kubenswrapper[4672]: I0217 17:02:07.932442 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wzcp4" event={"ID":"c29b3d53-bb2f-4fbe-a509-1ea28306ea66","Type":"ContainerDied","Data":"430dd5462ea101d36c908bb8f0addba67023715db79082d52f0b4dfd0e68a87a"}
Feb 17 17:02:07 crc kubenswrapper[4672]: I0217 17:02:07.937592 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bk75p" event={"ID":"51029833-b38a-4970-9324-020ad69edeb1","Type":"ContainerStarted","Data":"e2bd80b73ec7a2325cee045ea7c011743540c4e5a4892fc205b871c1f9465314"}
Feb 17 17:02:08 crc kubenswrapper[4672]: I0217 17:02:08.948010 4672 generic.go:334] "Generic (PLEG): container finished" podID="51029833-b38a-4970-9324-020ad69edeb1" containerID="e2bd80b73ec7a2325cee045ea7c011743540c4e5a4892fc205b871c1f9465314" exitCode=0
Feb 17 17:02:08 crc kubenswrapper[4672]: I0217 17:02:08.948062 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bk75p" event={"ID":"51029833-b38a-4970-9324-020ad69edeb1","Type":"ContainerDied","Data":"e2bd80b73ec7a2325cee045ea7c011743540c4e5a4892fc205b871c1f9465314"}
Feb 17 17:02:08 crc kubenswrapper[4672]: E0217 17:02:08.948649 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:02:09 crc kubenswrapper[4672]: I0217 17:02:09.961828 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wzcp4" event={"ID":"c29b3d53-bb2f-4fbe-a509-1ea28306ea66","Type":"ContainerStarted","Data":"d90bf9fd3bdadf4f738c9ab22ab9f41575271420d91935fa5474fa7ce22d4b0f"}
Feb 17 17:02:09 crc kubenswrapper[4672]: I0217 17:02:09.964751 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bk75p" event={"ID":"51029833-b38a-4970-9324-020ad69edeb1","Type":"ContainerStarted","Data":"b86310e9f802a4a38643ac128f2e361db4f22a0f44e36e5d3813377ebe295a86"}
Feb 17 17:02:09 crc kubenswrapper[4672]: I0217 17:02:09.986122 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wzcp4" podStartSLOduration=2.77783763 podStartE2EDuration="6.986102413s" podCreationTimestamp="2026-02-17 17:02:03 +0000 UTC" firstStartedPulling="2026-02-17 17:02:04.900635816 +0000 UTC m=+3533.654724548" lastFinishedPulling="2026-02-17 17:02:09.108900599 +0000 UTC m=+3537.862989331" observedRunningTime="2026-02-17 17:02:09.978575915 +0000 UTC m=+3538.732664667" watchObservedRunningTime="2026-02-17 17:02:09.986102413 +0000 UTC m=+3538.740191145"
Feb 17 17:02:10 crc kubenswrapper[4672]: I0217 17:02:10.010970 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bk75p" podStartSLOduration=2.567590043 podStartE2EDuration="6.010946768s" podCreationTimestamp="2026-02-17 17:02:04 +0000 UTC" firstStartedPulling="2026-02-17 17:02:05.909967112 +0000 UTC m=+3534.664055844" lastFinishedPulling="2026-02-17 17:02:09.353323837 +0000 UTC m=+3538.107412569" observedRunningTime="2026-02-17 17:02:10.000872113 +0000 UTC m=+3538.754960855" watchObservedRunningTime="2026-02-17 17:02:10.010946768 +0000 UTC m=+3538.765035500"
Feb 17 17:02:12 crc kubenswrapper[4672]: E0217 17:02:12.947169 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:02:13 crc kubenswrapper[4672]: I0217 17:02:13.782750 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wzcp4"
Feb 17 17:02:13 crc kubenswrapper[4672]: I0217 17:02:13.783307 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wzcp4"
Feb 17 17:02:13 crc kubenswrapper[4672]: I0217 17:02:13.829147 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wzcp4"
Feb 17 17:02:14 crc kubenswrapper[4672]: I0217 17:02:14.054707 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wzcp4"
Feb 17 17:02:14 crc kubenswrapper[4672]: I0217 17:02:14.836757 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bk75p"
Feb 17 17:02:14 crc kubenswrapper[4672]: I0217 17:02:14.836814 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bk75p"
Feb 17 17:02:14 crc kubenswrapper[4672]: I0217 17:02:14.887991 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bk75p"
Feb 17 17:02:15 crc kubenswrapper[4672]: I0217 17:02:15.071035 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bk75p"
Feb 17 17:02:16 crc kubenswrapper[4672]: I0217 17:02:16.031667 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wzcp4"]
Feb 17 17:02:16 crc kubenswrapper[4672]: I0217 17:02:16.813113 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bk75p"]
Feb 17 17:02:17 crc kubenswrapper[4672]: I0217 17:02:17.040359 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bk75p" podUID="51029833-b38a-4970-9324-020ad69edeb1" containerName="registry-server" containerID="cri-o://b86310e9f802a4a38643ac128f2e361db4f22a0f44e36e5d3813377ebe295a86" gracePeriod=2
Feb 17 17:02:17 crc kubenswrapper[4672]: I0217 17:02:17.040539 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wzcp4" podUID="c29b3d53-bb2f-4fbe-a509-1ea28306ea66" containerName="registry-server" containerID="cri-o://d90bf9fd3bdadf4f738c9ab22ab9f41575271420d91935fa5474fa7ce22d4b0f" gracePeriod=2
Feb 17 17:02:17 crc kubenswrapper[4672]: I0217 17:02:17.704976 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bk75p"
Feb 17 17:02:17 crc kubenswrapper[4672]: I0217 17:02:17.733652 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wzcp4"
Feb 17 17:02:17 crc kubenswrapper[4672]: I0217 17:02:17.826284 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51029833-b38a-4970-9324-020ad69edeb1-catalog-content\") pod \"51029833-b38a-4970-9324-020ad69edeb1\" (UID: \"51029833-b38a-4970-9324-020ad69edeb1\") "
Feb 17 17:02:17 crc kubenswrapper[4672]: I0217 17:02:17.826542 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggccn\" (UniqueName: \"kubernetes.io/projected/51029833-b38a-4970-9324-020ad69edeb1-kube-api-access-ggccn\") pod \"51029833-b38a-4970-9324-020ad69edeb1\" (UID: \"51029833-b38a-4970-9324-020ad69edeb1\") "
Feb 17 17:02:17 crc kubenswrapper[4672]: I0217 17:02:17.826654 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51029833-b38a-4970-9324-020ad69edeb1-utilities\") pod \"51029833-b38a-4970-9324-020ad69edeb1\" (UID: \"51029833-b38a-4970-9324-020ad69edeb1\") "
Feb 17 17:02:17 crc kubenswrapper[4672]: I0217 17:02:17.826690 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c29b3d53-bb2f-4fbe-a509-1ea28306ea66-catalog-content\") pod \"c29b3d53-bb2f-4fbe-a509-1ea28306ea66\" (UID: \"c29b3d53-bb2f-4fbe-a509-1ea28306ea66\") "
Feb 17 17:02:17 crc kubenswrapper[4672]: I0217 17:02:17.826714 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c29b3d53-bb2f-4fbe-a509-1ea28306ea66-utilities\") pod \"c29b3d53-bb2f-4fbe-a509-1ea28306ea66\" (UID: \"c29b3d53-bb2f-4fbe-a509-1ea28306ea66\") "
Feb 17 17:02:17 crc kubenswrapper[4672]: I0217 17:02:17.826735 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fl7mj\" (UniqueName: \"kubernetes.io/projected/c29b3d53-bb2f-4fbe-a509-1ea28306ea66-kube-api-access-fl7mj\") pod \"c29b3d53-bb2f-4fbe-a509-1ea28306ea66\" (UID: \"c29b3d53-bb2f-4fbe-a509-1ea28306ea66\") "
Feb 17 17:02:17 crc kubenswrapper[4672]: I0217 17:02:17.828124 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c29b3d53-bb2f-4fbe-a509-1ea28306ea66-utilities" (OuterVolumeSpecName: "utilities") pod "c29b3d53-bb2f-4fbe-a509-1ea28306ea66" (UID: "c29b3d53-bb2f-4fbe-a509-1ea28306ea66"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 17:02:17 crc kubenswrapper[4672]: I0217 17:02:17.828213 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51029833-b38a-4970-9324-020ad69edeb1-utilities" (OuterVolumeSpecName: "utilities") pod "51029833-b38a-4970-9324-020ad69edeb1" (UID: "51029833-b38a-4970-9324-020ad69edeb1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 17:02:17 crc kubenswrapper[4672]: I0217 17:02:17.833173 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51029833-b38a-4970-9324-020ad69edeb1-kube-api-access-ggccn" (OuterVolumeSpecName: "kube-api-access-ggccn") pod "51029833-b38a-4970-9324-020ad69edeb1" (UID: "51029833-b38a-4970-9324-020ad69edeb1"). InnerVolumeSpecName "kube-api-access-ggccn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 17:02:17 crc kubenswrapper[4672]: I0217 17:02:17.836227 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c29b3d53-bb2f-4fbe-a509-1ea28306ea66-kube-api-access-fl7mj" (OuterVolumeSpecName: "kube-api-access-fl7mj") pod "c29b3d53-bb2f-4fbe-a509-1ea28306ea66" (UID: "c29b3d53-bb2f-4fbe-a509-1ea28306ea66"). InnerVolumeSpecName "kube-api-access-fl7mj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 17:02:17 crc kubenswrapper[4672]: I0217 17:02:17.860660 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51029833-b38a-4970-9324-020ad69edeb1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "51029833-b38a-4970-9324-020ad69edeb1" (UID: "51029833-b38a-4970-9324-020ad69edeb1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 17:02:17 crc kubenswrapper[4672]: I0217 17:02:17.884741 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c29b3d53-bb2f-4fbe-a509-1ea28306ea66-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c29b3d53-bb2f-4fbe-a509-1ea28306ea66" (UID: "c29b3d53-bb2f-4fbe-a509-1ea28306ea66"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 17:02:17 crc kubenswrapper[4672]: I0217 17:02:17.929544 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggccn\" (UniqueName: \"kubernetes.io/projected/51029833-b38a-4970-9324-020ad69edeb1-kube-api-access-ggccn\") on node \"crc\" DevicePath \"\""
Feb 17 17:02:17 crc kubenswrapper[4672]: I0217 17:02:17.929587 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51029833-b38a-4970-9324-020ad69edeb1-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 17:02:17 crc kubenswrapper[4672]: I0217 17:02:17.929601 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c29b3d53-bb2f-4fbe-a509-1ea28306ea66-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 17:02:17 crc kubenswrapper[4672]: I0217 17:02:17.929616 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c29b3d53-bb2f-4fbe-a509-1ea28306ea66-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 17:02:17 crc kubenswrapper[4672]: I0217 17:02:17.929665 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fl7mj\" (UniqueName: \"kubernetes.io/projected/c29b3d53-bb2f-4fbe-a509-1ea28306ea66-kube-api-access-fl7mj\") on node \"crc\" DevicePath \"\""
Feb 17 17:02:17 crc kubenswrapper[4672]: I0217 17:02:17.929677 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51029833-b38a-4970-9324-020ad69edeb1-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 17:02:18 crc kubenswrapper[4672]: I0217 17:02:18.051963 4672 generic.go:334] "Generic (PLEG): container finished" podID="c29b3d53-bb2f-4fbe-a509-1ea28306ea66" containerID="d90bf9fd3bdadf4f738c9ab22ab9f41575271420d91935fa5474fa7ce22d4b0f" exitCode=0
Feb 17 17:02:18 crc kubenswrapper[4672]: I0217 17:02:18.052050 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wzcp4"
Feb 17 17:02:18 crc kubenswrapper[4672]: I0217 17:02:18.052057 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wzcp4" event={"ID":"c29b3d53-bb2f-4fbe-a509-1ea28306ea66","Type":"ContainerDied","Data":"d90bf9fd3bdadf4f738c9ab22ab9f41575271420d91935fa5474fa7ce22d4b0f"}
Feb 17 17:02:18 crc kubenswrapper[4672]: I0217 17:02:18.052143 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wzcp4" event={"ID":"c29b3d53-bb2f-4fbe-a509-1ea28306ea66","Type":"ContainerDied","Data":"65d49d66f894cd6a1393d7d281d34465fdfe45113345982a01bd7ce85fb1cfc7"}
Feb 17 17:02:18 crc kubenswrapper[4672]: I0217 17:02:18.052168 4672 scope.go:117] "RemoveContainer" containerID="d90bf9fd3bdadf4f738c9ab22ab9f41575271420d91935fa5474fa7ce22d4b0f"
Feb 17 17:02:18 crc kubenswrapper[4672]: I0217 17:02:18.055796 4672 generic.go:334] "Generic (PLEG): container finished" podID="51029833-b38a-4970-9324-020ad69edeb1" containerID="b86310e9f802a4a38643ac128f2e361db4f22a0f44e36e5d3813377ebe295a86" exitCode=0
Feb 17 17:02:18 crc kubenswrapper[4672]: I0217 17:02:18.055842 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bk75p" event={"ID":"51029833-b38a-4970-9324-020ad69edeb1","Type":"ContainerDied","Data":"b86310e9f802a4a38643ac128f2e361db4f22a0f44e36e5d3813377ebe295a86"}
Feb 17 17:02:18 crc kubenswrapper[4672]: I0217 17:02:18.055880 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bk75p" event={"ID":"51029833-b38a-4970-9324-020ad69edeb1","Type":"ContainerDied","Data":"3d7af328afe26a18570131f9ee29c87be495cf027b0525973eb3220837f6d261"}
Feb 17 17:02:18 crc kubenswrapper[4672]: I0217 17:02:18.055952 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bk75p"
Feb 17 17:02:18 crc kubenswrapper[4672]: I0217 17:02:18.079794 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wzcp4"]
Feb 17 17:02:18 crc kubenswrapper[4672]: I0217 17:02:18.087898 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wzcp4"]
Feb 17 17:02:18 crc kubenswrapper[4672]: I0217 17:02:18.093672 4672 scope.go:117] "RemoveContainer" containerID="430dd5462ea101d36c908bb8f0addba67023715db79082d52f0b4dfd0e68a87a"
Feb 17 17:02:18 crc kubenswrapper[4672]: I0217 17:02:18.097165 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bk75p"]
Feb 17 17:02:18 crc kubenswrapper[4672]: I0217 17:02:18.105209 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bk75p"]
Feb 17 17:02:18 crc kubenswrapper[4672]: I0217 17:02:18.112885 4672 scope.go:117] "RemoveContainer" containerID="8f0c29e8cf23be12c803e6c3aa00c17e333982617fd824a0216fccc313e3b31d"
Feb 17 17:02:18 crc kubenswrapper[4672]: I0217 17:02:18.132379 4672 scope.go:117] "RemoveContainer" containerID="d90bf9fd3bdadf4f738c9ab22ab9f41575271420d91935fa5474fa7ce22d4b0f"
Feb 17 17:02:18 crc kubenswrapper[4672]: E0217 17:02:18.132877 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d90bf9fd3bdadf4f738c9ab22ab9f41575271420d91935fa5474fa7ce22d4b0f\": container with ID starting with d90bf9fd3bdadf4f738c9ab22ab9f41575271420d91935fa5474fa7ce22d4b0f not found: ID does not exist" containerID="d90bf9fd3bdadf4f738c9ab22ab9f41575271420d91935fa5474fa7ce22d4b0f"
Feb 17 17:02:18 crc kubenswrapper[4672]: I0217 17:02:18.132927 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d90bf9fd3bdadf4f738c9ab22ab9f41575271420d91935fa5474fa7ce22d4b0f"} err="failed to get container status \"d90bf9fd3bdadf4f738c9ab22ab9f41575271420d91935fa5474fa7ce22d4b0f\": rpc error: code = NotFound desc = could not find container \"d90bf9fd3bdadf4f738c9ab22ab9f41575271420d91935fa5474fa7ce22d4b0f\": container with ID starting with d90bf9fd3bdadf4f738c9ab22ab9f41575271420d91935fa5474fa7ce22d4b0f not found: ID does not exist"
Feb 17 17:02:18 crc kubenswrapper[4672]: I0217 17:02:18.132967 4672 scope.go:117] "RemoveContainer" containerID="430dd5462ea101d36c908bb8f0addba67023715db79082d52f0b4dfd0e68a87a"
Feb 17 17:02:18 crc kubenswrapper[4672]: E0217 17:02:18.133355 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"430dd5462ea101d36c908bb8f0addba67023715db79082d52f0b4dfd0e68a87a\": container with ID starting with 430dd5462ea101d36c908bb8f0addba67023715db79082d52f0b4dfd0e68a87a not found: ID does not exist" containerID="430dd5462ea101d36c908bb8f0addba67023715db79082d52f0b4dfd0e68a87a"
Feb 17 17:02:18 crc kubenswrapper[4672]: I0217 17:02:18.133396 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"430dd5462ea101d36c908bb8f0addba67023715db79082d52f0b4dfd0e68a87a"} err="failed to get container status \"430dd5462ea101d36c908bb8f0addba67023715db79082d52f0b4dfd0e68a87a\": rpc error: code = NotFound desc = could not find container \"430dd5462ea101d36c908bb8f0addba67023715db79082d52f0b4dfd0e68a87a\": container with ID starting with 430dd5462ea101d36c908bb8f0addba67023715db79082d52f0b4dfd0e68a87a not found: ID does not exist"
Feb 17 17:02:18 crc kubenswrapper[4672]: I0217 17:02:18.133417 4672 scope.go:117] "RemoveContainer" containerID="8f0c29e8cf23be12c803e6c3aa00c17e333982617fd824a0216fccc313e3b31d"
Feb 17 17:02:18 crc kubenswrapper[4672]: E0217 17:02:18.133668 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f0c29e8cf23be12c803e6c3aa00c17e333982617fd824a0216fccc313e3b31d\": container with ID starting with 8f0c29e8cf23be12c803e6c3aa00c17e333982617fd824a0216fccc313e3b31d not found: ID does not exist" containerID="8f0c29e8cf23be12c803e6c3aa00c17e333982617fd824a0216fccc313e3b31d"
Feb 17 17:02:18 crc kubenswrapper[4672]: I0217 17:02:18.133744 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f0c29e8cf23be12c803e6c3aa00c17e333982617fd824a0216fccc313e3b31d"} err="failed to get container status \"8f0c29e8cf23be12c803e6c3aa00c17e333982617fd824a0216fccc313e3b31d\": rpc error: code = NotFound desc = could not find container \"8f0c29e8cf23be12c803e6c3aa00c17e333982617fd824a0216fccc313e3b31d\": container with ID starting with 8f0c29e8cf23be12c803e6c3aa00c17e333982617fd824a0216fccc313e3b31d not found: ID does not exist"
Feb 17 17:02:18 crc kubenswrapper[4672]: I0217 17:02:18.134027 4672 scope.go:117] "RemoveContainer" containerID="b86310e9f802a4a38643ac128f2e361db4f22a0f44e36e5d3813377ebe295a86"
Feb 17 17:02:18 crc kubenswrapper[4672]: I0217 17:02:18.155445 4672 scope.go:117] "RemoveContainer" containerID="e2bd80b73ec7a2325cee045ea7c011743540c4e5a4892fc205b871c1f9465314"
Feb 17 17:02:18 crc kubenswrapper[4672]: I0217 17:02:18.179234 4672 scope.go:117] "RemoveContainer" containerID="959e723ebde43c24f08bbee180919e39b62127fe18fa0d329c5afc5362ba22b2"
Feb 17 17:02:18 crc kubenswrapper[4672]: I0217 17:02:18.251061 4672 scope.go:117] "RemoveContainer" containerID="b86310e9f802a4a38643ac128f2e361db4f22a0f44e36e5d3813377ebe295a86"
Feb 17 17:02:18 crc kubenswrapper[4672]: E0217 17:02:18.251472 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b86310e9f802a4a38643ac128f2e361db4f22a0f44e36e5d3813377ebe295a86\": container with ID starting with b86310e9f802a4a38643ac128f2e361db4f22a0f44e36e5d3813377ebe295a86 not found: ID does not exist" containerID="b86310e9f802a4a38643ac128f2e361db4f22a0f44e36e5d3813377ebe295a86"
Feb 17 17:02:18 crc kubenswrapper[4672]: I0217 17:02:18.251504 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b86310e9f802a4a38643ac128f2e361db4f22a0f44e36e5d3813377ebe295a86"} err="failed to get container status \"b86310e9f802a4a38643ac128f2e361db4f22a0f44e36e5d3813377ebe295a86\": rpc error: code = NotFound desc = could not find container \"b86310e9f802a4a38643ac128f2e361db4f22a0f44e36e5d3813377ebe295a86\": container with ID starting with b86310e9f802a4a38643ac128f2e361db4f22a0f44e36e5d3813377ebe295a86 not found: ID does not exist"
Feb 17 17:02:18 crc kubenswrapper[4672]: I0217 17:02:18.251541 4672 scope.go:117] "RemoveContainer" containerID="e2bd80b73ec7a2325cee045ea7c011743540c4e5a4892fc205b871c1f9465314"
Feb 17 17:02:18 crc kubenswrapper[4672]: E0217 17:02:18.251870 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2bd80b73ec7a2325cee045ea7c011743540c4e5a4892fc205b871c1f9465314\": container with ID starting with e2bd80b73ec7a2325cee045ea7c011743540c4e5a4892fc205b871c1f9465314 not found: ID does not exist" containerID="e2bd80b73ec7a2325cee045ea7c011743540c4e5a4892fc205b871c1f9465314"
Feb 17 17:02:18 crc kubenswrapper[4672]: I0217 17:02:18.251910 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2bd80b73ec7a2325cee045ea7c011743540c4e5a4892fc205b871c1f9465314"} err="failed to get container status \"e2bd80b73ec7a2325cee045ea7c011743540c4e5a4892fc205b871c1f9465314\": rpc error: code = NotFound desc = could not find container \"e2bd80b73ec7a2325cee045ea7c011743540c4e5a4892fc205b871c1f9465314\": container with ID starting with e2bd80b73ec7a2325cee045ea7c011743540c4e5a4892fc205b871c1f9465314 not found: ID does not exist"
Feb 17 17:02:18 crc kubenswrapper[4672]: I0217 17:02:18.251937 4672 scope.go:117] "RemoveContainer" containerID="959e723ebde43c24f08bbee180919e39b62127fe18fa0d329c5afc5362ba22b2"
Feb 17 17:02:18 crc kubenswrapper[4672]: E0217 17:02:18.253030 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"959e723ebde43c24f08bbee180919e39b62127fe18fa0d329c5afc5362ba22b2\": container with ID starting with 959e723ebde43c24f08bbee180919e39b62127fe18fa0d329c5afc5362ba22b2 not found: ID does not exist" containerID="959e723ebde43c24f08bbee180919e39b62127fe18fa0d329c5afc5362ba22b2"
Feb 17 17:02:18 crc kubenswrapper[4672]: I0217 17:02:18.253062 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"959e723ebde43c24f08bbee180919e39b62127fe18fa0d329c5afc5362ba22b2"} err="failed to get container status \"959e723ebde43c24f08bbee180919e39b62127fe18fa0d329c5afc5362ba22b2\": rpc error: code = NotFound desc = could not find container \"959e723ebde43c24f08bbee180919e39b62127fe18fa0d329c5afc5362ba22b2\": container with ID starting with 959e723ebde43c24f08bbee180919e39b62127fe18fa0d329c5afc5362ba22b2 not found: ID does not exist"
Feb 17 17:02:19 crc kubenswrapper[4672]: I0217 17:02:19.968124 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51029833-b38a-4970-9324-020ad69edeb1" path="/var/lib/kubelet/pods/51029833-b38a-4970-9324-020ad69edeb1/volumes"
Feb 17 17:02:19 crc kubenswrapper[4672]: I0217 17:02:19.969747 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c29b3d53-bb2f-4fbe-a509-1ea28306ea66" path="/var/lib/kubelet/pods/c29b3d53-bb2f-4fbe-a509-1ea28306ea66/volumes"
Feb 17 17:02:21 crc kubenswrapper[4672]: E0217 17:02:21.955473 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:02:26 crc kubenswrapper[4672]: E0217 17:02:26.948147 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:02:34 crc kubenswrapper[4672]: E0217 17:02:34.948028 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:02:37 crc kubenswrapper[4672]: E0217 17:02:37.950352 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:02:47 crc kubenswrapper[4672]: E0217 17:02:47.947498 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:02:49 crc kubenswrapper[4672]: E0217 17:02:49.947565 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:02:59 crc kubenswrapper[4672]: E0217 17:02:59.949887 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:03:01 crc kubenswrapper[4672]: E0217 17:03:01.953263 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:03:12 crc kubenswrapper[4672]: E0217 17:03:12.970375 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:03:13 crc kubenswrapper[4672]: E0217 17:03:13.947682 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:03:26 crc kubenswrapper[4672]: E0217 17:03:26.947879 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:03:27 crc kubenswrapper[4672]: I0217 17:03:27.566082 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 17:03:27 crc kubenswrapper[4672]: I0217 17:03:27.566195 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 17:03:27 crc kubenswrapper[4672]: E0217 17:03:27.948626 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:03:38 crc kubenswrapper[4672]: E0217 17:03:38.947729 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:03:40 crc kubenswrapper[4672]: I0217 17:03:40.949495 4672 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 17:03:41 crc kubenswrapper[4672]: E0217 17:03:41.079412 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested"
Feb 17 17:03:41 crc kubenswrapper[4672]: E0217 17:03:41.079481 4672 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested"
Feb 17 17:03:41 crc kubenswrapper[4672]: E0217 17:03:41.079676 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nq9ps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-qrhj8_openstack(dc5471f5-2491-4841-be45-09c8f14b35c0): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 17 17:03:41 crc kubenswrapper[4672]: E0217 17:03:41.080902 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:03:51 crc kubenswrapper[4672]: E0217 17:03:51.064789 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 17 17:03:51 crc kubenswrapper[4672]: E0217 17:03:51.065360 4672 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 17 17:03:51 crc kubenswrapper[4672]: E0217 17:03:51.065473 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66h7h644h64ch5f8h565hfch5dh56chfdh8hfdh5b5h567h6dh665h557h74h665hcbh96h659h554h589h57fh5d9h55h564hcfh5dhffhfdq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tx4bs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(9e58ce9b-ddd5-42bb-8e07-08a22c8871a5): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:03:51 crc kubenswrapper[4672]: E0217 17:03:51.066681 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:03:53 crc kubenswrapper[4672]: E0217 17:03:53.946118 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:03:57 crc kubenswrapper[4672]: I0217 17:03:57.566115 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:03:57 crc kubenswrapper[4672]: I0217 17:03:57.566543 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:04:05 crc kubenswrapper[4672]: E0217 17:04:05.947360 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:04:05 crc kubenswrapper[4672]: E0217 17:04:05.947463 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:04:16 crc kubenswrapper[4672]: E0217 17:04:16.949605 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:04:17 crc kubenswrapper[4672]: E0217 17:04:17.949152 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:04:27 crc kubenswrapper[4672]: I0217 17:04:27.565763 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:04:27 crc kubenswrapper[4672]: I0217 17:04:27.566383 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:04:27 crc kubenswrapper[4672]: I0217 17:04:27.566434 4672 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" Feb 17 17:04:27 crc kubenswrapper[4672]: I0217 
17:04:27.567546 4672 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"89ca90c0b062cf33d871d85b80e45467b2dbcf33865b48786892cf4297ab65bd"} pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 17:04:27 crc kubenswrapper[4672]: I0217 17:04:27.567786 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" containerID="cri-o://89ca90c0b062cf33d871d85b80e45467b2dbcf33865b48786892cf4297ab65bd" gracePeriod=600 Feb 17 17:04:27 crc kubenswrapper[4672]: E0217 17:04:27.704157 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:04:28 crc kubenswrapper[4672]: I0217 17:04:28.448023 4672 generic.go:334] "Generic (PLEG): container finished" podID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerID="89ca90c0b062cf33d871d85b80e45467b2dbcf33865b48786892cf4297ab65bd" exitCode=0 Feb 17 17:04:28 crc kubenswrapper[4672]: I0217 17:04:28.448126 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerDied","Data":"89ca90c0b062cf33d871d85b80e45467b2dbcf33865b48786892cf4297ab65bd"} Feb 17 17:04:28 crc kubenswrapper[4672]: I0217 17:04:28.448299 4672 scope.go:117] "RemoveContainer" 
containerID="399c1b28a73d295545a85ac9813544c6363f8e54412c109aba83e40a76db0358" Feb 17 17:04:28 crc kubenswrapper[4672]: I0217 17:04:28.449255 4672 scope.go:117] "RemoveContainer" containerID="89ca90c0b062cf33d871d85b80e45467b2dbcf33865b48786892cf4297ab65bd" Feb 17 17:04:28 crc kubenswrapper[4672]: E0217 17:04:28.449857 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:04:29 crc kubenswrapper[4672]: E0217 17:04:29.948000 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:04:31 crc kubenswrapper[4672]: E0217 17:04:31.951950 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:04:40 crc kubenswrapper[4672]: I0217 17:04:40.944981 4672 scope.go:117] "RemoveContainer" containerID="89ca90c0b062cf33d871d85b80e45467b2dbcf33865b48786892cf4297ab65bd" Feb 17 17:04:40 crc kubenswrapper[4672]: E0217 17:04:40.945776 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:04:41 crc kubenswrapper[4672]: E0217 17:04:41.954264 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:04:46 crc kubenswrapper[4672]: E0217 17:04:46.947444 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:04:52 crc kubenswrapper[4672]: I0217 17:04:52.945592 4672 scope.go:117] "RemoveContainer" containerID="89ca90c0b062cf33d871d85b80e45467b2dbcf33865b48786892cf4297ab65bd" Feb 17 17:04:52 crc kubenswrapper[4672]: E0217 17:04:52.946326 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:04:54 crc kubenswrapper[4672]: E0217 17:04:54.946812 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:04:59 crc kubenswrapper[4672]: E0217 17:04:59.947350 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:05:05 crc kubenswrapper[4672]: E0217 17:05:05.947077 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:05:07 crc kubenswrapper[4672]: I0217 17:05:07.947540 4672 scope.go:117] "RemoveContainer" containerID="89ca90c0b062cf33d871d85b80e45467b2dbcf33865b48786892cf4297ab65bd" Feb 17 17:05:07 crc kubenswrapper[4672]: E0217 17:05:07.948226 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:05:13 crc kubenswrapper[4672]: E0217 17:05:13.947410 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:05:19 crc kubenswrapper[4672]: I0217 17:05:19.945427 4672 scope.go:117] "RemoveContainer" containerID="89ca90c0b062cf33d871d85b80e45467b2dbcf33865b48786892cf4297ab65bd" Feb 17 17:05:19 crc kubenswrapper[4672]: E0217 17:05:19.946046 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:05:20 crc kubenswrapper[4672]: E0217 17:05:20.948378 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:05:26 crc kubenswrapper[4672]: E0217 17:05:26.947556 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:05:34 crc kubenswrapper[4672]: I0217 17:05:34.945183 4672 scope.go:117] "RemoveContainer" containerID="89ca90c0b062cf33d871d85b80e45467b2dbcf33865b48786892cf4297ab65bd" Feb 17 17:05:34 crc kubenswrapper[4672]: E0217 17:05:34.946107 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:05:35 crc kubenswrapper[4672]: E0217 17:05:35.949312 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:05:38 crc kubenswrapper[4672]: E0217 17:05:38.947614 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:05:48 crc kubenswrapper[4672]: I0217 17:05:48.945725 4672 scope.go:117] "RemoveContainer" containerID="89ca90c0b062cf33d871d85b80e45467b2dbcf33865b48786892cf4297ab65bd" Feb 17 17:05:48 crc kubenswrapper[4672]: E0217 17:05:48.946733 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:05:50 crc kubenswrapper[4672]: E0217 17:05:50.947321 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:05:50 crc kubenswrapper[4672]: E0217 17:05:50.947377 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:06:00 crc kubenswrapper[4672]: I0217 17:06:00.944928 4672 scope.go:117] "RemoveContainer" containerID="89ca90c0b062cf33d871d85b80e45467b2dbcf33865b48786892cf4297ab65bd" Feb 17 17:06:00 crc kubenswrapper[4672]: E0217 17:06:00.945736 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:06:02 crc kubenswrapper[4672]: E0217 17:06:02.947073 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:06:05 crc kubenswrapper[4672]: E0217 17:06:05.961581 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:06:15 crc kubenswrapper[4672]: I0217 17:06:15.945081 4672 scope.go:117] "RemoveContainer" containerID="89ca90c0b062cf33d871d85b80e45467b2dbcf33865b48786892cf4297ab65bd" Feb 17 17:06:15 crc kubenswrapper[4672]: E0217 17:06:15.945916 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:06:17 crc kubenswrapper[4672]: E0217 17:06:17.951743 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:06:18 crc kubenswrapper[4672]: E0217 17:06:18.946599 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:06:27 crc kubenswrapper[4672]: I0217 17:06:27.945394 4672 scope.go:117] "RemoveContainer" containerID="89ca90c0b062cf33d871d85b80e45467b2dbcf33865b48786892cf4297ab65bd" Feb 17 17:06:27 crc kubenswrapper[4672]: E0217 17:06:27.946295 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:06:29 crc kubenswrapper[4672]: E0217 17:06:29.948116 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:06:33 crc kubenswrapper[4672]: E0217 17:06:33.947839 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:06:40 crc kubenswrapper[4672]: I0217 17:06:40.946104 4672 scope.go:117] "RemoveContainer" containerID="89ca90c0b062cf33d871d85b80e45467b2dbcf33865b48786892cf4297ab65bd" Feb 17 17:06:40 crc kubenswrapper[4672]: E0217 17:06:40.947015 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:06:40 crc kubenswrapper[4672]: E0217 17:06:40.948430 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:06:46 crc kubenswrapper[4672]: E0217 17:06:46.948850 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:06:54 crc kubenswrapper[4672]: I0217 17:06:54.944485 4672 scope.go:117] "RemoveContainer" containerID="89ca90c0b062cf33d871d85b80e45467b2dbcf33865b48786892cf4297ab65bd" Feb 17 17:06:54 crc kubenswrapper[4672]: E0217 17:06:54.945193 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:06:54 crc kubenswrapper[4672]: E0217 17:06:54.946901 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:06:58 crc kubenswrapper[4672]: E0217 17:06:58.947502 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:07:05 crc kubenswrapper[4672]: I0217 17:07:05.945372 4672 scope.go:117] "RemoveContainer" containerID="89ca90c0b062cf33d871d85b80e45467b2dbcf33865b48786892cf4297ab65bd" Feb 17 17:07:05 crc kubenswrapper[4672]: E0217 17:07:05.946223 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:07:07 crc kubenswrapper[4672]: E0217 17:07:07.949094 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:07:12 crc kubenswrapper[4672]: E0217 17:07:12.946534 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:07:16 crc kubenswrapper[4672]: I0217 17:07:16.153564 4672 generic.go:334] "Generic (PLEG): container finished" podID="015a71e3-cfc6-4bd6-bc90-2efce2db5885" containerID="9853cfac8c888c5fa3e5c4b951d1c0a434a30a29d0fb6afa082e36c27f67ab37" exitCode=2 Feb 17 17:07:16 crc kubenswrapper[4672]: I0217 17:07:16.153614 4672 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn8bm" event={"ID":"015a71e3-cfc6-4bd6-bc90-2efce2db5885","Type":"ContainerDied","Data":"9853cfac8c888c5fa3e5c4b951d1c0a434a30a29d0fb6afa082e36c27f67ab37"} Feb 17 17:07:17 crc kubenswrapper[4672]: I0217 17:07:17.753436 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn8bm" Feb 17 17:07:17 crc kubenswrapper[4672]: I0217 17:07:17.914278 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8zpp\" (UniqueName: \"kubernetes.io/projected/015a71e3-cfc6-4bd6-bc90-2efce2db5885-kube-api-access-p8zpp\") pod \"015a71e3-cfc6-4bd6-bc90-2efce2db5885\" (UID: \"015a71e3-cfc6-4bd6-bc90-2efce2db5885\") " Feb 17 17:07:17 crc kubenswrapper[4672]: I0217 17:07:17.914472 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/015a71e3-cfc6-4bd6-bc90-2efce2db5885-inventory\") pod \"015a71e3-cfc6-4bd6-bc90-2efce2db5885\" (UID: \"015a71e3-cfc6-4bd6-bc90-2efce2db5885\") " Feb 17 17:07:17 crc kubenswrapper[4672]: I0217 17:07:17.914696 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/015a71e3-cfc6-4bd6-bc90-2efce2db5885-ssh-key-openstack-edpm-ipam\") pod \"015a71e3-cfc6-4bd6-bc90-2efce2db5885\" (UID: \"015a71e3-cfc6-4bd6-bc90-2efce2db5885\") " Feb 17 17:07:17 crc kubenswrapper[4672]: I0217 17:07:17.920202 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/015a71e3-cfc6-4bd6-bc90-2efce2db5885-kube-api-access-p8zpp" (OuterVolumeSpecName: "kube-api-access-p8zpp") pod "015a71e3-cfc6-4bd6-bc90-2efce2db5885" (UID: "015a71e3-cfc6-4bd6-bc90-2efce2db5885"). InnerVolumeSpecName "kube-api-access-p8zpp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:07:17 crc kubenswrapper[4672]: I0217 17:07:17.944998 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/015a71e3-cfc6-4bd6-bc90-2efce2db5885-inventory" (OuterVolumeSpecName: "inventory") pod "015a71e3-cfc6-4bd6-bc90-2efce2db5885" (UID: "015a71e3-cfc6-4bd6-bc90-2efce2db5885"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:07:17 crc kubenswrapper[4672]: I0217 17:07:17.953608 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/015a71e3-cfc6-4bd6-bc90-2efce2db5885-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "015a71e3-cfc6-4bd6-bc90-2efce2db5885" (UID: "015a71e3-cfc6-4bd6-bc90-2efce2db5885"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:07:18 crc kubenswrapper[4672]: I0217 17:07:18.017331 4672 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/015a71e3-cfc6-4bd6-bc90-2efce2db5885-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 17:07:18 crc kubenswrapper[4672]: I0217 17:07:18.017369 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8zpp\" (UniqueName: \"kubernetes.io/projected/015a71e3-cfc6-4bd6-bc90-2efce2db5885-kube-api-access-p8zpp\") on node \"crc\" DevicePath \"\"" Feb 17 17:07:18 crc kubenswrapper[4672]: I0217 17:07:18.017381 4672 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/015a71e3-cfc6-4bd6-bc90-2efce2db5885-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 17:07:18 crc kubenswrapper[4672]: I0217 17:07:18.174108 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn8bm" 
event={"ID":"015a71e3-cfc6-4bd6-bc90-2efce2db5885","Type":"ContainerDied","Data":"9fd67c7655a02379894785efc95986b3c8f8b638586ec952cf2cd4d2c7023423"} Feb 17 17:07:18 crc kubenswrapper[4672]: I0217 17:07:18.174157 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fd67c7655a02379894785efc95986b3c8f8b638586ec952cf2cd4d2c7023423" Feb 17 17:07:18 crc kubenswrapper[4672]: I0217 17:07:18.174167 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn8bm" Feb 17 17:07:20 crc kubenswrapper[4672]: I0217 17:07:20.945149 4672 scope.go:117] "RemoveContainer" containerID="89ca90c0b062cf33d871d85b80e45467b2dbcf33865b48786892cf4297ab65bd" Feb 17 17:07:20 crc kubenswrapper[4672]: E0217 17:07:20.946639 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:07:21 crc kubenswrapper[4672]: E0217 17:07:21.955760 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:07:24 crc kubenswrapper[4672]: E0217 17:07:24.948301 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:07:34 crc kubenswrapper[4672]: E0217 17:07:34.947988 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:07:35 crc kubenswrapper[4672]: I0217 17:07:35.945066 4672 scope.go:117] "RemoveContainer" containerID="89ca90c0b062cf33d871d85b80e45467b2dbcf33865b48786892cf4297ab65bd" Feb 17 17:07:35 crc kubenswrapper[4672]: E0217 17:07:35.945846 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:07:39 crc kubenswrapper[4672]: E0217 17:07:39.947328 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:07:47 crc kubenswrapper[4672]: E0217 17:07:47.946981 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" 
pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:07:47 crc kubenswrapper[4672]: I0217 17:07:47.947186 4672 scope.go:117] "RemoveContainer" containerID="89ca90c0b062cf33d871d85b80e45467b2dbcf33865b48786892cf4297ab65bd" Feb 17 17:07:47 crc kubenswrapper[4672]: E0217 17:07:47.947919 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:07:51 crc kubenswrapper[4672]: E0217 17:07:51.959426 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:07:58 crc kubenswrapper[4672]: I0217 17:07:58.946336 4672 scope.go:117] "RemoveContainer" containerID="89ca90c0b062cf33d871d85b80e45467b2dbcf33865b48786892cf4297ab65bd" Feb 17 17:07:58 crc kubenswrapper[4672]: E0217 17:07:58.947409 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:07:58 crc kubenswrapper[4672]: E0217 17:07:58.948327 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:08:06 crc kubenswrapper[4672]: E0217 17:08:06.946779 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:08:09 crc kubenswrapper[4672]: E0217 17:08:09.947467 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:08:13 crc kubenswrapper[4672]: I0217 17:08:13.945539 4672 scope.go:117] "RemoveContainer" containerID="89ca90c0b062cf33d871d85b80e45467b2dbcf33865b48786892cf4297ab65bd" Feb 17 17:08:13 crc kubenswrapper[4672]: E0217 17:08:13.946374 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:08:19 crc kubenswrapper[4672]: E0217 17:08:19.947040 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:08:22 crc kubenswrapper[4672]: E0217 17:08:22.949851 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:08:26 crc kubenswrapper[4672]: I0217 17:08:26.946276 4672 scope.go:117] "RemoveContainer" containerID="89ca90c0b062cf33d871d85b80e45467b2dbcf33865b48786892cf4297ab65bd" Feb 17 17:08:26 crc kubenswrapper[4672]: E0217 17:08:26.947267 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:08:28 crc kubenswrapper[4672]: I0217 17:08:28.111735 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k2dhd"] Feb 17 17:08:28 crc kubenswrapper[4672]: E0217 17:08:28.112849 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="015a71e3-cfc6-4bd6-bc90-2efce2db5885" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:08:28 crc kubenswrapper[4672]: I0217 17:08:28.112868 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="015a71e3-cfc6-4bd6-bc90-2efce2db5885" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:08:28 crc kubenswrapper[4672]: E0217 17:08:28.112890 4672 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="51029833-b38a-4970-9324-020ad69edeb1" containerName="extract-content" Feb 17 17:08:28 crc kubenswrapper[4672]: I0217 17:08:28.112899 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="51029833-b38a-4970-9324-020ad69edeb1" containerName="extract-content" Feb 17 17:08:28 crc kubenswrapper[4672]: E0217 17:08:28.112913 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c29b3d53-bb2f-4fbe-a509-1ea28306ea66" containerName="extract-content" Feb 17 17:08:28 crc kubenswrapper[4672]: I0217 17:08:28.112920 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="c29b3d53-bb2f-4fbe-a509-1ea28306ea66" containerName="extract-content" Feb 17 17:08:28 crc kubenswrapper[4672]: E0217 17:08:28.112940 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c29b3d53-bb2f-4fbe-a509-1ea28306ea66" containerName="registry-server" Feb 17 17:08:28 crc kubenswrapper[4672]: I0217 17:08:28.112947 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="c29b3d53-bb2f-4fbe-a509-1ea28306ea66" containerName="registry-server" Feb 17 17:08:28 crc kubenswrapper[4672]: E0217 17:08:28.112962 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51029833-b38a-4970-9324-020ad69edeb1" containerName="extract-utilities" Feb 17 17:08:28 crc kubenswrapper[4672]: I0217 17:08:28.112970 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="51029833-b38a-4970-9324-020ad69edeb1" containerName="extract-utilities" Feb 17 17:08:28 crc kubenswrapper[4672]: E0217 17:08:28.112984 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51029833-b38a-4970-9324-020ad69edeb1" containerName="registry-server" Feb 17 17:08:28 crc kubenswrapper[4672]: I0217 17:08:28.112990 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="51029833-b38a-4970-9324-020ad69edeb1" containerName="registry-server" Feb 17 17:08:28 crc kubenswrapper[4672]: E0217 17:08:28.113015 4672 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="c29b3d53-bb2f-4fbe-a509-1ea28306ea66" containerName="extract-utilities" Feb 17 17:08:28 crc kubenswrapper[4672]: I0217 17:08:28.113023 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="c29b3d53-bb2f-4fbe-a509-1ea28306ea66" containerName="extract-utilities" Feb 17 17:08:28 crc kubenswrapper[4672]: I0217 17:08:28.113271 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="c29b3d53-bb2f-4fbe-a509-1ea28306ea66" containerName="registry-server" Feb 17 17:08:28 crc kubenswrapper[4672]: I0217 17:08:28.113289 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="015a71e3-cfc6-4bd6-bc90-2efce2db5885" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:08:28 crc kubenswrapper[4672]: I0217 17:08:28.113301 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="51029833-b38a-4970-9324-020ad69edeb1" containerName="registry-server" Feb 17 17:08:28 crc kubenswrapper[4672]: I0217 17:08:28.114935 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k2dhd" Feb 17 17:08:28 crc kubenswrapper[4672]: I0217 17:08:28.121307 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8tzc\" (UniqueName: \"kubernetes.io/projected/c47ef162-cbaf-47e9-b385-ee881fbba200-kube-api-access-w8tzc\") pod \"redhat-operators-k2dhd\" (UID: \"c47ef162-cbaf-47e9-b385-ee881fbba200\") " pod="openshift-marketplace/redhat-operators-k2dhd" Feb 17 17:08:28 crc kubenswrapper[4672]: I0217 17:08:28.121596 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c47ef162-cbaf-47e9-b385-ee881fbba200-utilities\") pod \"redhat-operators-k2dhd\" (UID: \"c47ef162-cbaf-47e9-b385-ee881fbba200\") " pod="openshift-marketplace/redhat-operators-k2dhd" Feb 17 17:08:28 crc kubenswrapper[4672]: I0217 17:08:28.121847 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c47ef162-cbaf-47e9-b385-ee881fbba200-catalog-content\") pod \"redhat-operators-k2dhd\" (UID: \"c47ef162-cbaf-47e9-b385-ee881fbba200\") " pod="openshift-marketplace/redhat-operators-k2dhd" Feb 17 17:08:28 crc kubenswrapper[4672]: I0217 17:08:28.124263 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k2dhd"] Feb 17 17:08:28 crc kubenswrapper[4672]: I0217 17:08:28.223023 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c47ef162-cbaf-47e9-b385-ee881fbba200-catalog-content\") pod \"redhat-operators-k2dhd\" (UID: \"c47ef162-cbaf-47e9-b385-ee881fbba200\") " pod="openshift-marketplace/redhat-operators-k2dhd" Feb 17 17:08:28 crc kubenswrapper[4672]: I0217 17:08:28.223097 4672 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-w8tzc\" (UniqueName: \"kubernetes.io/projected/c47ef162-cbaf-47e9-b385-ee881fbba200-kube-api-access-w8tzc\") pod \"redhat-operators-k2dhd\" (UID: \"c47ef162-cbaf-47e9-b385-ee881fbba200\") " pod="openshift-marketplace/redhat-operators-k2dhd" Feb 17 17:08:28 crc kubenswrapper[4672]: I0217 17:08:28.223145 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c47ef162-cbaf-47e9-b385-ee881fbba200-utilities\") pod \"redhat-operators-k2dhd\" (UID: \"c47ef162-cbaf-47e9-b385-ee881fbba200\") " pod="openshift-marketplace/redhat-operators-k2dhd" Feb 17 17:08:28 crc kubenswrapper[4672]: I0217 17:08:28.223933 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c47ef162-cbaf-47e9-b385-ee881fbba200-utilities\") pod \"redhat-operators-k2dhd\" (UID: \"c47ef162-cbaf-47e9-b385-ee881fbba200\") " pod="openshift-marketplace/redhat-operators-k2dhd" Feb 17 17:08:28 crc kubenswrapper[4672]: I0217 17:08:28.224032 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c47ef162-cbaf-47e9-b385-ee881fbba200-catalog-content\") pod \"redhat-operators-k2dhd\" (UID: \"c47ef162-cbaf-47e9-b385-ee881fbba200\") " pod="openshift-marketplace/redhat-operators-k2dhd" Feb 17 17:08:28 crc kubenswrapper[4672]: I0217 17:08:28.244371 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8tzc\" (UniqueName: \"kubernetes.io/projected/c47ef162-cbaf-47e9-b385-ee881fbba200-kube-api-access-w8tzc\") pod \"redhat-operators-k2dhd\" (UID: \"c47ef162-cbaf-47e9-b385-ee881fbba200\") " pod="openshift-marketplace/redhat-operators-k2dhd" Feb 17 17:08:28 crc kubenswrapper[4672]: I0217 17:08:28.447925 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k2dhd" Feb 17 17:08:28 crc kubenswrapper[4672]: I0217 17:08:28.934967 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k2dhd"] Feb 17 17:08:29 crc kubenswrapper[4672]: I0217 17:08:29.921859 4672 generic.go:334] "Generic (PLEG): container finished" podID="c47ef162-cbaf-47e9-b385-ee881fbba200" containerID="0aa7b33afddefaf52520a7423c75542f431fba30372e612118141242350efe0f" exitCode=0 Feb 17 17:08:29 crc kubenswrapper[4672]: I0217 17:08:29.922070 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2dhd" event={"ID":"c47ef162-cbaf-47e9-b385-ee881fbba200","Type":"ContainerDied","Data":"0aa7b33afddefaf52520a7423c75542f431fba30372e612118141242350efe0f"} Feb 17 17:08:29 crc kubenswrapper[4672]: I0217 17:08:29.922150 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2dhd" event={"ID":"c47ef162-cbaf-47e9-b385-ee881fbba200","Type":"ContainerStarted","Data":"543dd09413ee994817a2b50d0b950217ed1576b4985a0e799aed3902ecdcca80"} Feb 17 17:08:32 crc kubenswrapper[4672]: I0217 17:08:32.957498 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2dhd" event={"ID":"c47ef162-cbaf-47e9-b385-ee881fbba200","Type":"ContainerStarted","Data":"af3190440f28006d8c319424bec1b69c5b5a3033d997d176c8e9e082f8bb2da1"} Feb 17 17:08:33 crc kubenswrapper[4672]: E0217 17:08:33.947190 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:08:34 crc kubenswrapper[4672]: E0217 17:08:34.947771 4672 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:08:37 crc kubenswrapper[4672]: I0217 17:08:37.001236 4672 generic.go:334] "Generic (PLEG): container finished" podID="c47ef162-cbaf-47e9-b385-ee881fbba200" containerID="af3190440f28006d8c319424bec1b69c5b5a3033d997d176c8e9e082f8bb2da1" exitCode=0 Feb 17 17:08:37 crc kubenswrapper[4672]: I0217 17:08:37.001349 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2dhd" event={"ID":"c47ef162-cbaf-47e9-b385-ee881fbba200","Type":"ContainerDied","Data":"af3190440f28006d8c319424bec1b69c5b5a3033d997d176c8e9e082f8bb2da1"} Feb 17 17:08:38 crc kubenswrapper[4672]: I0217 17:08:38.945973 4672 scope.go:117] "RemoveContainer" containerID="89ca90c0b062cf33d871d85b80e45467b2dbcf33865b48786892cf4297ab65bd" Feb 17 17:08:38 crc kubenswrapper[4672]: E0217 17:08:38.946625 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:08:39 crc kubenswrapper[4672]: I0217 17:08:39.023382 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2dhd" event={"ID":"c47ef162-cbaf-47e9-b385-ee881fbba200","Type":"ContainerStarted","Data":"1f633e2b889bf2198eac30e49374456b6ea76a2b959efaf00b76965d4ce15e29"} Feb 17 17:08:39 crc kubenswrapper[4672]: I0217 17:08:39.063098 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-k2dhd" podStartSLOduration=3.409172717 podStartE2EDuration="11.063080255s" podCreationTimestamp="2026-02-17 17:08:28 +0000 UTC" firstStartedPulling="2026-02-17 17:08:29.923861485 +0000 UTC m=+3918.677950217" lastFinishedPulling="2026-02-17 17:08:37.577769023 +0000 UTC m=+3926.331857755" observedRunningTime="2026-02-17 17:08:39.055635538 +0000 UTC m=+3927.809724280" watchObservedRunningTime="2026-02-17 17:08:39.063080255 +0000 UTC m=+3927.817168987" Feb 17 17:08:45 crc kubenswrapper[4672]: E0217 17:08:45.947416 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:08:46 crc kubenswrapper[4672]: I0217 17:08:46.947098 4672 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 17:08:47 crc kubenswrapper[4672]: E0217 17:08:47.133643 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 17:08:47 crc kubenswrapper[4672]: E0217 17:08:47.133710 4672 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 17:08:47 crc kubenswrapper[4672]: E0217 17:08:47.133904 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},Volume
Mount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nq9ps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-qrhj8_openstack(dc5471f5-2491-4841-be45-09c8f14b35c0): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 17 17:08:47 crc kubenswrapper[4672]: E0217 17:08:47.135026 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:08:48 crc kubenswrapper[4672]: I0217 17:08:48.448181 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k2dhd" Feb 17 17:08:48 crc kubenswrapper[4672]: I0217 17:08:48.448664 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k2dhd" Feb 17 17:08:48 crc kubenswrapper[4672]: I0217 17:08:48.831878 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k2dhd" Feb 17 17:08:49 crc kubenswrapper[4672]: I0217 17:08:49.151623 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k2dhd" Feb 17 17:08:49 crc kubenswrapper[4672]: I0217 17:08:49.206994 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k2dhd"] Feb 17 17:08:51 crc kubenswrapper[4672]: I0217 17:08:51.129421 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k2dhd" podUID="c47ef162-cbaf-47e9-b385-ee881fbba200" containerName="registry-server" containerID="cri-o://1f633e2b889bf2198eac30e49374456b6ea76a2b959efaf00b76965d4ce15e29" gracePeriod=2 Feb 17 17:08:51 crc kubenswrapper[4672]: I0217 
17:08:51.766884 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k2dhd" Feb 17 17:08:51 crc kubenswrapper[4672]: I0217 17:08:51.934149 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8tzc\" (UniqueName: \"kubernetes.io/projected/c47ef162-cbaf-47e9-b385-ee881fbba200-kube-api-access-w8tzc\") pod \"c47ef162-cbaf-47e9-b385-ee881fbba200\" (UID: \"c47ef162-cbaf-47e9-b385-ee881fbba200\") " Feb 17 17:08:51 crc kubenswrapper[4672]: I0217 17:08:51.934762 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c47ef162-cbaf-47e9-b385-ee881fbba200-utilities\") pod \"c47ef162-cbaf-47e9-b385-ee881fbba200\" (UID: \"c47ef162-cbaf-47e9-b385-ee881fbba200\") " Feb 17 17:08:51 crc kubenswrapper[4672]: I0217 17:08:51.934843 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c47ef162-cbaf-47e9-b385-ee881fbba200-catalog-content\") pod \"c47ef162-cbaf-47e9-b385-ee881fbba200\" (UID: \"c47ef162-cbaf-47e9-b385-ee881fbba200\") " Feb 17 17:08:51 crc kubenswrapper[4672]: I0217 17:08:51.936066 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c47ef162-cbaf-47e9-b385-ee881fbba200-utilities" (OuterVolumeSpecName: "utilities") pod "c47ef162-cbaf-47e9-b385-ee881fbba200" (UID: "c47ef162-cbaf-47e9-b385-ee881fbba200"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:08:51 crc kubenswrapper[4672]: I0217 17:08:51.939994 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c47ef162-cbaf-47e9-b385-ee881fbba200-kube-api-access-w8tzc" (OuterVolumeSpecName: "kube-api-access-w8tzc") pod "c47ef162-cbaf-47e9-b385-ee881fbba200" (UID: "c47ef162-cbaf-47e9-b385-ee881fbba200"). InnerVolumeSpecName "kube-api-access-w8tzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:08:52 crc kubenswrapper[4672]: I0217 17:08:52.037378 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8tzc\" (UniqueName: \"kubernetes.io/projected/c47ef162-cbaf-47e9-b385-ee881fbba200-kube-api-access-w8tzc\") on node \"crc\" DevicePath \"\"" Feb 17 17:08:52 crc kubenswrapper[4672]: I0217 17:08:52.037413 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c47ef162-cbaf-47e9-b385-ee881fbba200-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:08:52 crc kubenswrapper[4672]: I0217 17:08:52.060913 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c47ef162-cbaf-47e9-b385-ee881fbba200-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c47ef162-cbaf-47e9-b385-ee881fbba200" (UID: "c47ef162-cbaf-47e9-b385-ee881fbba200"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:08:52 crc kubenswrapper[4672]: I0217 17:08:52.139200 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c47ef162-cbaf-47e9-b385-ee881fbba200-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:08:52 crc kubenswrapper[4672]: I0217 17:08:52.141035 4672 generic.go:334] "Generic (PLEG): container finished" podID="c47ef162-cbaf-47e9-b385-ee881fbba200" containerID="1f633e2b889bf2198eac30e49374456b6ea76a2b959efaf00b76965d4ce15e29" exitCode=0 Feb 17 17:08:52 crc kubenswrapper[4672]: I0217 17:08:52.141073 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2dhd" event={"ID":"c47ef162-cbaf-47e9-b385-ee881fbba200","Type":"ContainerDied","Data":"1f633e2b889bf2198eac30e49374456b6ea76a2b959efaf00b76965d4ce15e29"} Feb 17 17:08:52 crc kubenswrapper[4672]: I0217 17:08:52.141107 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2dhd" event={"ID":"c47ef162-cbaf-47e9-b385-ee881fbba200","Type":"ContainerDied","Data":"543dd09413ee994817a2b50d0b950217ed1576b4985a0e799aed3902ecdcca80"} Feb 17 17:08:52 crc kubenswrapper[4672]: I0217 17:08:52.141126 4672 scope.go:117] "RemoveContainer" containerID="1f633e2b889bf2198eac30e49374456b6ea76a2b959efaf00b76965d4ce15e29" Feb 17 17:08:52 crc kubenswrapper[4672]: I0217 17:08:52.141128 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k2dhd" Feb 17 17:08:52 crc kubenswrapper[4672]: I0217 17:08:52.178143 4672 scope.go:117] "RemoveContainer" containerID="af3190440f28006d8c319424bec1b69c5b5a3033d997d176c8e9e082f8bb2da1" Feb 17 17:08:52 crc kubenswrapper[4672]: I0217 17:08:52.193335 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k2dhd"] Feb 17 17:08:52 crc kubenswrapper[4672]: I0217 17:08:52.204393 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k2dhd"] Feb 17 17:08:52 crc kubenswrapper[4672]: I0217 17:08:52.219724 4672 scope.go:117] "RemoveContainer" containerID="0aa7b33afddefaf52520a7423c75542f431fba30372e612118141242350efe0f" Feb 17 17:08:52 crc kubenswrapper[4672]: I0217 17:08:52.284377 4672 scope.go:117] "RemoveContainer" containerID="1f633e2b889bf2198eac30e49374456b6ea76a2b959efaf00b76965d4ce15e29" Feb 17 17:08:52 crc kubenswrapper[4672]: E0217 17:08:52.284946 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f633e2b889bf2198eac30e49374456b6ea76a2b959efaf00b76965d4ce15e29\": container with ID starting with 1f633e2b889bf2198eac30e49374456b6ea76a2b959efaf00b76965d4ce15e29 not found: ID does not exist" containerID="1f633e2b889bf2198eac30e49374456b6ea76a2b959efaf00b76965d4ce15e29" Feb 17 17:08:52 crc kubenswrapper[4672]: I0217 17:08:52.284989 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f633e2b889bf2198eac30e49374456b6ea76a2b959efaf00b76965d4ce15e29"} err="failed to get container status \"1f633e2b889bf2198eac30e49374456b6ea76a2b959efaf00b76965d4ce15e29\": rpc error: code = NotFound desc = could not find container \"1f633e2b889bf2198eac30e49374456b6ea76a2b959efaf00b76965d4ce15e29\": container with ID starting with 1f633e2b889bf2198eac30e49374456b6ea76a2b959efaf00b76965d4ce15e29 not found: ID does 
not exist" Feb 17 17:08:52 crc kubenswrapper[4672]: I0217 17:08:52.285013 4672 scope.go:117] "RemoveContainer" containerID="af3190440f28006d8c319424bec1b69c5b5a3033d997d176c8e9e082f8bb2da1" Feb 17 17:08:52 crc kubenswrapper[4672]: E0217 17:08:52.285405 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af3190440f28006d8c319424bec1b69c5b5a3033d997d176c8e9e082f8bb2da1\": container with ID starting with af3190440f28006d8c319424bec1b69c5b5a3033d997d176c8e9e082f8bb2da1 not found: ID does not exist" containerID="af3190440f28006d8c319424bec1b69c5b5a3033d997d176c8e9e082f8bb2da1" Feb 17 17:08:52 crc kubenswrapper[4672]: I0217 17:08:52.285440 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af3190440f28006d8c319424bec1b69c5b5a3033d997d176c8e9e082f8bb2da1"} err="failed to get container status \"af3190440f28006d8c319424bec1b69c5b5a3033d997d176c8e9e082f8bb2da1\": rpc error: code = NotFound desc = could not find container \"af3190440f28006d8c319424bec1b69c5b5a3033d997d176c8e9e082f8bb2da1\": container with ID starting with af3190440f28006d8c319424bec1b69c5b5a3033d997d176c8e9e082f8bb2da1 not found: ID does not exist" Feb 17 17:08:52 crc kubenswrapper[4672]: I0217 17:08:52.285467 4672 scope.go:117] "RemoveContainer" containerID="0aa7b33afddefaf52520a7423c75542f431fba30372e612118141242350efe0f" Feb 17 17:08:52 crc kubenswrapper[4672]: E0217 17:08:52.285818 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0aa7b33afddefaf52520a7423c75542f431fba30372e612118141242350efe0f\": container with ID starting with 0aa7b33afddefaf52520a7423c75542f431fba30372e612118141242350efe0f not found: ID does not exist" containerID="0aa7b33afddefaf52520a7423c75542f431fba30372e612118141242350efe0f" Feb 17 17:08:52 crc kubenswrapper[4672]: I0217 17:08:52.285845 4672 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0aa7b33afddefaf52520a7423c75542f431fba30372e612118141242350efe0f"} err="failed to get container status \"0aa7b33afddefaf52520a7423c75542f431fba30372e612118141242350efe0f\": rpc error: code = NotFound desc = could not find container \"0aa7b33afddefaf52520a7423c75542f431fba30372e612118141242350efe0f\": container with ID starting with 0aa7b33afddefaf52520a7423c75542f431fba30372e612118141242350efe0f not found: ID does not exist" Feb 17 17:08:53 crc kubenswrapper[4672]: I0217 17:08:53.945280 4672 scope.go:117] "RemoveContainer" containerID="89ca90c0b062cf33d871d85b80e45467b2dbcf33865b48786892cf4297ab65bd" Feb 17 17:08:53 crc kubenswrapper[4672]: E0217 17:08:53.945876 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:08:53 crc kubenswrapper[4672]: I0217 17:08:53.957318 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c47ef162-cbaf-47e9-b385-ee881fbba200" path="/var/lib/kubelet/pods/c47ef162-cbaf-47e9-b385-ee881fbba200/volumes" Feb 17 17:08:59 crc kubenswrapper[4672]: E0217 17:08:59.948871 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:09:01 crc kubenswrapper[4672]: E0217 17:09:01.079117 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing 
source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:09:01 crc kubenswrapper[4672]: E0217 17:09:01.079486 4672 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:09:01 crc kubenswrapper[4672]: E0217 17:09:01.079665 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66h7h644h64ch5f8h565hfch5dh56chfdh8hfdh5b5h567h6dh665h557h74h665hcbh96h659h554h589h57fh5d9h55h564hcfh5dhffhfdq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tx4bs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(9e58ce9b-ddd5-42bb-8e07-08a22c8871a5): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:09:01 crc kubenswrapper[4672]: E0217 17:09:01.080878 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:09:07 crc kubenswrapper[4672]: I0217 17:09:07.946263 4672 scope.go:117] "RemoveContainer" containerID="89ca90c0b062cf33d871d85b80e45467b2dbcf33865b48786892cf4297ab65bd" Feb 17 17:09:07 crc kubenswrapper[4672]: E0217 17:09:07.947981 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:09:12 crc kubenswrapper[4672]: E0217 17:09:12.948105 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:09:15 crc kubenswrapper[4672]: E0217 17:09:15.947313 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:09:21 crc kubenswrapper[4672]: I0217 17:09:21.955117 4672 scope.go:117] "RemoveContainer" containerID="89ca90c0b062cf33d871d85b80e45467b2dbcf33865b48786892cf4297ab65bd" Feb 17 17:09:21 crc kubenswrapper[4672]: E0217 17:09:21.956289 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:09:25 crc kubenswrapper[4672]: E0217 17:09:25.947084 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:09:26 crc kubenswrapper[4672]: E0217 17:09:26.947070 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:09:32 crc kubenswrapper[4672]: I0217 17:09:32.945849 4672 scope.go:117] "RemoveContainer" containerID="89ca90c0b062cf33d871d85b80e45467b2dbcf33865b48786892cf4297ab65bd" Feb 17 17:09:33 crc kubenswrapper[4672]: I0217 17:09:33.672714 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerStarted","Data":"ae16538e69afb4601853cee3d1accdfd070930f88954c0c5c0200101ab1c5053"} Feb 17 17:09:40 crc kubenswrapper[4672]: E0217 17:09:40.946804 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:09:40 crc kubenswrapper[4672]: E0217 17:09:40.947537 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:09:51 crc kubenswrapper[4672]: E0217 17:09:51.961909 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:09:54 crc kubenswrapper[4672]: E0217 17:09:54.947582 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:09:55 crc kubenswrapper[4672]: I0217 17:09:55.035362 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tvt6"] Feb 17 17:09:55 crc kubenswrapper[4672]: E0217 17:09:55.035775 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c47ef162-cbaf-47e9-b385-ee881fbba200" containerName="registry-server" Feb 17 17:09:55 crc kubenswrapper[4672]: I0217 17:09:55.035791 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="c47ef162-cbaf-47e9-b385-ee881fbba200" containerName="registry-server" Feb 17 17:09:55 crc kubenswrapper[4672]: E0217 17:09:55.035822 4672 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c47ef162-cbaf-47e9-b385-ee881fbba200" containerName="extract-content" Feb 17 17:09:55 crc kubenswrapper[4672]: I0217 17:09:55.035830 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="c47ef162-cbaf-47e9-b385-ee881fbba200" containerName="extract-content" Feb 17 17:09:55 crc kubenswrapper[4672]: E0217 17:09:55.035844 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c47ef162-cbaf-47e9-b385-ee881fbba200" containerName="extract-utilities" Feb 17 17:09:55 crc kubenswrapper[4672]: I0217 17:09:55.035851 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="c47ef162-cbaf-47e9-b385-ee881fbba200" containerName="extract-utilities" Feb 17 17:09:55 crc kubenswrapper[4672]: I0217 17:09:55.036167 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="c47ef162-cbaf-47e9-b385-ee881fbba200" containerName="registry-server" Feb 17 17:09:55 crc kubenswrapper[4672]: I0217 17:09:55.037155 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tvt6" Feb 17 17:09:55 crc kubenswrapper[4672]: I0217 17:09:55.039629 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 17:09:55 crc kubenswrapper[4672]: I0217 17:09:55.039801 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 17:09:55 crc kubenswrapper[4672]: I0217 17:09:55.039911 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 17:09:55 crc kubenswrapper[4672]: I0217 17:09:55.040195 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z6sng" Feb 17 17:09:55 crc kubenswrapper[4672]: I0217 17:09:55.077263 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tvt6"] 
Feb 17 17:09:55 crc kubenswrapper[4672]: I0217 17:09:55.152186 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/16cbb615-75bb-4298-90e4-6490dd64dd01-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5tvt6\" (UID: \"16cbb615-75bb-4298-90e4-6490dd64dd01\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tvt6" Feb 17 17:09:55 crc kubenswrapper[4672]: I0217 17:09:55.152353 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16cbb615-75bb-4298-90e4-6490dd64dd01-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5tvt6\" (UID: \"16cbb615-75bb-4298-90e4-6490dd64dd01\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tvt6" Feb 17 17:09:55 crc kubenswrapper[4672]: I0217 17:09:55.152485 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lftdt\" (UniqueName: \"kubernetes.io/projected/16cbb615-75bb-4298-90e4-6490dd64dd01-kube-api-access-lftdt\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5tvt6\" (UID: \"16cbb615-75bb-4298-90e4-6490dd64dd01\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tvt6" Feb 17 17:09:55 crc kubenswrapper[4672]: I0217 17:09:55.254657 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lftdt\" (UniqueName: \"kubernetes.io/projected/16cbb615-75bb-4298-90e4-6490dd64dd01-kube-api-access-lftdt\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5tvt6\" (UID: \"16cbb615-75bb-4298-90e4-6490dd64dd01\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tvt6" Feb 17 17:09:55 crc kubenswrapper[4672]: I0217 17:09:55.254746 4672 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/16cbb615-75bb-4298-90e4-6490dd64dd01-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5tvt6\" (UID: \"16cbb615-75bb-4298-90e4-6490dd64dd01\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tvt6" Feb 17 17:09:55 crc kubenswrapper[4672]: I0217 17:09:55.254838 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16cbb615-75bb-4298-90e4-6490dd64dd01-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5tvt6\" (UID: \"16cbb615-75bb-4298-90e4-6490dd64dd01\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tvt6" Feb 17 17:09:55 crc kubenswrapper[4672]: I0217 17:09:55.260790 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/16cbb615-75bb-4298-90e4-6490dd64dd01-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5tvt6\" (UID: \"16cbb615-75bb-4298-90e4-6490dd64dd01\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tvt6" Feb 17 17:09:55 crc kubenswrapper[4672]: I0217 17:09:55.266248 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16cbb615-75bb-4298-90e4-6490dd64dd01-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5tvt6\" (UID: \"16cbb615-75bb-4298-90e4-6490dd64dd01\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tvt6" Feb 17 17:09:55 crc kubenswrapper[4672]: I0217 17:09:55.277249 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lftdt\" (UniqueName: \"kubernetes.io/projected/16cbb615-75bb-4298-90e4-6490dd64dd01-kube-api-access-lftdt\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-5tvt6\" (UID: 
\"16cbb615-75bb-4298-90e4-6490dd64dd01\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tvt6" Feb 17 17:09:55 crc kubenswrapper[4672]: I0217 17:09:55.374270 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tvt6" Feb 17 17:09:55 crc kubenswrapper[4672]: I0217 17:09:55.934155 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tvt6"] Feb 17 17:09:55 crc kubenswrapper[4672]: W0217 17:09:55.941072 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16cbb615_75bb_4298_90e4_6490dd64dd01.slice/crio-164b57d69a2a5120aa0d831611eaf40893fb8c55b8fecd5aa2e62f99951435cc WatchSource:0}: Error finding container 164b57d69a2a5120aa0d831611eaf40893fb8c55b8fecd5aa2e62f99951435cc: Status 404 returned error can't find the container with id 164b57d69a2a5120aa0d831611eaf40893fb8c55b8fecd5aa2e62f99951435cc Feb 17 17:09:56 crc kubenswrapper[4672]: I0217 17:09:56.916720 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tvt6" event={"ID":"16cbb615-75bb-4298-90e4-6490dd64dd01","Type":"ContainerStarted","Data":"ff8539a6242e86912f7b47abeb903fd831b1fcbc0c6933d4f8adfa238a027935"} Feb 17 17:09:56 crc kubenswrapper[4672]: I0217 17:09:56.917325 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tvt6" event={"ID":"16cbb615-75bb-4298-90e4-6490dd64dd01","Type":"ContainerStarted","Data":"164b57d69a2a5120aa0d831611eaf40893fb8c55b8fecd5aa2e62f99951435cc"} Feb 17 17:09:56 crc kubenswrapper[4672]: I0217 17:09:56.947297 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tvt6" podStartSLOduration=1.503870799 
podStartE2EDuration="1.947273325s" podCreationTimestamp="2026-02-17 17:09:55 +0000 UTC" firstStartedPulling="2026-02-17 17:09:55.943755892 +0000 UTC m=+4004.697844634" lastFinishedPulling="2026-02-17 17:09:56.387158438 +0000 UTC m=+4005.141247160" observedRunningTime="2026-02-17 17:09:56.933248125 +0000 UTC m=+4005.687336867" watchObservedRunningTime="2026-02-17 17:09:56.947273325 +0000 UTC m=+4005.701362057" Feb 17 17:10:03 crc kubenswrapper[4672]: E0217 17:10:03.947614 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:10:09 crc kubenswrapper[4672]: E0217 17:10:09.951121 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:10:17 crc kubenswrapper[4672]: I0217 17:10:17.393834 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rhprn"] Feb 17 17:10:17 crc kubenswrapper[4672]: I0217 17:10:17.396287 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rhprn" Feb 17 17:10:17 crc kubenswrapper[4672]: I0217 17:10:17.411421 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rhprn"] Feb 17 17:10:17 crc kubenswrapper[4672]: I0217 17:10:17.543593 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7551b5e-b547-4155-92e0-27c7257478ad-catalog-content\") pod \"certified-operators-rhprn\" (UID: \"c7551b5e-b547-4155-92e0-27c7257478ad\") " pod="openshift-marketplace/certified-operators-rhprn" Feb 17 17:10:17 crc kubenswrapper[4672]: I0217 17:10:17.543959 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7551b5e-b547-4155-92e0-27c7257478ad-utilities\") pod \"certified-operators-rhprn\" (UID: \"c7551b5e-b547-4155-92e0-27c7257478ad\") " pod="openshift-marketplace/certified-operators-rhprn" Feb 17 17:10:17 crc kubenswrapper[4672]: I0217 17:10:17.544171 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzf9n\" (UniqueName: \"kubernetes.io/projected/c7551b5e-b547-4155-92e0-27c7257478ad-kube-api-access-lzf9n\") pod \"certified-operators-rhprn\" (UID: \"c7551b5e-b547-4155-92e0-27c7257478ad\") " pod="openshift-marketplace/certified-operators-rhprn" Feb 17 17:10:17 crc kubenswrapper[4672]: I0217 17:10:17.646138 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7551b5e-b547-4155-92e0-27c7257478ad-utilities\") pod \"certified-operators-rhprn\" (UID: \"c7551b5e-b547-4155-92e0-27c7257478ad\") " pod="openshift-marketplace/certified-operators-rhprn" Feb 17 17:10:17 crc kubenswrapper[4672]: I0217 17:10:17.646206 4672 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-lzf9n\" (UniqueName: \"kubernetes.io/projected/c7551b5e-b547-4155-92e0-27c7257478ad-kube-api-access-lzf9n\") pod \"certified-operators-rhprn\" (UID: \"c7551b5e-b547-4155-92e0-27c7257478ad\") " pod="openshift-marketplace/certified-operators-rhprn" Feb 17 17:10:17 crc kubenswrapper[4672]: I0217 17:10:17.646291 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7551b5e-b547-4155-92e0-27c7257478ad-catalog-content\") pod \"certified-operators-rhprn\" (UID: \"c7551b5e-b547-4155-92e0-27c7257478ad\") " pod="openshift-marketplace/certified-operators-rhprn" Feb 17 17:10:17 crc kubenswrapper[4672]: I0217 17:10:17.646856 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7551b5e-b547-4155-92e0-27c7257478ad-catalog-content\") pod \"certified-operators-rhprn\" (UID: \"c7551b5e-b547-4155-92e0-27c7257478ad\") " pod="openshift-marketplace/certified-operators-rhprn" Feb 17 17:10:17 crc kubenswrapper[4672]: I0217 17:10:17.646867 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7551b5e-b547-4155-92e0-27c7257478ad-utilities\") pod \"certified-operators-rhprn\" (UID: \"c7551b5e-b547-4155-92e0-27c7257478ad\") " pod="openshift-marketplace/certified-operators-rhprn" Feb 17 17:10:17 crc kubenswrapper[4672]: I0217 17:10:17.670763 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzf9n\" (UniqueName: \"kubernetes.io/projected/c7551b5e-b547-4155-92e0-27c7257478ad-kube-api-access-lzf9n\") pod \"certified-operators-rhprn\" (UID: \"c7551b5e-b547-4155-92e0-27c7257478ad\") " pod="openshift-marketplace/certified-operators-rhprn" Feb 17 17:10:17 crc kubenswrapper[4672]: I0217 17:10:17.713466 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rhprn"
Feb 17 17:10:18 crc kubenswrapper[4672]: I0217 17:10:18.207669 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rhprn"]
Feb 17 17:10:18 crc kubenswrapper[4672]: E0217 17:10:18.946437 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:10:19 crc kubenswrapper[4672]: I0217 17:10:19.133894 4672 generic.go:334] "Generic (PLEG): container finished" podID="c7551b5e-b547-4155-92e0-27c7257478ad" containerID="53614d9bf6f65a8c297e5b4dc98bbd2fa88b3cee210219ad22895d59badcf6b4" exitCode=0
Feb 17 17:10:19 crc kubenswrapper[4672]: I0217 17:10:19.133940 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhprn" event={"ID":"c7551b5e-b547-4155-92e0-27c7257478ad","Type":"ContainerDied","Data":"53614d9bf6f65a8c297e5b4dc98bbd2fa88b3cee210219ad22895d59badcf6b4"}
Feb 17 17:10:19 crc kubenswrapper[4672]: I0217 17:10:19.133970 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhprn" event={"ID":"c7551b5e-b547-4155-92e0-27c7257478ad","Type":"ContainerStarted","Data":"1b3c5009b8b8b7025374642b467d3a43fad8a5d0d7c6a3e82d2c07e57eaa5b12"}
Feb 17 17:10:20 crc kubenswrapper[4672]: I0217 17:10:20.144643 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhprn" event={"ID":"c7551b5e-b547-4155-92e0-27c7257478ad","Type":"ContainerStarted","Data":"bbe0e8147a7888d8aa2bb5e441cda72e59044312e691a662d3d0e6064454320f"}
Feb 17 17:10:22 crc kubenswrapper[4672]: I0217 17:10:22.164277 4672 generic.go:334] "Generic (PLEG): container finished" podID="c7551b5e-b547-4155-92e0-27c7257478ad" containerID="bbe0e8147a7888d8aa2bb5e441cda72e59044312e691a662d3d0e6064454320f" exitCode=0
Feb 17 17:10:22 crc kubenswrapper[4672]: I0217 17:10:22.164351 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhprn" event={"ID":"c7551b5e-b547-4155-92e0-27c7257478ad","Type":"ContainerDied","Data":"bbe0e8147a7888d8aa2bb5e441cda72e59044312e691a662d3d0e6064454320f"}
Feb 17 17:10:23 crc kubenswrapper[4672]: I0217 17:10:23.177196 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhprn" event={"ID":"c7551b5e-b547-4155-92e0-27c7257478ad","Type":"ContainerStarted","Data":"ad3fa1ec413a512ae50fc728610a5f37d850905cb47be30bfed09a22975fa158"}
Feb 17 17:10:23 crc kubenswrapper[4672]: I0217 17:10:23.201542 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rhprn" podStartSLOduration=2.795964617 podStartE2EDuration="6.201496591s" podCreationTimestamp="2026-02-17 17:10:17 +0000 UTC" firstStartedPulling="2026-02-17 17:10:19.135769258 +0000 UTC m=+4027.889858030" lastFinishedPulling="2026-02-17 17:10:22.541301272 +0000 UTC m=+4031.295390004" observedRunningTime="2026-02-17 17:10:23.191944458 +0000 UTC m=+4031.946033200" watchObservedRunningTime="2026-02-17 17:10:23.201496591 +0000 UTC m=+4031.955585323"
Feb 17 17:10:23 crc kubenswrapper[4672]: E0217 17:10:23.953467 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:10:27 crc kubenswrapper[4672]: I0217 17:10:27.731565 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rhprn"
Feb 17 17:10:27 crc kubenswrapper[4672]: I0217 17:10:27.732387 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rhprn"
Feb 17 17:10:27 crc kubenswrapper[4672]: I0217 17:10:27.795959 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rhprn"
Feb 17 17:10:28 crc kubenswrapper[4672]: I0217 17:10:28.288464 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rhprn"
Feb 17 17:10:28 crc kubenswrapper[4672]: I0217 17:10:28.342586 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rhprn"]
Feb 17 17:10:29 crc kubenswrapper[4672]: E0217 17:10:29.949742 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:10:30 crc kubenswrapper[4672]: I0217 17:10:30.253230 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rhprn" podUID="c7551b5e-b547-4155-92e0-27c7257478ad" containerName="registry-server" containerID="cri-o://ad3fa1ec413a512ae50fc728610a5f37d850905cb47be30bfed09a22975fa158" gracePeriod=2
Feb 17 17:10:30 crc kubenswrapper[4672]: I0217 17:10:30.799209 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rhprn"
Feb 17 17:10:30 crc kubenswrapper[4672]: I0217 17:10:30.915080 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7551b5e-b547-4155-92e0-27c7257478ad-catalog-content\") pod \"c7551b5e-b547-4155-92e0-27c7257478ad\" (UID: \"c7551b5e-b547-4155-92e0-27c7257478ad\") "
Feb 17 17:10:30 crc kubenswrapper[4672]: I0217 17:10:30.915271 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7551b5e-b547-4155-92e0-27c7257478ad-utilities\") pod \"c7551b5e-b547-4155-92e0-27c7257478ad\" (UID: \"c7551b5e-b547-4155-92e0-27c7257478ad\") "
Feb 17 17:10:30 crc kubenswrapper[4672]: I0217 17:10:30.915317 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf9n\" (UniqueName: \"kubernetes.io/projected/c7551b5e-b547-4155-92e0-27c7257478ad-kube-api-access-lzf9n\") pod \"c7551b5e-b547-4155-92e0-27c7257478ad\" (UID: \"c7551b5e-b547-4155-92e0-27c7257478ad\") "
Feb 17 17:10:30 crc kubenswrapper[4672]: I0217 17:10:30.915962 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7551b5e-b547-4155-92e0-27c7257478ad-utilities" (OuterVolumeSpecName: "utilities") pod "c7551b5e-b547-4155-92e0-27c7257478ad" (UID: "c7551b5e-b547-4155-92e0-27c7257478ad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 17:10:30 crc kubenswrapper[4672]: I0217 17:10:30.921772 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7551b5e-b547-4155-92e0-27c7257478ad-kube-api-access-lzf9n" (OuterVolumeSpecName: "kube-api-access-lzf9n") pod "c7551b5e-b547-4155-92e0-27c7257478ad" (UID: "c7551b5e-b547-4155-92e0-27c7257478ad"). InnerVolumeSpecName "kube-api-access-lzf9n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 17:10:31 crc kubenswrapper[4672]: I0217 17:10:31.017982 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7551b5e-b547-4155-92e0-27c7257478ad-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 17:10:31 crc kubenswrapper[4672]: I0217 17:10:31.018023 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf9n\" (UniqueName: \"kubernetes.io/projected/c7551b5e-b547-4155-92e0-27c7257478ad-kube-api-access-lzf9n\") on node \"crc\" DevicePath \"\""
Feb 17 17:10:31 crc kubenswrapper[4672]: I0217 17:10:31.157272 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7551b5e-b547-4155-92e0-27c7257478ad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c7551b5e-b547-4155-92e0-27c7257478ad" (UID: "c7551b5e-b547-4155-92e0-27c7257478ad"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 17:10:31 crc kubenswrapper[4672]: I0217 17:10:31.223344 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7551b5e-b547-4155-92e0-27c7257478ad-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 17:10:31 crc kubenswrapper[4672]: I0217 17:10:31.266008 4672 generic.go:334] "Generic (PLEG): container finished" podID="c7551b5e-b547-4155-92e0-27c7257478ad" containerID="ad3fa1ec413a512ae50fc728610a5f37d850905cb47be30bfed09a22975fa158" exitCode=0
Feb 17 17:10:31 crc kubenswrapper[4672]: I0217 17:10:31.266070 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhprn" event={"ID":"c7551b5e-b547-4155-92e0-27c7257478ad","Type":"ContainerDied","Data":"ad3fa1ec413a512ae50fc728610a5f37d850905cb47be30bfed09a22975fa158"}
Feb 17 17:10:31 crc kubenswrapper[4672]: I0217 17:10:31.266156 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhprn" event={"ID":"c7551b5e-b547-4155-92e0-27c7257478ad","Type":"ContainerDied","Data":"1b3c5009b8b8b7025374642b467d3a43fad8a5d0d7c6a3e82d2c07e57eaa5b12"}
Feb 17 17:10:31 crc kubenswrapper[4672]: I0217 17:10:31.266164 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rhprn"
Feb 17 17:10:31 crc kubenswrapper[4672]: I0217 17:10:31.266179 4672 scope.go:117] "RemoveContainer" containerID="ad3fa1ec413a512ae50fc728610a5f37d850905cb47be30bfed09a22975fa158"
Feb 17 17:10:31 crc kubenswrapper[4672]: I0217 17:10:31.299691 4672 scope.go:117] "RemoveContainer" containerID="bbe0e8147a7888d8aa2bb5e441cda72e59044312e691a662d3d0e6064454320f"
Feb 17 17:10:31 crc kubenswrapper[4672]: I0217 17:10:31.320985 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rhprn"]
Feb 17 17:10:31 crc kubenswrapper[4672]: I0217 17:10:31.332875 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rhprn"]
Feb 17 17:10:31 crc kubenswrapper[4672]: I0217 17:10:31.578857 4672 scope.go:117] "RemoveContainer" containerID="53614d9bf6f65a8c297e5b4dc98bbd2fa88b3cee210219ad22895d59badcf6b4"
Feb 17 17:10:31 crc kubenswrapper[4672]: I0217 17:10:31.639995 4672 scope.go:117] "RemoveContainer" containerID="ad3fa1ec413a512ae50fc728610a5f37d850905cb47be30bfed09a22975fa158"
Feb 17 17:10:31 crc kubenswrapper[4672]: E0217 17:10:31.640831 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad3fa1ec413a512ae50fc728610a5f37d850905cb47be30bfed09a22975fa158\": container with ID starting with ad3fa1ec413a512ae50fc728610a5f37d850905cb47be30bfed09a22975fa158 not found: ID does not exist" containerID="ad3fa1ec413a512ae50fc728610a5f37d850905cb47be30bfed09a22975fa158"
Feb 17 17:10:31 crc kubenswrapper[4672]: I0217 17:10:31.640876 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad3fa1ec413a512ae50fc728610a5f37d850905cb47be30bfed09a22975fa158"} err="failed to get container status \"ad3fa1ec413a512ae50fc728610a5f37d850905cb47be30bfed09a22975fa158\": rpc error: code = NotFound desc = could not find container \"ad3fa1ec413a512ae50fc728610a5f37d850905cb47be30bfed09a22975fa158\": container with ID starting with ad3fa1ec413a512ae50fc728610a5f37d850905cb47be30bfed09a22975fa158 not found: ID does not exist"
Feb 17 17:10:31 crc kubenswrapper[4672]: I0217 17:10:31.640903 4672 scope.go:117] "RemoveContainer" containerID="bbe0e8147a7888d8aa2bb5e441cda72e59044312e691a662d3d0e6064454320f"
Feb 17 17:10:31 crc kubenswrapper[4672]: E0217 17:10:31.641136 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbe0e8147a7888d8aa2bb5e441cda72e59044312e691a662d3d0e6064454320f\": container with ID starting with bbe0e8147a7888d8aa2bb5e441cda72e59044312e691a662d3d0e6064454320f not found: ID does not exist" containerID="bbe0e8147a7888d8aa2bb5e441cda72e59044312e691a662d3d0e6064454320f"
Feb 17 17:10:31 crc kubenswrapper[4672]: I0217 17:10:31.641158 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbe0e8147a7888d8aa2bb5e441cda72e59044312e691a662d3d0e6064454320f"} err="failed to get container status \"bbe0e8147a7888d8aa2bb5e441cda72e59044312e691a662d3d0e6064454320f\": rpc error: code = NotFound desc = could not find container \"bbe0e8147a7888d8aa2bb5e441cda72e59044312e691a662d3d0e6064454320f\": container with ID starting with bbe0e8147a7888d8aa2bb5e441cda72e59044312e691a662d3d0e6064454320f not found: ID does not exist"
Feb 17 17:10:31 crc kubenswrapper[4672]: I0217 17:10:31.641173 4672 scope.go:117] "RemoveContainer" containerID="53614d9bf6f65a8c297e5b4dc98bbd2fa88b3cee210219ad22895d59badcf6b4"
Feb 17 17:10:31 crc kubenswrapper[4672]: E0217 17:10:31.641455 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53614d9bf6f65a8c297e5b4dc98bbd2fa88b3cee210219ad22895d59badcf6b4\": container with ID starting with 53614d9bf6f65a8c297e5b4dc98bbd2fa88b3cee210219ad22895d59badcf6b4 not found: ID does not exist" containerID="53614d9bf6f65a8c297e5b4dc98bbd2fa88b3cee210219ad22895d59badcf6b4"
Feb 17 17:10:31 crc kubenswrapper[4672]: I0217 17:10:31.641477 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53614d9bf6f65a8c297e5b4dc98bbd2fa88b3cee210219ad22895d59badcf6b4"} err="failed to get container status \"53614d9bf6f65a8c297e5b4dc98bbd2fa88b3cee210219ad22895d59badcf6b4\": rpc error: code = NotFound desc = could not find container \"53614d9bf6f65a8c297e5b4dc98bbd2fa88b3cee210219ad22895d59badcf6b4\": container with ID starting with 53614d9bf6f65a8c297e5b4dc98bbd2fa88b3cee210219ad22895d59badcf6b4 not found: ID does not exist"
Feb 17 17:10:31 crc kubenswrapper[4672]: I0217 17:10:31.955984 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7551b5e-b547-4155-92e0-27c7257478ad" path="/var/lib/kubelet/pods/c7551b5e-b547-4155-92e0-27c7257478ad/volumes"
Feb 17 17:10:35 crc kubenswrapper[4672]: E0217 17:10:35.947798 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:10:41 crc kubenswrapper[4672]: E0217 17:10:41.974648 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:10:47 crc kubenswrapper[4672]: E0217 17:10:47.947231 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:10:53 crc kubenswrapper[4672]: E0217 17:10:53.947254 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:11:01 crc kubenswrapper[4672]: E0217 17:11:01.953649 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:11:05 crc kubenswrapper[4672]: E0217 17:11:05.946932 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:11:14 crc kubenswrapper[4672]: E0217 17:11:14.950027 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:11:20 crc kubenswrapper[4672]: E0217 17:11:20.947112 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:11:28 crc kubenswrapper[4672]: E0217 17:11:28.946938 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:11:34 crc kubenswrapper[4672]: E0217 17:11:34.948368 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:11:41 crc kubenswrapper[4672]: E0217 17:11:41.953027 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:11:49 crc kubenswrapper[4672]: E0217 17:11:49.948527 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:11:54 crc kubenswrapper[4672]: E0217 17:11:54.947976 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:11:57 crc kubenswrapper[4672]: I0217 17:11:57.566000 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 17:11:57 crc kubenswrapper[4672]: I0217 17:11:57.566370 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 17:12:04 crc kubenswrapper[4672]: E0217 17:12:04.950327 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:12:06 crc kubenswrapper[4672]: E0217 17:12:06.948225 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:12:17 crc kubenswrapper[4672]: E0217 17:12:17.947914 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:12:20 crc kubenswrapper[4672]: E0217 17:12:20.946954 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:12:27 crc kubenswrapper[4672]: I0217 17:12:27.565465 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 17:12:27 crc kubenswrapper[4672]: I0217 17:12:27.566278 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 17:12:28 crc kubenswrapper[4672]: E0217 17:12:28.947079 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:12:33 crc kubenswrapper[4672]: E0217 17:12:33.947431 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:12:43 crc kubenswrapper[4672]: E0217 17:12:43.946778 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:12:47 crc kubenswrapper[4672]: E0217 17:12:47.949747 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:12:57 crc kubenswrapper[4672]: I0217 17:12:57.566405 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 17:12:57 crc kubenswrapper[4672]: I0217 17:12:57.566898 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 17:12:57 crc kubenswrapper[4672]: I0217 17:12:57.566960 4672 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs"
Feb 17 17:12:57 crc kubenswrapper[4672]: I0217 17:12:57.567956 4672 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ae16538e69afb4601853cee3d1accdfd070930f88954c0c5c0200101ab1c5053"} pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 17 17:12:57 crc kubenswrapper[4672]: I0217 17:12:57.568045 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" containerID="cri-o://ae16538e69afb4601853cee3d1accdfd070930f88954c0c5c0200101ab1c5053" gracePeriod=600
Feb 17 17:12:57 crc kubenswrapper[4672]: I0217 17:12:57.740685 4672 generic.go:334] "Generic (PLEG): container finished" podID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerID="ae16538e69afb4601853cee3d1accdfd070930f88954c0c5c0200101ab1c5053" exitCode=0
Feb 17 17:12:57 crc kubenswrapper[4672]: I0217 17:12:57.740744 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerDied","Data":"ae16538e69afb4601853cee3d1accdfd070930f88954c0c5c0200101ab1c5053"}
Feb 17 17:12:57 crc kubenswrapper[4672]: I0217 17:12:57.741002 4672 scope.go:117] "RemoveContainer" containerID="89ca90c0b062cf33d871d85b80e45467b2dbcf33865b48786892cf4297ab65bd"
Feb 17 17:12:57 crc kubenswrapper[4672]: E0217 17:12:57.947755 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:12:58 crc kubenswrapper[4672]: I0217 17:12:58.751213 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerStarted","Data":"3ab9a5bb7bcca494f2468e60d90c6fbb76c4f0fac85c8d2f812ff26f9021b93f"}
Feb 17 17:13:01 crc kubenswrapper[4672]: E0217 17:13:01.962385 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:13:05 crc kubenswrapper[4672]: I0217 17:13:05.265137 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8xcpl"]
Feb 17 17:13:05 crc kubenswrapper[4672]: E0217 17:13:05.266383 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7551b5e-b547-4155-92e0-27c7257478ad" containerName="registry-server"
Feb 17 17:13:05 crc kubenswrapper[4672]: I0217 17:13:05.266399 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7551b5e-b547-4155-92e0-27c7257478ad" containerName="registry-server"
Feb 17 17:13:05 crc kubenswrapper[4672]: E0217 17:13:05.266433 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7551b5e-b547-4155-92e0-27c7257478ad" containerName="extract-content"
Feb 17 17:13:05 crc kubenswrapper[4672]: I0217 17:13:05.266441 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7551b5e-b547-4155-92e0-27c7257478ad" containerName="extract-content"
Feb 17 17:13:05 crc kubenswrapper[4672]: E0217 17:13:05.266457 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7551b5e-b547-4155-92e0-27c7257478ad" containerName="extract-utilities"
Feb 17 17:13:05 crc kubenswrapper[4672]: I0217 17:13:05.266466 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7551b5e-b547-4155-92e0-27c7257478ad" containerName="extract-utilities"
Feb 17 17:13:05 crc kubenswrapper[4672]: I0217 17:13:05.266691 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7551b5e-b547-4155-92e0-27c7257478ad" containerName="registry-server"
Feb 17 17:13:05 crc kubenswrapper[4672]: I0217 17:13:05.268149 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8xcpl"
Feb 17 17:13:05 crc kubenswrapper[4672]: I0217 17:13:05.276245 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8xcpl"]
Feb 17 17:13:05 crc kubenswrapper[4672]: I0217 17:13:05.400559 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9e8f83c-2596-4c1b-ba9d-e72aa9fce370-catalog-content\") pod \"redhat-marketplace-8xcpl\" (UID: \"c9e8f83c-2596-4c1b-ba9d-e72aa9fce370\") " pod="openshift-marketplace/redhat-marketplace-8xcpl"
Feb 17 17:13:05 crc kubenswrapper[4672]: I0217 17:13:05.402413 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9e8f83c-2596-4c1b-ba9d-e72aa9fce370-utilities\") pod \"redhat-marketplace-8xcpl\" (UID: \"c9e8f83c-2596-4c1b-ba9d-e72aa9fce370\") " pod="openshift-marketplace/redhat-marketplace-8xcpl"
Feb 17 17:13:05 crc kubenswrapper[4672]: I0217 17:13:05.402533 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k598n\" (UniqueName: \"kubernetes.io/projected/c9e8f83c-2596-4c1b-ba9d-e72aa9fce370-kube-api-access-k598n\") pod \"redhat-marketplace-8xcpl\" (UID: \"c9e8f83c-2596-4c1b-ba9d-e72aa9fce370\") " pod="openshift-marketplace/redhat-marketplace-8xcpl"
Feb 17 17:13:05 crc kubenswrapper[4672]: I0217 17:13:05.504158 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9e8f83c-2596-4c1b-ba9d-e72aa9fce370-utilities\") pod \"redhat-marketplace-8xcpl\" (UID: \"c9e8f83c-2596-4c1b-ba9d-e72aa9fce370\") " pod="openshift-marketplace/redhat-marketplace-8xcpl"
Feb 17 17:13:05 crc kubenswrapper[4672]: I0217 17:13:05.504231 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k598n\" (UniqueName: \"kubernetes.io/projected/c9e8f83c-2596-4c1b-ba9d-e72aa9fce370-kube-api-access-k598n\") pod \"redhat-marketplace-8xcpl\" (UID: \"c9e8f83c-2596-4c1b-ba9d-e72aa9fce370\") " pod="openshift-marketplace/redhat-marketplace-8xcpl"
Feb 17 17:13:05 crc kubenswrapper[4672]: I0217 17:13:05.504313 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9e8f83c-2596-4c1b-ba9d-e72aa9fce370-catalog-content\") pod \"redhat-marketplace-8xcpl\" (UID: \"c9e8f83c-2596-4c1b-ba9d-e72aa9fce370\") " pod="openshift-marketplace/redhat-marketplace-8xcpl"
Feb 17 17:13:05 crc kubenswrapper[4672]: I0217 17:13:05.504816 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9e8f83c-2596-4c1b-ba9d-e72aa9fce370-utilities\") pod \"redhat-marketplace-8xcpl\" (UID: \"c9e8f83c-2596-4c1b-ba9d-e72aa9fce370\") " pod="openshift-marketplace/redhat-marketplace-8xcpl"
Feb 17 17:13:05 crc kubenswrapper[4672]: I0217 17:13:05.504919 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9e8f83c-2596-4c1b-ba9d-e72aa9fce370-catalog-content\") pod \"redhat-marketplace-8xcpl\" (UID: \"c9e8f83c-2596-4c1b-ba9d-e72aa9fce370\") " pod="openshift-marketplace/redhat-marketplace-8xcpl"
Feb 17 17:13:05 crc kubenswrapper[4672]: I0217 17:13:05.535220 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k598n\" (UniqueName: \"kubernetes.io/projected/c9e8f83c-2596-4c1b-ba9d-e72aa9fce370-kube-api-access-k598n\") pod \"redhat-marketplace-8xcpl\" (UID: \"c9e8f83c-2596-4c1b-ba9d-e72aa9fce370\") " pod="openshift-marketplace/redhat-marketplace-8xcpl"
Feb 17 17:13:05 crc kubenswrapper[4672]: I0217 17:13:05.630415 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8xcpl"
Feb 17 17:13:06 crc kubenswrapper[4672]: I0217 17:13:06.195247 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8xcpl"]
Feb 17 17:13:06 crc kubenswrapper[4672]: I0217 17:13:06.847186 4672 generic.go:334] "Generic (PLEG): container finished" podID="c9e8f83c-2596-4c1b-ba9d-e72aa9fce370" containerID="50f1d8ef160f2c0a35f753cae5004cc8483a1d8b08cdce590f1750d2ff2f833c" exitCode=0
Feb 17 17:13:06 crc kubenswrapper[4672]: I0217 17:13:06.847354 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8xcpl" event={"ID":"c9e8f83c-2596-4c1b-ba9d-e72aa9fce370","Type":"ContainerDied","Data":"50f1d8ef160f2c0a35f753cae5004cc8483a1d8b08cdce590f1750d2ff2f833c"}
Feb 17 17:13:06 crc kubenswrapper[4672]: I0217 17:13:06.847715 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8xcpl" event={"ID":"c9e8f83c-2596-4c1b-ba9d-e72aa9fce370","Type":"ContainerStarted","Data":"16885e38f8217cfb3f5d0a71ee0128166823ade582155d7c7c3ba63fd4509719"}
Feb 17 17:13:07 crc kubenswrapper[4672]: I0217 17:13:07.861775 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8xcpl" event={"ID":"c9e8f83c-2596-4c1b-ba9d-e72aa9fce370","Type":"ContainerStarted","Data":"9b5322cb118c6b9a810ac9b233ce3f6a86f779b213da54154f8e176ca086d460"}
Feb 17 17:13:08 crc kubenswrapper[4672]: I0217 17:13:08.878614 4672 generic.go:334] "Generic (PLEG): container finished" podID="c9e8f83c-2596-4c1b-ba9d-e72aa9fce370" containerID="9b5322cb118c6b9a810ac9b233ce3f6a86f779b213da54154f8e176ca086d460" exitCode=0
Feb 17 17:13:08 crc kubenswrapper[4672]: I0217 17:13:08.878949 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8xcpl" event={"ID":"c9e8f83c-2596-4c1b-ba9d-e72aa9fce370","Type":"ContainerDied","Data":"9b5322cb118c6b9a810ac9b233ce3f6a86f779b213da54154f8e176ca086d460"}
Feb 17 17:13:08 crc kubenswrapper[4672]: E0217 17:13:08.946655 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:13:09 crc kubenswrapper[4672]: I0217 17:13:09.897985 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8xcpl" event={"ID":"c9e8f83c-2596-4c1b-ba9d-e72aa9fce370","Type":"ContainerStarted","Data":"7dca14b4051e8c108241833d5b9da59737b9ea3551a3f144c94b689c6c7a7242"}
Feb 17 17:13:09 crc kubenswrapper[4672]: I0217 17:13:09.923810 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8xcpl" podStartSLOduration=2.476349267 podStartE2EDuration="4.923782793s" podCreationTimestamp="2026-02-17 17:13:05 +0000 UTC" firstStartedPulling="2026-02-17 17:13:06.848870728 +0000 UTC m=+4195.602959460" lastFinishedPulling="2026-02-17 17:13:09.296304244 +0000 UTC m=+4198.050392986" observedRunningTime="2026-02-17 17:13:09.918877314 +0000 UTC m=+4198.672966046" watchObservedRunningTime="2026-02-17 17:13:09.923782793 +0000 UTC m=+4198.677871535"
Feb 17 17:13:15 crc kubenswrapper[4672]: I0217 17:13:15.631384 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8xcpl"
Feb 17 17:13:15 crc kubenswrapper[4672]: I0217 17:13:15.631994 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8xcpl"
Feb 17 17:13:15 crc kubenswrapper[4672]: I0217 17:13:15.687673 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8xcpl"
Feb 17 17:13:15 crc kubenswrapper[4672]: E0217 17:13:15.947505 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:13:16 crc kubenswrapper[4672]: I0217 17:13:16.022644 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8xcpl"
Feb 17 17:13:16 crc kubenswrapper[4672]: I0217 17:13:16.076574 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8xcpl"]
Feb 17 17:13:17 crc kubenswrapper[4672]: I0217 17:13:17.982077 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8xcpl"
podUID="c9e8f83c-2596-4c1b-ba9d-e72aa9fce370" containerName="registry-server" containerID="cri-o://7dca14b4051e8c108241833d5b9da59737b9ea3551a3f144c94b689c6c7a7242" gracePeriod=2 Feb 17 17:13:18 crc kubenswrapper[4672]: I0217 17:13:18.522317 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8xcpl" Feb 17 17:13:18 crc kubenswrapper[4672]: I0217 17:13:18.592830 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9e8f83c-2596-4c1b-ba9d-e72aa9fce370-utilities\") pod \"c9e8f83c-2596-4c1b-ba9d-e72aa9fce370\" (UID: \"c9e8f83c-2596-4c1b-ba9d-e72aa9fce370\") " Feb 17 17:13:18 crc kubenswrapper[4672]: I0217 17:13:18.593028 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k598n\" (UniqueName: \"kubernetes.io/projected/c9e8f83c-2596-4c1b-ba9d-e72aa9fce370-kube-api-access-k598n\") pod \"c9e8f83c-2596-4c1b-ba9d-e72aa9fce370\" (UID: \"c9e8f83c-2596-4c1b-ba9d-e72aa9fce370\") " Feb 17 17:13:18 crc kubenswrapper[4672]: I0217 17:13:18.593099 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9e8f83c-2596-4c1b-ba9d-e72aa9fce370-catalog-content\") pod \"c9e8f83c-2596-4c1b-ba9d-e72aa9fce370\" (UID: \"c9e8f83c-2596-4c1b-ba9d-e72aa9fce370\") " Feb 17 17:13:18 crc kubenswrapper[4672]: I0217 17:13:18.594179 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9e8f83c-2596-4c1b-ba9d-e72aa9fce370-utilities" (OuterVolumeSpecName: "utilities") pod "c9e8f83c-2596-4c1b-ba9d-e72aa9fce370" (UID: "c9e8f83c-2596-4c1b-ba9d-e72aa9fce370"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:13:18 crc kubenswrapper[4672]: I0217 17:13:18.600158 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9e8f83c-2596-4c1b-ba9d-e72aa9fce370-kube-api-access-k598n" (OuterVolumeSpecName: "kube-api-access-k598n") pod "c9e8f83c-2596-4c1b-ba9d-e72aa9fce370" (UID: "c9e8f83c-2596-4c1b-ba9d-e72aa9fce370"). InnerVolumeSpecName "kube-api-access-k598n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:13:18 crc kubenswrapper[4672]: I0217 17:13:18.616774 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9e8f83c-2596-4c1b-ba9d-e72aa9fce370-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c9e8f83c-2596-4c1b-ba9d-e72aa9fce370" (UID: "c9e8f83c-2596-4c1b-ba9d-e72aa9fce370"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:13:18 crc kubenswrapper[4672]: I0217 17:13:18.695476 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k598n\" (UniqueName: \"kubernetes.io/projected/c9e8f83c-2596-4c1b-ba9d-e72aa9fce370-kube-api-access-k598n\") on node \"crc\" DevicePath \"\"" Feb 17 17:13:18 crc kubenswrapper[4672]: I0217 17:13:18.695525 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9e8f83c-2596-4c1b-ba9d-e72aa9fce370-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:13:18 crc kubenswrapper[4672]: I0217 17:13:18.695539 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9e8f83c-2596-4c1b-ba9d-e72aa9fce370-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:13:18 crc kubenswrapper[4672]: I0217 17:13:18.993941 4672 generic.go:334] "Generic (PLEG): container finished" podID="c9e8f83c-2596-4c1b-ba9d-e72aa9fce370" 
containerID="7dca14b4051e8c108241833d5b9da59737b9ea3551a3f144c94b689c6c7a7242" exitCode=0 Feb 17 17:13:18 crc kubenswrapper[4672]: I0217 17:13:18.994000 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8xcpl" event={"ID":"c9e8f83c-2596-4c1b-ba9d-e72aa9fce370","Type":"ContainerDied","Data":"7dca14b4051e8c108241833d5b9da59737b9ea3551a3f144c94b689c6c7a7242"} Feb 17 17:13:18 crc kubenswrapper[4672]: I0217 17:13:18.994038 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8xcpl" event={"ID":"c9e8f83c-2596-4c1b-ba9d-e72aa9fce370","Type":"ContainerDied","Data":"16885e38f8217cfb3f5d0a71ee0128166823ade582155d7c7c3ba63fd4509719"} Feb 17 17:13:18 crc kubenswrapper[4672]: I0217 17:13:18.994060 4672 scope.go:117] "RemoveContainer" containerID="7dca14b4051e8c108241833d5b9da59737b9ea3551a3f144c94b689c6c7a7242" Feb 17 17:13:18 crc kubenswrapper[4672]: I0217 17:13:18.994249 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8xcpl" Feb 17 17:13:19 crc kubenswrapper[4672]: I0217 17:13:19.024598 4672 scope.go:117] "RemoveContainer" containerID="9b5322cb118c6b9a810ac9b233ce3f6a86f779b213da54154f8e176ca086d460" Feb 17 17:13:19 crc kubenswrapper[4672]: I0217 17:13:19.043076 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8xcpl"] Feb 17 17:13:19 crc kubenswrapper[4672]: I0217 17:13:19.053755 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8xcpl"] Feb 17 17:13:19 crc kubenswrapper[4672]: I0217 17:13:19.057379 4672 scope.go:117] "RemoveContainer" containerID="50f1d8ef160f2c0a35f753cae5004cc8483a1d8b08cdce590f1750d2ff2f833c" Feb 17 17:13:19 crc kubenswrapper[4672]: I0217 17:13:19.112565 4672 scope.go:117] "RemoveContainer" containerID="7dca14b4051e8c108241833d5b9da59737b9ea3551a3f144c94b689c6c7a7242" Feb 17 17:13:19 crc kubenswrapper[4672]: E0217 17:13:19.113173 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dca14b4051e8c108241833d5b9da59737b9ea3551a3f144c94b689c6c7a7242\": container with ID starting with 7dca14b4051e8c108241833d5b9da59737b9ea3551a3f144c94b689c6c7a7242 not found: ID does not exist" containerID="7dca14b4051e8c108241833d5b9da59737b9ea3551a3f144c94b689c6c7a7242" Feb 17 17:13:19 crc kubenswrapper[4672]: I0217 17:13:19.113231 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dca14b4051e8c108241833d5b9da59737b9ea3551a3f144c94b689c6c7a7242"} err="failed to get container status \"7dca14b4051e8c108241833d5b9da59737b9ea3551a3f144c94b689c6c7a7242\": rpc error: code = NotFound desc = could not find container \"7dca14b4051e8c108241833d5b9da59737b9ea3551a3f144c94b689c6c7a7242\": container with ID starting with 7dca14b4051e8c108241833d5b9da59737b9ea3551a3f144c94b689c6c7a7242 not found: 
ID does not exist" Feb 17 17:13:19 crc kubenswrapper[4672]: I0217 17:13:19.113262 4672 scope.go:117] "RemoveContainer" containerID="9b5322cb118c6b9a810ac9b233ce3f6a86f779b213da54154f8e176ca086d460" Feb 17 17:13:19 crc kubenswrapper[4672]: E0217 17:13:19.114946 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b5322cb118c6b9a810ac9b233ce3f6a86f779b213da54154f8e176ca086d460\": container with ID starting with 9b5322cb118c6b9a810ac9b233ce3f6a86f779b213da54154f8e176ca086d460 not found: ID does not exist" containerID="9b5322cb118c6b9a810ac9b233ce3f6a86f779b213da54154f8e176ca086d460" Feb 17 17:13:19 crc kubenswrapper[4672]: I0217 17:13:19.114977 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b5322cb118c6b9a810ac9b233ce3f6a86f779b213da54154f8e176ca086d460"} err="failed to get container status \"9b5322cb118c6b9a810ac9b233ce3f6a86f779b213da54154f8e176ca086d460\": rpc error: code = NotFound desc = could not find container \"9b5322cb118c6b9a810ac9b233ce3f6a86f779b213da54154f8e176ca086d460\": container with ID starting with 9b5322cb118c6b9a810ac9b233ce3f6a86f779b213da54154f8e176ca086d460 not found: ID does not exist" Feb 17 17:13:19 crc kubenswrapper[4672]: I0217 17:13:19.114997 4672 scope.go:117] "RemoveContainer" containerID="50f1d8ef160f2c0a35f753cae5004cc8483a1d8b08cdce590f1750d2ff2f833c" Feb 17 17:13:19 crc kubenswrapper[4672]: E0217 17:13:19.115355 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50f1d8ef160f2c0a35f753cae5004cc8483a1d8b08cdce590f1750d2ff2f833c\": container with ID starting with 50f1d8ef160f2c0a35f753cae5004cc8483a1d8b08cdce590f1750d2ff2f833c not found: ID does not exist" containerID="50f1d8ef160f2c0a35f753cae5004cc8483a1d8b08cdce590f1750d2ff2f833c" Feb 17 17:13:19 crc kubenswrapper[4672]: I0217 17:13:19.115404 4672 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50f1d8ef160f2c0a35f753cae5004cc8483a1d8b08cdce590f1750d2ff2f833c"} err="failed to get container status \"50f1d8ef160f2c0a35f753cae5004cc8483a1d8b08cdce590f1750d2ff2f833c\": rpc error: code = NotFound desc = could not find container \"50f1d8ef160f2c0a35f753cae5004cc8483a1d8b08cdce590f1750d2ff2f833c\": container with ID starting with 50f1d8ef160f2c0a35f753cae5004cc8483a1d8b08cdce590f1750d2ff2f833c not found: ID does not exist" Feb 17 17:13:19 crc kubenswrapper[4672]: E0217 17:13:19.143038 4672 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9e8f83c_2596_4c1b_ba9d_e72aa9fce370.slice\": RecentStats: unable to find data in memory cache]" Feb 17 17:13:19 crc kubenswrapper[4672]: I0217 17:13:19.960671 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9e8f83c-2596-4c1b-ba9d-e72aa9fce370" path="/var/lib/kubelet/pods/c9e8f83c-2596-4c1b-ba9d-e72aa9fce370/volumes" Feb 17 17:13:22 crc kubenswrapper[4672]: E0217 17:13:22.947807 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:13:28 crc kubenswrapper[4672]: E0217 17:13:28.948093 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:13:34 crc kubenswrapper[4672]: E0217 17:13:34.947410 4672 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:13:40 crc kubenswrapper[4672]: E0217 17:13:40.948125 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:13:46 crc kubenswrapper[4672]: E0217 17:13:46.947579 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:13:54 crc kubenswrapper[4672]: E0217 17:13:54.946851 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:13:57 crc kubenswrapper[4672]: I0217 17:13:57.967972 4672 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 17:13:58 crc kubenswrapper[4672]: E0217 17:13:58.075193 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest 
current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 17:13:58 crc kubenswrapper[4672]: E0217 17:13:58.075260 4672 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 17:13:58 crc kubenswrapper[4672]: E0217 17:13:58.075426 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nq9ps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:
false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-qrhj8_openstack(dc5471f5-2491-4841-be45-09c8f14b35c0): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:13:58 crc kubenswrapper[4672]: E0217 17:13:58.076559 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:14:08 crc kubenswrapper[4672]: E0217 17:14:08.946717 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:14:10 crc kubenswrapper[4672]: E0217 17:14:10.042456 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:14:10 crc kubenswrapper[4672]: E0217 17:14:10.042813 4672 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:14:10 crc kubenswrapper[4672]: E0217 17:14:10.042977 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66h7h644h64ch5f8h565hfch5dh56chfdh8hfdh5b5h567h6dh665h557h74h665hcbh96h659h554h589h57fh5d9h55h564hcfh5dhffhfdq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tx4bs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(9e58ce9b-ddd5-42bb-8e07-08a22c8871a5): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:14:10 crc kubenswrapper[4672]: E0217 17:14:10.044175 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:14:21 crc kubenswrapper[4672]: E0217 17:14:21.955401 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:14:22 crc kubenswrapper[4672]: E0217 17:14:22.946625 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:14:34 crc kubenswrapper[4672]: E0217 17:14:34.947160 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:14:36 crc kubenswrapper[4672]: I0217 17:14:36.122946 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-844c787c5c-l2cm9" podUID="9f2974b4-c465-4d64-b3b3-e79e4d1b74a2" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 17 17:14:37 crc kubenswrapper[4672]: E0217 17:14:37.948705 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" 
pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:14:45 crc kubenswrapper[4672]: E0217 17:14:45.947302 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:14:51 crc kubenswrapper[4672]: E0217 17:14:51.963040 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:14:57 crc kubenswrapper[4672]: I0217 17:14:57.565820 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:14:57 crc kubenswrapper[4672]: I0217 17:14:57.566460 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:14:58 crc kubenswrapper[4672]: E0217 17:14:58.947227 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:15:00 crc kubenswrapper[4672]: I0217 17:15:00.162063 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522475-qgp8d"]
Feb 17 17:15:00 crc kubenswrapper[4672]: E0217 17:15:00.162685 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9e8f83c-2596-4c1b-ba9d-e72aa9fce370" containerName="extract-content"
Feb 17 17:15:00 crc kubenswrapper[4672]: I0217 17:15:00.162703 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9e8f83c-2596-4c1b-ba9d-e72aa9fce370" containerName="extract-content"
Feb 17 17:15:00 crc kubenswrapper[4672]: E0217 17:15:00.162736 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9e8f83c-2596-4c1b-ba9d-e72aa9fce370" containerName="extract-utilities"
Feb 17 17:15:00 crc kubenswrapper[4672]: I0217 17:15:00.162746 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9e8f83c-2596-4c1b-ba9d-e72aa9fce370" containerName="extract-utilities"
Feb 17 17:15:00 crc kubenswrapper[4672]: E0217 17:15:00.162764 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9e8f83c-2596-4c1b-ba9d-e72aa9fce370" containerName="registry-server"
Feb 17 17:15:00 crc kubenswrapper[4672]: I0217 17:15:00.162775 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9e8f83c-2596-4c1b-ba9d-e72aa9fce370" containerName="registry-server"
Feb 17 17:15:00 crc kubenswrapper[4672]: I0217 17:15:00.163053 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9e8f83c-2596-4c1b-ba9d-e72aa9fce370" containerName="registry-server"
Feb 17 17:15:00 crc kubenswrapper[4672]: I0217 17:15:00.164123 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-qgp8d"
Feb 17 17:15:00 crc kubenswrapper[4672]: I0217 17:15:00.167469 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 17 17:15:00 crc kubenswrapper[4672]: I0217 17:15:00.167968 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 17 17:15:00 crc kubenswrapper[4672]: I0217 17:15:00.186132 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522475-qgp8d"]
Feb 17 17:15:00 crc kubenswrapper[4672]: I0217 17:15:00.265260 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv2ml\" (UniqueName: \"kubernetes.io/projected/24656782-3b91-4930-a98d-bb741bcc62f3-kube-api-access-qv2ml\") pod \"collect-profiles-29522475-qgp8d\" (UID: \"24656782-3b91-4930-a98d-bb741bcc62f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-qgp8d"
Feb 17 17:15:00 crc kubenswrapper[4672]: I0217 17:15:00.265462 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24656782-3b91-4930-a98d-bb741bcc62f3-secret-volume\") pod \"collect-profiles-29522475-qgp8d\" (UID: \"24656782-3b91-4930-a98d-bb741bcc62f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-qgp8d"
Feb 17 17:15:00 crc kubenswrapper[4672]: I0217 17:15:00.265619 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24656782-3b91-4930-a98d-bb741bcc62f3-config-volume\") pod \"collect-profiles-29522475-qgp8d\" (UID: \"24656782-3b91-4930-a98d-bb741bcc62f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-qgp8d"
Feb 17 17:15:00 crc kubenswrapper[4672]: I0217 17:15:00.367742 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24656782-3b91-4930-a98d-bb741bcc62f3-secret-volume\") pod \"collect-profiles-29522475-qgp8d\" (UID: \"24656782-3b91-4930-a98d-bb741bcc62f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-qgp8d"
Feb 17 17:15:00 crc kubenswrapper[4672]: I0217 17:15:00.367943 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24656782-3b91-4930-a98d-bb741bcc62f3-config-volume\") pod \"collect-profiles-29522475-qgp8d\" (UID: \"24656782-3b91-4930-a98d-bb741bcc62f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-qgp8d"
Feb 17 17:15:00 crc kubenswrapper[4672]: I0217 17:15:00.368223 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv2ml\" (UniqueName: \"kubernetes.io/projected/24656782-3b91-4930-a98d-bb741bcc62f3-kube-api-access-qv2ml\") pod \"collect-profiles-29522475-qgp8d\" (UID: \"24656782-3b91-4930-a98d-bb741bcc62f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-qgp8d"
Feb 17 17:15:00 crc kubenswrapper[4672]: I0217 17:15:00.371478 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24656782-3b91-4930-a98d-bb741bcc62f3-config-volume\") pod \"collect-profiles-29522475-qgp8d\" (UID: \"24656782-3b91-4930-a98d-bb741bcc62f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-qgp8d"
Feb 17 17:15:00 crc kubenswrapper[4672]: I0217 17:15:00.379934 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24656782-3b91-4930-a98d-bb741bcc62f3-secret-volume\") pod \"collect-profiles-29522475-qgp8d\" (UID: \"24656782-3b91-4930-a98d-bb741bcc62f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-qgp8d"
Feb 17 17:15:00 crc kubenswrapper[4672]: I0217 17:15:00.392851 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv2ml\" (UniqueName: \"kubernetes.io/projected/24656782-3b91-4930-a98d-bb741bcc62f3-kube-api-access-qv2ml\") pod \"collect-profiles-29522475-qgp8d\" (UID: \"24656782-3b91-4930-a98d-bb741bcc62f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-qgp8d"
Feb 17 17:15:00 crc kubenswrapper[4672]: I0217 17:15:00.485611 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-qgp8d"
Feb 17 17:15:00 crc kubenswrapper[4672]: I0217 17:15:00.968498 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522475-qgp8d"]
Feb 17 17:15:01 crc kubenswrapper[4672]: I0217 17:15:01.957312 4672 generic.go:334] "Generic (PLEG): container finished" podID="24656782-3b91-4930-a98d-bb741bcc62f3" containerID="f08ce2cb206666f2dabbf8ad493c3a01442d105e24d598ec8cdf71877f49cb9c" exitCode=0
Feb 17 17:15:01 crc kubenswrapper[4672]: I0217 17:15:01.957855 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-qgp8d" event={"ID":"24656782-3b91-4930-a98d-bb741bcc62f3","Type":"ContainerDied","Data":"f08ce2cb206666f2dabbf8ad493c3a01442d105e24d598ec8cdf71877f49cb9c"}
Feb 17 17:15:01 crc kubenswrapper[4672]: I0217 17:15:01.958998 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-qgp8d" event={"ID":"24656782-3b91-4930-a98d-bb741bcc62f3","Type":"ContainerStarted","Data":"da3570e3f4b496a5dbd2b6b33d9d42723e7176117a437c9725a65849c5725a2a"}
Feb 17 17:15:03 crc kubenswrapper[4672]: I0217 17:15:03.410842 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-qgp8d"
Feb 17 17:15:03 crc kubenswrapper[4672]: I0217 17:15:03.429217 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24656782-3b91-4930-a98d-bb741bcc62f3-config-volume\") pod \"24656782-3b91-4930-a98d-bb741bcc62f3\" (UID: \"24656782-3b91-4930-a98d-bb741bcc62f3\") "
Feb 17 17:15:03 crc kubenswrapper[4672]: I0217 17:15:03.429466 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24656782-3b91-4930-a98d-bb741bcc62f3-secret-volume\") pod \"24656782-3b91-4930-a98d-bb741bcc62f3\" (UID: \"24656782-3b91-4930-a98d-bb741bcc62f3\") "
Feb 17 17:15:03 crc kubenswrapper[4672]: I0217 17:15:03.429600 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qv2ml\" (UniqueName: \"kubernetes.io/projected/24656782-3b91-4930-a98d-bb741bcc62f3-kube-api-access-qv2ml\") pod \"24656782-3b91-4930-a98d-bb741bcc62f3\" (UID: \"24656782-3b91-4930-a98d-bb741bcc62f3\") "
Feb 17 17:15:03 crc kubenswrapper[4672]: I0217 17:15:03.430078 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24656782-3b91-4930-a98d-bb741bcc62f3-config-volume" (OuterVolumeSpecName: "config-volume") pod "24656782-3b91-4930-a98d-bb741bcc62f3" (UID: "24656782-3b91-4930-a98d-bb741bcc62f3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 17:15:03 crc kubenswrapper[4672]: I0217 17:15:03.436494 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24656782-3b91-4930-a98d-bb741bcc62f3-kube-api-access-qv2ml" (OuterVolumeSpecName: "kube-api-access-qv2ml") pod "24656782-3b91-4930-a98d-bb741bcc62f3" (UID: "24656782-3b91-4930-a98d-bb741bcc62f3"). InnerVolumeSpecName "kube-api-access-qv2ml". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 17:15:03 crc kubenswrapper[4672]: I0217 17:15:03.436650 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24656782-3b91-4930-a98d-bb741bcc62f3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "24656782-3b91-4930-a98d-bb741bcc62f3" (UID: "24656782-3b91-4930-a98d-bb741bcc62f3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 17:15:03 crc kubenswrapper[4672]: I0217 17:15:03.531733 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qv2ml\" (UniqueName: \"kubernetes.io/projected/24656782-3b91-4930-a98d-bb741bcc62f3-kube-api-access-qv2ml\") on node \"crc\" DevicePath \"\""
Feb 17 17:15:03 crc kubenswrapper[4672]: I0217 17:15:03.531783 4672 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24656782-3b91-4930-a98d-bb741bcc62f3-config-volume\") on node \"crc\" DevicePath \"\""
Feb 17 17:15:03 crc kubenswrapper[4672]: I0217 17:15:03.531796 4672 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24656782-3b91-4930-a98d-bb741bcc62f3-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 17 17:15:03 crc kubenswrapper[4672]: I0217 17:15:03.985380 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-qgp8d" event={"ID":"24656782-3b91-4930-a98d-bb741bcc62f3","Type":"ContainerDied","Data":"da3570e3f4b496a5dbd2b6b33d9d42723e7176117a437c9725a65849c5725a2a"}
Feb 17 17:15:03 crc kubenswrapper[4672]: I0217 17:15:03.985427 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da3570e3f4b496a5dbd2b6b33d9d42723e7176117a437c9725a65849c5725a2a"
Feb 17 17:15:03 crc kubenswrapper[4672]: I0217 17:15:03.985433 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-qgp8d"
Feb 17 17:15:04 crc kubenswrapper[4672]: I0217 17:15:04.487805 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522430-mvmlp"]
Feb 17 17:15:04 crc kubenswrapper[4672]: I0217 17:15:04.499103 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522430-mvmlp"]
Feb 17 17:15:04 crc kubenswrapper[4672]: E0217 17:15:04.948236 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:15:05 crc kubenswrapper[4672]: I0217 17:15:05.959279 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7c33111-5ce4-4e4d-b36c-58896f808426" path="/var/lib/kubelet/pods/a7c33111-5ce4-4e4d-b36c-58896f808426/volumes"
Feb 17 17:15:13 crc kubenswrapper[4672]: E0217 17:15:13.949014 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:15:19 crc kubenswrapper[4672]: I0217 17:15:19.583784 4672 scope.go:117] "RemoveContainer" containerID="c83dce869c5f306dabb7d2a96a97af980ba8258cb2d22b0fd1cd077022c17de5"
Feb 17 17:15:19 crc kubenswrapper[4672]: E0217 17:15:19.947125 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:15:25 crc kubenswrapper[4672]: E0217 17:15:25.946624 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:15:27 crc kubenswrapper[4672]: I0217 17:15:27.565929 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 17:15:27 crc kubenswrapper[4672]: I0217 17:15:27.566316 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 17:15:32 crc kubenswrapper[4672]: E0217 17:15:32.946898 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:15:40 crc kubenswrapper[4672]: E0217 17:15:40.947300 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:15:47 crc kubenswrapper[4672]: E0217 17:15:47.947809 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:15:55 crc kubenswrapper[4672]: E0217 17:15:55.948906 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:15:57 crc kubenswrapper[4672]: I0217 17:15:57.565735 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 17:15:57 crc kubenswrapper[4672]: I0217 17:15:57.566065 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 17:15:57 crc kubenswrapper[4672]: I0217 17:15:57.566108 4672 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs"
Feb 17 17:15:57 crc kubenswrapper[4672]: I0217 17:15:57.566879 4672 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3ab9a5bb7bcca494f2468e60d90c6fbb76c4f0fac85c8d2f812ff26f9021b93f"} pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 17 17:15:57 crc kubenswrapper[4672]: I0217 17:15:57.566941 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" containerID="cri-o://3ab9a5bb7bcca494f2468e60d90c6fbb76c4f0fac85c8d2f812ff26f9021b93f" gracePeriod=600
Feb 17 17:15:57 crc kubenswrapper[4672]: E0217 17:15:57.698896 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 17:15:58 crc kubenswrapper[4672]: I0217 17:15:58.541147 4672 generic.go:334] "Generic (PLEG): container finished" podID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerID="3ab9a5bb7bcca494f2468e60d90c6fbb76c4f0fac85c8d2f812ff26f9021b93f" exitCode=0
Feb 17 17:15:58 crc kubenswrapper[4672]: I0217 17:15:58.541237 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerDied","Data":"3ab9a5bb7bcca494f2468e60d90c6fbb76c4f0fac85c8d2f812ff26f9021b93f"}
Feb 17 17:15:58 crc kubenswrapper[4672]: I0217 17:15:58.541582 4672 scope.go:117] "RemoveContainer" containerID="ae16538e69afb4601853cee3d1accdfd070930f88954c0c5c0200101ab1c5053"
Feb 17 17:15:58 crc kubenswrapper[4672]: I0217 17:15:58.542237 4672 scope.go:117] "RemoveContainer" containerID="3ab9a5bb7bcca494f2468e60d90c6fbb76c4f0fac85c8d2f812ff26f9021b93f"
Feb 17 17:15:58 crc kubenswrapper[4672]: E0217 17:15:58.542630 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 17:16:02 crc kubenswrapper[4672]: E0217 17:16:02.946649 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:16:09 crc kubenswrapper[4672]: E0217 17:16:09.947754 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:16:10 crc kubenswrapper[4672]: I0217 17:16:10.945203 4672 scope.go:117] "RemoveContainer" containerID="3ab9a5bb7bcca494f2468e60d90c6fbb76c4f0fac85c8d2f812ff26f9021b93f"
Feb 17 17:16:10 crc kubenswrapper[4672]: E0217 17:16:10.945712 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 17:16:13 crc kubenswrapper[4672]: I0217 17:16:13.702547 4672 generic.go:334] "Generic (PLEG): container finished" podID="16cbb615-75bb-4298-90e4-6490dd64dd01" containerID="ff8539a6242e86912f7b47abeb903fd831b1fcbc0c6933d4f8adfa238a027935" exitCode=2
Feb 17 17:16:13 crc kubenswrapper[4672]: I0217 17:16:13.703012 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tvt6" event={"ID":"16cbb615-75bb-4298-90e4-6490dd64dd01","Type":"ContainerDied","Data":"ff8539a6242e86912f7b47abeb903fd831b1fcbc0c6933d4f8adfa238a027935"}
Feb 17 17:16:15 crc kubenswrapper[4672]: I0217 17:16:15.209905 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tvt6"
Feb 17 17:16:15 crc kubenswrapper[4672]: I0217 17:16:15.364191 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/16cbb615-75bb-4298-90e4-6490dd64dd01-ssh-key-openstack-edpm-ipam\") pod \"16cbb615-75bb-4298-90e4-6490dd64dd01\" (UID: \"16cbb615-75bb-4298-90e4-6490dd64dd01\") "
Feb 17 17:16:15 crc kubenswrapper[4672]: I0217 17:16:15.364325 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16cbb615-75bb-4298-90e4-6490dd64dd01-inventory\") pod \"16cbb615-75bb-4298-90e4-6490dd64dd01\" (UID: \"16cbb615-75bb-4298-90e4-6490dd64dd01\") "
Feb 17 17:16:15 crc kubenswrapper[4672]: I0217 17:16:15.364418 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lftdt\" (UniqueName: \"kubernetes.io/projected/16cbb615-75bb-4298-90e4-6490dd64dd01-kube-api-access-lftdt\") pod \"16cbb615-75bb-4298-90e4-6490dd64dd01\" (UID: \"16cbb615-75bb-4298-90e4-6490dd64dd01\") "
Feb 17 17:16:15 crc kubenswrapper[4672]: I0217 17:16:15.375112 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16cbb615-75bb-4298-90e4-6490dd64dd01-kube-api-access-lftdt" (OuterVolumeSpecName: "kube-api-access-lftdt") pod "16cbb615-75bb-4298-90e4-6490dd64dd01" (UID: "16cbb615-75bb-4298-90e4-6490dd64dd01"). InnerVolumeSpecName "kube-api-access-lftdt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 17:16:15 crc kubenswrapper[4672]: I0217 17:16:15.396199 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16cbb615-75bb-4298-90e4-6490dd64dd01-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "16cbb615-75bb-4298-90e4-6490dd64dd01" (UID: "16cbb615-75bb-4298-90e4-6490dd64dd01"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 17:16:15 crc kubenswrapper[4672]: I0217 17:16:15.397727 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16cbb615-75bb-4298-90e4-6490dd64dd01-inventory" (OuterVolumeSpecName: "inventory") pod "16cbb615-75bb-4298-90e4-6490dd64dd01" (UID: "16cbb615-75bb-4298-90e4-6490dd64dd01"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 17:16:15 crc kubenswrapper[4672]: I0217 17:16:15.466414 4672 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/16cbb615-75bb-4298-90e4-6490dd64dd01-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 17 17:16:15 crc kubenswrapper[4672]: I0217 17:16:15.466445 4672 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16cbb615-75bb-4298-90e4-6490dd64dd01-inventory\") on node \"crc\" DevicePath \"\""
Feb 17 17:16:15 crc kubenswrapper[4672]: I0217 17:16:15.466455 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lftdt\" (UniqueName: \"kubernetes.io/projected/16cbb615-75bb-4298-90e4-6490dd64dd01-kube-api-access-lftdt\") on node \"crc\" DevicePath \"\""
Feb 17 17:16:15 crc kubenswrapper[4672]: I0217 17:16:15.727183 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tvt6" event={"ID":"16cbb615-75bb-4298-90e4-6490dd64dd01","Type":"ContainerDied","Data":"164b57d69a2a5120aa0d831611eaf40893fb8c55b8fecd5aa2e62f99951435cc"}
Feb 17 17:16:15 crc kubenswrapper[4672]: I0217 17:16:15.727228 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="164b57d69a2a5120aa0d831611eaf40893fb8c55b8fecd5aa2e62f99951435cc"
Feb 17 17:16:15 crc kubenswrapper[4672]: I0217 17:16:15.727248 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-5tvt6"
Feb 17 17:16:17 crc kubenswrapper[4672]: E0217 17:16:17.947811 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:16:20 crc kubenswrapper[4672]: E0217 17:16:20.947854 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:16:21 crc kubenswrapper[4672]: I0217 17:16:21.741550 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gnnfs"]
Feb 17 17:16:21 crc kubenswrapper[4672]: E0217 17:16:21.742698 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16cbb615-75bb-4298-90e4-6490dd64dd01" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 17 17:16:21 crc kubenswrapper[4672]: I0217 17:16:21.742723 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="16cbb615-75bb-4298-90e4-6490dd64dd01" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 17 17:16:21 crc kubenswrapper[4672]: E0217 17:16:21.742751 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24656782-3b91-4930-a98d-bb741bcc62f3" containerName="collect-profiles"
Feb 17 17:16:21 crc kubenswrapper[4672]: I0217 17:16:21.742760 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="24656782-3b91-4930-a98d-bb741bcc62f3" containerName="collect-profiles"
Feb 17 17:16:21 crc kubenswrapper[4672]: I0217 17:16:21.743027 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="24656782-3b91-4930-a98d-bb741bcc62f3" containerName="collect-profiles"
Feb 17 17:16:21 crc kubenswrapper[4672]: I0217 17:16:21.743047 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="16cbb615-75bb-4298-90e4-6490dd64dd01" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 17 17:16:21 crc kubenswrapper[4672]: I0217 17:16:21.744963 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gnnfs"
Feb 17 17:16:21 crc kubenswrapper[4672]: I0217 17:16:21.756924 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gnnfs"]
Feb 17 17:16:21 crc kubenswrapper[4672]: I0217 17:16:21.903219 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnzkl\" (UniqueName: \"kubernetes.io/projected/c7abddea-5b20-4b02-ae80-aabf9bf66bfd-kube-api-access-tnzkl\") pod \"community-operators-gnnfs\" (UID: \"c7abddea-5b20-4b02-ae80-aabf9bf66bfd\") " pod="openshift-marketplace/community-operators-gnnfs"
Feb 17 17:16:21 crc kubenswrapper[4672]: I0217 17:16:21.903280 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7abddea-5b20-4b02-ae80-aabf9bf66bfd-catalog-content\") pod \"community-operators-gnnfs\" (UID: \"c7abddea-5b20-4b02-ae80-aabf9bf66bfd\") " pod="openshift-marketplace/community-operators-gnnfs"
Feb 17 17:16:21 crc kubenswrapper[4672]: I0217 17:16:21.903309 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7abddea-5b20-4b02-ae80-aabf9bf66bfd-utilities\") pod \"community-operators-gnnfs\" (UID: \"c7abddea-5b20-4b02-ae80-aabf9bf66bfd\") " pod="openshift-marketplace/community-operators-gnnfs"
Feb 17 17:16:22 crc kubenswrapper[4672]: I0217 17:16:22.005293 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnzkl\" (UniqueName: \"kubernetes.io/projected/c7abddea-5b20-4b02-ae80-aabf9bf66bfd-kube-api-access-tnzkl\") pod \"community-operators-gnnfs\" (UID: \"c7abddea-5b20-4b02-ae80-aabf9bf66bfd\") " pod="openshift-marketplace/community-operators-gnnfs"
Feb 17 17:16:22 crc kubenswrapper[4672]: I0217 17:16:22.005359 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7abddea-5b20-4b02-ae80-aabf9bf66bfd-catalog-content\") pod \"community-operators-gnnfs\" (UID: \"c7abddea-5b20-4b02-ae80-aabf9bf66bfd\") " pod="openshift-marketplace/community-operators-gnnfs"
Feb 17 17:16:22 crc kubenswrapper[4672]: I0217 17:16:22.005390 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7abddea-5b20-4b02-ae80-aabf9bf66bfd-utilities\") pod \"community-operators-gnnfs\" (UID: \"c7abddea-5b20-4b02-ae80-aabf9bf66bfd\") " pod="openshift-marketplace/community-operators-gnnfs"
Feb 17 17:16:22 crc kubenswrapper[4672]: I0217 17:16:22.005852 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7abddea-5b20-4b02-ae80-aabf9bf66bfd-utilities\") pod \"community-operators-gnnfs\" (UID: \"c7abddea-5b20-4b02-ae80-aabf9bf66bfd\") " pod="openshift-marketplace/community-operators-gnnfs"
Feb 17 17:16:22 crc kubenswrapper[4672]: I0217 17:16:22.006320 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7abddea-5b20-4b02-ae80-aabf9bf66bfd-catalog-content\") pod \"community-operators-gnnfs\" (UID: \"c7abddea-5b20-4b02-ae80-aabf9bf66bfd\") " pod="openshift-marketplace/community-operators-gnnfs"
Feb 17 17:16:22 crc kubenswrapper[4672]: I0217 17:16:22.269272 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnzkl\" (UniqueName: \"kubernetes.io/projected/c7abddea-5b20-4b02-ae80-aabf9bf66bfd-kube-api-access-tnzkl\") pod \"community-operators-gnnfs\" (UID: \"c7abddea-5b20-4b02-ae80-aabf9bf66bfd\") " pod="openshift-marketplace/community-operators-gnnfs"
Feb 17 17:16:22 crc kubenswrapper[4672]: I0217 17:16:22.379954 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gnnfs"
Feb 17 17:16:23 crc kubenswrapper[4672]: I0217 17:16:22.879396 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gnnfs"]
Feb 17 17:16:23 crc kubenswrapper[4672]: I0217 17:16:23.863900 4672 generic.go:334] "Generic (PLEG): container finished" podID="c7abddea-5b20-4b02-ae80-aabf9bf66bfd" containerID="e27083d52ed5c1d5fb8f382c347946a511720df20de70b247e835d41d3c07d96" exitCode=0
Feb 17 17:16:23 crc kubenswrapper[4672]: I0217 17:16:23.864067 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gnnfs" event={"ID":"c7abddea-5b20-4b02-ae80-aabf9bf66bfd","Type":"ContainerDied","Data":"e27083d52ed5c1d5fb8f382c347946a511720df20de70b247e835d41d3c07d96"}
Feb 17 17:16:23 crc kubenswrapper[4672]: I0217 17:16:23.864713 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gnnfs" event={"ID":"c7abddea-5b20-4b02-ae80-aabf9bf66bfd","Type":"ContainerStarted","Data":"5fbb91c0da5a9729792f9a11b234344657afc94d174ba126d28ffc00cd4fb5c7"}
Feb 17 17:16:23 crc kubenswrapper[4672]: I0217 17:16:23.944897 4672 scope.go:117] "RemoveContainer" containerID="3ab9a5bb7bcca494f2468e60d90c6fbb76c4f0fac85c8d2f812ff26f9021b93f"
Feb 17 17:16:23 crc kubenswrapper[4672]: E0217 17:16:23.945224 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 17:16:25 crc kubenswrapper[4672]: I0217 17:16:25.885899 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gnnfs" event={"ID":"c7abddea-5b20-4b02-ae80-aabf9bf66bfd","Type":"ContainerStarted","Data":"85b4f6d8d9b163775c7d50072d86b1bab84b60f9b85098aeb4958da9b30e7fe7"}
Feb 17 17:16:26 crc kubenswrapper[4672]: I0217 17:16:26.897575 4672 generic.go:334] "Generic (PLEG): container finished" podID="c7abddea-5b20-4b02-ae80-aabf9bf66bfd" containerID="85b4f6d8d9b163775c7d50072d86b1bab84b60f9b85098aeb4958da9b30e7fe7" exitCode=0
Feb 17 17:16:26 crc kubenswrapper[4672]: I0217 17:16:26.897673 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gnnfs" event={"ID":"c7abddea-5b20-4b02-ae80-aabf9bf66bfd","Type":"ContainerDied","Data":"85b4f6d8d9b163775c7d50072d86b1bab84b60f9b85098aeb4958da9b30e7fe7"}
Feb 17 17:16:27 crc kubenswrapper[4672]: I0217 17:16:27.908861 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gnnfs" event={"ID":"c7abddea-5b20-4b02-ae80-aabf9bf66bfd","Type":"ContainerStarted","Data":"ea12402f767786f09fd630ef7290ec593320fa8b565694a8657ed1ed5e9c4b33"}
Feb 17 17:16:27 crc kubenswrapper[4672]: I0217 17:16:27.930096 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gnnfs" podStartSLOduration=3.495839002 podStartE2EDuration="6.930078499s" podCreationTimestamp="2026-02-17 17:16:21 +0000 UTC" firstStartedPulling="2026-02-17 17:16:23.865994509 +0000 UTC m=+4392.620083261" lastFinishedPulling="2026-02-17 17:16:27.300234026 +0000 UTC m=+4396.054322758" observedRunningTime="2026-02-17 17:16:27.928906978 +0000 UTC m=+4396.682995700" watchObservedRunningTime="2026-02-17 17:16:27.930078499 +0000 UTC m=+4396.684167231"
Feb 17 17:16:31 crc kubenswrapper[4672]: E0217 17:16:31.954763 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:16:31 crc kubenswrapper[4672]: E0217 17:16:31.954893 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:16:32 crc kubenswrapper[4672]: I0217 17:16:32.380844 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gnnfs"
Feb 17 17:16:32 crc kubenswrapper[4672]: I0217 17:16:32.380906 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gnnfs"
Feb 17 17:16:33 crc kubenswrapper[4672]: I0217 17:16:33.011013 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gnnfs"
Feb 17 17:16:33 crc kubenswrapper[4672]: I0217 17:16:33.072974 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gnnfs"
Feb 17 17:16:33 crc kubenswrapper[4672]: I0217 17:16:33.249372 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gnnfs"]
Feb 17 17:16:35 crc kubenswrapper[4672]: I0217 17:16:35.004399 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gnnfs" podUID="c7abddea-5b20-4b02-ae80-aabf9bf66bfd" containerName="registry-server" containerID="cri-o://ea12402f767786f09fd630ef7290ec593320fa8b565694a8657ed1ed5e9c4b33" gracePeriod=2
Feb 17 17:16:35 crc kubenswrapper[4672]: I0217 17:16:35.533777 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gnnfs"
Feb 17 17:16:35 crc kubenswrapper[4672]: I0217 17:16:35.628813 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7abddea-5b20-4b02-ae80-aabf9bf66bfd-utilities\") pod \"c7abddea-5b20-4b02-ae80-aabf9bf66bfd\" (UID: \"c7abddea-5b20-4b02-ae80-aabf9bf66bfd\") "
Feb 17 17:16:35 crc kubenswrapper[4672]: I0217 17:16:35.628914 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7abddea-5b20-4b02-ae80-aabf9bf66bfd-catalog-content\") pod \"c7abddea-5b20-4b02-ae80-aabf9bf66bfd\" (UID: \"c7abddea-5b20-4b02-ae80-aabf9bf66bfd\") "
Feb 17 17:16:35 crc kubenswrapper[4672]: I0217 17:16:35.629043 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnzkl\" (UniqueName: \"kubernetes.io/projected/c7abddea-5b20-4b02-ae80-aabf9bf66bfd-kube-api-access-tnzkl\") pod \"c7abddea-5b20-4b02-ae80-aabf9bf66bfd\" (UID: \"c7abddea-5b20-4b02-ae80-aabf9bf66bfd\") "
Feb 17 17:16:35 crc kubenswrapper[4672]: I0217 17:16:35.629973 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7abddea-5b20-4b02-ae80-aabf9bf66bfd-utilities" (OuterVolumeSpecName: "utilities") pod "c7abddea-5b20-4b02-ae80-aabf9bf66bfd" (UID: "c7abddea-5b20-4b02-ae80-aabf9bf66bfd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 17:16:35 crc kubenswrapper[4672]: I0217 17:16:35.636862 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7abddea-5b20-4b02-ae80-aabf9bf66bfd-kube-api-access-tnzkl" (OuterVolumeSpecName: "kube-api-access-tnzkl") pod "c7abddea-5b20-4b02-ae80-aabf9bf66bfd" (UID: "c7abddea-5b20-4b02-ae80-aabf9bf66bfd"). InnerVolumeSpecName "kube-api-access-tnzkl".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:16:35 crc kubenswrapper[4672]: I0217 17:16:35.680616 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7abddea-5b20-4b02-ae80-aabf9bf66bfd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c7abddea-5b20-4b02-ae80-aabf9bf66bfd" (UID: "c7abddea-5b20-4b02-ae80-aabf9bf66bfd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:16:35 crc kubenswrapper[4672]: I0217 17:16:35.732122 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7abddea-5b20-4b02-ae80-aabf9bf66bfd-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:16:35 crc kubenswrapper[4672]: I0217 17:16:35.732155 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7abddea-5b20-4b02-ae80-aabf9bf66bfd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:16:35 crc kubenswrapper[4672]: I0217 17:16:35.732167 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnzkl\" (UniqueName: \"kubernetes.io/projected/c7abddea-5b20-4b02-ae80-aabf9bf66bfd-kube-api-access-tnzkl\") on node \"crc\" DevicePath \"\"" Feb 17 17:16:36 crc kubenswrapper[4672]: I0217 17:16:36.043530 4672 generic.go:334] "Generic (PLEG): container finished" podID="c7abddea-5b20-4b02-ae80-aabf9bf66bfd" containerID="ea12402f767786f09fd630ef7290ec593320fa8b565694a8657ed1ed5e9c4b33" exitCode=0 Feb 17 17:16:36 crc kubenswrapper[4672]: I0217 17:16:36.043585 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gnnfs" event={"ID":"c7abddea-5b20-4b02-ae80-aabf9bf66bfd","Type":"ContainerDied","Data":"ea12402f767786f09fd630ef7290ec593320fa8b565694a8657ed1ed5e9c4b33"} Feb 17 17:16:36 crc kubenswrapper[4672]: I0217 17:16:36.043619 4672 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-gnnfs" event={"ID":"c7abddea-5b20-4b02-ae80-aabf9bf66bfd","Type":"ContainerDied","Data":"5fbb91c0da5a9729792f9a11b234344657afc94d174ba126d28ffc00cd4fb5c7"} Feb 17 17:16:36 crc kubenswrapper[4672]: I0217 17:16:36.043643 4672 scope.go:117] "RemoveContainer" containerID="ea12402f767786f09fd630ef7290ec593320fa8b565694a8657ed1ed5e9c4b33" Feb 17 17:16:36 crc kubenswrapper[4672]: I0217 17:16:36.043655 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gnnfs" Feb 17 17:16:36 crc kubenswrapper[4672]: I0217 17:16:36.070837 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gnnfs"] Feb 17 17:16:36 crc kubenswrapper[4672]: I0217 17:16:36.084598 4672 scope.go:117] "RemoveContainer" containerID="85b4f6d8d9b163775c7d50072d86b1bab84b60f9b85098aeb4958da9b30e7fe7" Feb 17 17:16:36 crc kubenswrapper[4672]: I0217 17:16:36.087802 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gnnfs"] Feb 17 17:16:36 crc kubenswrapper[4672]: I0217 17:16:36.110782 4672 scope.go:117] "RemoveContainer" containerID="e27083d52ed5c1d5fb8f382c347946a511720df20de70b247e835d41d3c07d96" Feb 17 17:16:36 crc kubenswrapper[4672]: I0217 17:16:36.159072 4672 scope.go:117] "RemoveContainer" containerID="ea12402f767786f09fd630ef7290ec593320fa8b565694a8657ed1ed5e9c4b33" Feb 17 17:16:36 crc kubenswrapper[4672]: E0217 17:16:36.159821 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea12402f767786f09fd630ef7290ec593320fa8b565694a8657ed1ed5e9c4b33\": container with ID starting with ea12402f767786f09fd630ef7290ec593320fa8b565694a8657ed1ed5e9c4b33 not found: ID does not exist" containerID="ea12402f767786f09fd630ef7290ec593320fa8b565694a8657ed1ed5e9c4b33" Feb 17 17:16:36 crc kubenswrapper[4672]: I0217 
17:16:36.159875 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea12402f767786f09fd630ef7290ec593320fa8b565694a8657ed1ed5e9c4b33"} err="failed to get container status \"ea12402f767786f09fd630ef7290ec593320fa8b565694a8657ed1ed5e9c4b33\": rpc error: code = NotFound desc = could not find container \"ea12402f767786f09fd630ef7290ec593320fa8b565694a8657ed1ed5e9c4b33\": container with ID starting with ea12402f767786f09fd630ef7290ec593320fa8b565694a8657ed1ed5e9c4b33 not found: ID does not exist" Feb 17 17:16:36 crc kubenswrapper[4672]: I0217 17:16:36.159909 4672 scope.go:117] "RemoveContainer" containerID="85b4f6d8d9b163775c7d50072d86b1bab84b60f9b85098aeb4958da9b30e7fe7" Feb 17 17:16:36 crc kubenswrapper[4672]: E0217 17:16:36.160500 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85b4f6d8d9b163775c7d50072d86b1bab84b60f9b85098aeb4958da9b30e7fe7\": container with ID starting with 85b4f6d8d9b163775c7d50072d86b1bab84b60f9b85098aeb4958da9b30e7fe7 not found: ID does not exist" containerID="85b4f6d8d9b163775c7d50072d86b1bab84b60f9b85098aeb4958da9b30e7fe7" Feb 17 17:16:36 crc kubenswrapper[4672]: I0217 17:16:36.160612 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85b4f6d8d9b163775c7d50072d86b1bab84b60f9b85098aeb4958da9b30e7fe7"} err="failed to get container status \"85b4f6d8d9b163775c7d50072d86b1bab84b60f9b85098aeb4958da9b30e7fe7\": rpc error: code = NotFound desc = could not find container \"85b4f6d8d9b163775c7d50072d86b1bab84b60f9b85098aeb4958da9b30e7fe7\": container with ID starting with 85b4f6d8d9b163775c7d50072d86b1bab84b60f9b85098aeb4958da9b30e7fe7 not found: ID does not exist" Feb 17 17:16:36 crc kubenswrapper[4672]: I0217 17:16:36.160630 4672 scope.go:117] "RemoveContainer" containerID="e27083d52ed5c1d5fb8f382c347946a511720df20de70b247e835d41d3c07d96" Feb 17 17:16:36 crc 
kubenswrapper[4672]: E0217 17:16:36.161129 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e27083d52ed5c1d5fb8f382c347946a511720df20de70b247e835d41d3c07d96\": container with ID starting with e27083d52ed5c1d5fb8f382c347946a511720df20de70b247e835d41d3c07d96 not found: ID does not exist" containerID="e27083d52ed5c1d5fb8f382c347946a511720df20de70b247e835d41d3c07d96" Feb 17 17:16:36 crc kubenswrapper[4672]: I0217 17:16:36.161217 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e27083d52ed5c1d5fb8f382c347946a511720df20de70b247e835d41d3c07d96"} err="failed to get container status \"e27083d52ed5c1d5fb8f382c347946a511720df20de70b247e835d41d3c07d96\": rpc error: code = NotFound desc = could not find container \"e27083d52ed5c1d5fb8f382c347946a511720df20de70b247e835d41d3c07d96\": container with ID starting with e27083d52ed5c1d5fb8f382c347946a511720df20de70b247e835d41d3c07d96 not found: ID does not exist" Feb 17 17:16:36 crc kubenswrapper[4672]: I0217 17:16:36.945101 4672 scope.go:117] "RemoveContainer" containerID="3ab9a5bb7bcca494f2468e60d90c6fbb76c4f0fac85c8d2f812ff26f9021b93f" Feb 17 17:16:36 crc kubenswrapper[4672]: E0217 17:16:36.945470 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:16:37 crc kubenswrapper[4672]: I0217 17:16:37.956981 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7abddea-5b20-4b02-ae80-aabf9bf66bfd" path="/var/lib/kubelet/pods/c7abddea-5b20-4b02-ae80-aabf9bf66bfd/volumes" Feb 17 17:16:45 crc 
kubenswrapper[4672]: E0217 17:16:45.947991 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:16:46 crc kubenswrapper[4672]: E0217 17:16:46.946608 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:16:49 crc kubenswrapper[4672]: I0217 17:16:49.945689 4672 scope.go:117] "RemoveContainer" containerID="3ab9a5bb7bcca494f2468e60d90c6fbb76c4f0fac85c8d2f812ff26f9021b93f" Feb 17 17:16:49 crc kubenswrapper[4672]: E0217 17:16:49.946483 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:16:56 crc kubenswrapper[4672]: E0217 17:16:56.947163 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:16:58 crc kubenswrapper[4672]: E0217 17:16:58.946890 4672 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:17:04 crc kubenswrapper[4672]: I0217 17:17:04.946008 4672 scope.go:117] "RemoveContainer" containerID="3ab9a5bb7bcca494f2468e60d90c6fbb76c4f0fac85c8d2f812ff26f9021b93f" Feb 17 17:17:04 crc kubenswrapper[4672]: E0217 17:17:04.947627 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:17:09 crc kubenswrapper[4672]: E0217 17:17:09.948135 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:17:10 crc kubenswrapper[4672]: E0217 17:17:10.948659 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:17:19 crc kubenswrapper[4672]: I0217 17:17:19.949081 4672 scope.go:117] "RemoveContainer" containerID="3ab9a5bb7bcca494f2468e60d90c6fbb76c4f0fac85c8d2f812ff26f9021b93f" Feb 17 17:17:19 crc 
kubenswrapper[4672]: E0217 17:17:19.950078 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:17:23 crc kubenswrapper[4672]: E0217 17:17:23.947662 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:17:25 crc kubenswrapper[4672]: E0217 17:17:25.948251 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:17:32 crc kubenswrapper[4672]: I0217 17:17:32.945639 4672 scope.go:117] "RemoveContainer" containerID="3ab9a5bb7bcca494f2468e60d90c6fbb76c4f0fac85c8d2f812ff26f9021b93f" Feb 17 17:17:32 crc kubenswrapper[4672]: E0217 17:17:32.946347 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:17:34 crc 
kubenswrapper[4672]: E0217 17:17:34.947317 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:17:37 crc kubenswrapper[4672]: E0217 17:17:37.947855 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:17:44 crc kubenswrapper[4672]: I0217 17:17:44.945207 4672 scope.go:117] "RemoveContainer" containerID="3ab9a5bb7bcca494f2468e60d90c6fbb76c4f0fac85c8d2f812ff26f9021b93f" Feb 17 17:17:44 crc kubenswrapper[4672]: E0217 17:17:44.946027 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:17:48 crc kubenswrapper[4672]: E0217 17:17:48.948413 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:17:48 crc kubenswrapper[4672]: E0217 17:17:48.948644 4672 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:17:56 crc kubenswrapper[4672]: I0217 17:17:56.945142 4672 scope.go:117] "RemoveContainer" containerID="3ab9a5bb7bcca494f2468e60d90c6fbb76c4f0fac85c8d2f812ff26f9021b93f" Feb 17 17:17:56 crc kubenswrapper[4672]: E0217 17:17:56.946089 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:18:01 crc kubenswrapper[4672]: E0217 17:18:01.956775 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:18:03 crc kubenswrapper[4672]: E0217 17:18:03.955393 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:18:09 crc kubenswrapper[4672]: I0217 17:18:09.945681 4672 scope.go:117] "RemoveContainer" containerID="3ab9a5bb7bcca494f2468e60d90c6fbb76c4f0fac85c8d2f812ff26f9021b93f" Feb 17 17:18:09 crc 
kubenswrapper[4672]: E0217 17:18:09.946579 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:18:12 crc kubenswrapper[4672]: E0217 17:18:12.948359 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:18:16 crc kubenswrapper[4672]: E0217 17:18:16.947573 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:18:20 crc kubenswrapper[4672]: I0217 17:18:20.944781 4672 scope.go:117] "RemoveContainer" containerID="3ab9a5bb7bcca494f2468e60d90c6fbb76c4f0fac85c8d2f812ff26f9021b93f" Feb 17 17:18:20 crc kubenswrapper[4672]: E0217 17:18:20.945504 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:18:24 crc 
kubenswrapper[4672]: E0217 17:18:24.949106 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:18:28 crc kubenswrapper[4672]: E0217 17:18:28.947541 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:18:33 crc kubenswrapper[4672]: I0217 17:18:33.265230 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vs74f"] Feb 17 17:18:33 crc kubenswrapper[4672]: E0217 17:18:33.266913 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7abddea-5b20-4b02-ae80-aabf9bf66bfd" containerName="registry-server" Feb 17 17:18:33 crc kubenswrapper[4672]: I0217 17:18:33.266993 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7abddea-5b20-4b02-ae80-aabf9bf66bfd" containerName="registry-server" Feb 17 17:18:33 crc kubenswrapper[4672]: E0217 17:18:33.267057 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7abddea-5b20-4b02-ae80-aabf9bf66bfd" containerName="extract-utilities" Feb 17 17:18:33 crc kubenswrapper[4672]: I0217 17:18:33.267109 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7abddea-5b20-4b02-ae80-aabf9bf66bfd" containerName="extract-utilities" Feb 17 17:18:33 crc kubenswrapper[4672]: E0217 17:18:33.267184 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7abddea-5b20-4b02-ae80-aabf9bf66bfd" containerName="extract-content" Feb 17 17:18:33 crc 
kubenswrapper[4672]: I0217 17:18:33.267233 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7abddea-5b20-4b02-ae80-aabf9bf66bfd" containerName="extract-content" Feb 17 17:18:33 crc kubenswrapper[4672]: I0217 17:18:33.267483 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7abddea-5b20-4b02-ae80-aabf9bf66bfd" containerName="registry-server" Feb 17 17:18:33 crc kubenswrapper[4672]: I0217 17:18:33.269202 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vs74f" Feb 17 17:18:33 crc kubenswrapper[4672]: I0217 17:18:33.281561 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vs74f"] Feb 17 17:18:33 crc kubenswrapper[4672]: I0217 17:18:33.332040 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4-catalog-content\") pod \"redhat-operators-vs74f\" (UID: \"eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4\") " pod="openshift-marketplace/redhat-operators-vs74f" Feb 17 17:18:33 crc kubenswrapper[4672]: I0217 17:18:33.332108 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psql8\" (UniqueName: \"kubernetes.io/projected/eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4-kube-api-access-psql8\") pod \"redhat-operators-vs74f\" (UID: \"eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4\") " pod="openshift-marketplace/redhat-operators-vs74f" Feb 17 17:18:33 crc kubenswrapper[4672]: I0217 17:18:33.332274 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4-utilities\") pod \"redhat-operators-vs74f\" (UID: \"eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4\") " pod="openshift-marketplace/redhat-operators-vs74f" Feb 17 17:18:33 crc 
kubenswrapper[4672]: I0217 17:18:33.433726 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4-utilities\") pod \"redhat-operators-vs74f\" (UID: \"eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4\") " pod="openshift-marketplace/redhat-operators-vs74f" Feb 17 17:18:33 crc kubenswrapper[4672]: I0217 17:18:33.433794 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4-catalog-content\") pod \"redhat-operators-vs74f\" (UID: \"eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4\") " pod="openshift-marketplace/redhat-operators-vs74f" Feb 17 17:18:33 crc kubenswrapper[4672]: I0217 17:18:33.433838 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psql8\" (UniqueName: \"kubernetes.io/projected/eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4-kube-api-access-psql8\") pod \"redhat-operators-vs74f\" (UID: \"eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4\") " pod="openshift-marketplace/redhat-operators-vs74f" Feb 17 17:18:33 crc kubenswrapper[4672]: I0217 17:18:33.434346 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4-utilities\") pod \"redhat-operators-vs74f\" (UID: \"eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4\") " pod="openshift-marketplace/redhat-operators-vs74f" Feb 17 17:18:33 crc kubenswrapper[4672]: I0217 17:18:33.434333 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4-catalog-content\") pod \"redhat-operators-vs74f\" (UID: \"eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4\") " pod="openshift-marketplace/redhat-operators-vs74f" Feb 17 17:18:33 crc kubenswrapper[4672]: I0217 17:18:33.463235 4672 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psql8\" (UniqueName: \"kubernetes.io/projected/eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4-kube-api-access-psql8\") pod \"redhat-operators-vs74f\" (UID: \"eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4\") " pod="openshift-marketplace/redhat-operators-vs74f" Feb 17 17:18:33 crc kubenswrapper[4672]: I0217 17:18:33.598398 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vs74f" Feb 17 17:18:34 crc kubenswrapper[4672]: I0217 17:18:34.107925 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vs74f"] Feb 17 17:18:34 crc kubenswrapper[4672]: I0217 17:18:34.243051 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vs74f" event={"ID":"eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4","Type":"ContainerStarted","Data":"66819421812aba8973950b773506987ae98a8afc24ccc2d5a44730610c458ff4"} Feb 17 17:18:35 crc kubenswrapper[4672]: I0217 17:18:35.262090 4672 generic.go:334] "Generic (PLEG): container finished" podID="eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4" containerID="5004087a9e2e04029dce266675e96ce0423dbdcb09a24d0672f068a371f678ab" exitCode=0 Feb 17 17:18:35 crc kubenswrapper[4672]: I0217 17:18:35.262154 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vs74f" event={"ID":"eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4","Type":"ContainerDied","Data":"5004087a9e2e04029dce266675e96ce0423dbdcb09a24d0672f068a371f678ab"} Feb 17 17:18:35 crc kubenswrapper[4672]: I0217 17:18:35.945736 4672 scope.go:117] "RemoveContainer" containerID="3ab9a5bb7bcca494f2468e60d90c6fbb76c4f0fac85c8d2f812ff26f9021b93f" Feb 17 17:18:35 crc kubenswrapper[4672]: E0217 17:18:35.946522 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:18:37 crc kubenswrapper[4672]: I0217 17:18:37.281592 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vs74f" event={"ID":"eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4","Type":"ContainerStarted","Data":"d64e46250d8716eacd842de561f81e971c730d4425010939bef7812137695b0b"} Feb 17 17:18:39 crc kubenswrapper[4672]: E0217 17:18:39.947768 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:18:40 crc kubenswrapper[4672]: I0217 17:18:40.314684 4672 generic.go:334] "Generic (PLEG): container finished" podID="eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4" containerID="d64e46250d8716eacd842de561f81e971c730d4425010939bef7812137695b0b" exitCode=0 Feb 17 17:18:40 crc kubenswrapper[4672]: I0217 17:18:40.314745 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vs74f" event={"ID":"eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4","Type":"ContainerDied","Data":"d64e46250d8716eacd842de561f81e971c730d4425010939bef7812137695b0b"} Feb 17 17:18:41 crc kubenswrapper[4672]: I0217 17:18:41.327312 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vs74f" event={"ID":"eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4","Type":"ContainerStarted","Data":"32623493dae9761335ad045ecf3389a055d191d180fb072eb76d3f2974849df4"} Feb 17 17:18:41 crc kubenswrapper[4672]: I0217 17:18:41.359338 4672 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vs74f" podStartSLOduration=2.890785713 podStartE2EDuration="8.359314109s" podCreationTimestamp="2026-02-17 17:18:33 +0000 UTC" firstStartedPulling="2026-02-17 17:18:35.264738792 +0000 UTC m=+4524.018827524" lastFinishedPulling="2026-02-17 17:18:40.733267188 +0000 UTC m=+4529.487355920" observedRunningTime="2026-02-17 17:18:41.349498629 +0000 UTC m=+4530.103587371" watchObservedRunningTime="2026-02-17 17:18:41.359314109 +0000 UTC m=+4530.113402841" Feb 17 17:18:43 crc kubenswrapper[4672]: I0217 17:18:43.599041 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vs74f" Feb 17 17:18:43 crc kubenswrapper[4672]: I0217 17:18:43.599311 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vs74f" Feb 17 17:18:43 crc kubenswrapper[4672]: E0217 17:18:43.949462 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:18:44 crc kubenswrapper[4672]: I0217 17:18:44.654826 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vs74f" podUID="eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4" containerName="registry-server" probeResult="failure" output=< Feb 17 17:18:44 crc kubenswrapper[4672]: timeout: failed to connect service ":50051" within 1s Feb 17 17:18:44 crc kubenswrapper[4672]: > Feb 17 17:18:46 crc kubenswrapper[4672]: I0217 17:18:46.945678 4672 scope.go:117] "RemoveContainer" containerID="3ab9a5bb7bcca494f2468e60d90c6fbb76c4f0fac85c8d2f812ff26f9021b93f" Feb 17 17:18:46 crc kubenswrapper[4672]: E0217 17:18:46.946345 4672 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:18:50 crc kubenswrapper[4672]: E0217 17:18:50.947817 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:18:53 crc kubenswrapper[4672]: I0217 17:18:53.657401 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vs74f" Feb 17 17:18:53 crc kubenswrapper[4672]: I0217 17:18:53.723021 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vs74f" Feb 17 17:18:53 crc kubenswrapper[4672]: I0217 17:18:53.910881 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vs74f"] Feb 17 17:18:55 crc kubenswrapper[4672]: I0217 17:18:55.448942 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vs74f" podUID="eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4" containerName="registry-server" containerID="cri-o://32623493dae9761335ad045ecf3389a055d191d180fb072eb76d3f2974849df4" gracePeriod=2 Feb 17 17:18:56 crc kubenswrapper[4672]: I0217 17:18:56.139338 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vs74f" Feb 17 17:18:56 crc kubenswrapper[4672]: I0217 17:18:56.255854 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psql8\" (UniqueName: \"kubernetes.io/projected/eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4-kube-api-access-psql8\") pod \"eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4\" (UID: \"eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4\") " Feb 17 17:18:56 crc kubenswrapper[4672]: I0217 17:18:56.256054 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4-catalog-content\") pod \"eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4\" (UID: \"eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4\") " Feb 17 17:18:56 crc kubenswrapper[4672]: I0217 17:18:56.256159 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4-utilities\") pod \"eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4\" (UID: \"eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4\") " Feb 17 17:18:56 crc kubenswrapper[4672]: I0217 17:18:56.256826 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4-utilities" (OuterVolumeSpecName: "utilities") pod "eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4" (UID: "eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:18:56 crc kubenswrapper[4672]: I0217 17:18:56.263781 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4-kube-api-access-psql8" (OuterVolumeSpecName: "kube-api-access-psql8") pod "eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4" (UID: "eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4"). InnerVolumeSpecName "kube-api-access-psql8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:18:56 crc kubenswrapper[4672]: I0217 17:18:56.358739 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psql8\" (UniqueName: \"kubernetes.io/projected/eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4-kube-api-access-psql8\") on node \"crc\" DevicePath \"\"" Feb 17 17:18:56 crc kubenswrapper[4672]: I0217 17:18:56.359066 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:18:56 crc kubenswrapper[4672]: I0217 17:18:56.387946 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4" (UID: "eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:18:56 crc kubenswrapper[4672]: I0217 17:18:56.461083 4672 generic.go:334] "Generic (PLEG): container finished" podID="eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4" containerID="32623493dae9761335ad045ecf3389a055d191d180fb072eb76d3f2974849df4" exitCode=0 Feb 17 17:18:56 crc kubenswrapper[4672]: I0217 17:18:56.461125 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vs74f" event={"ID":"eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4","Type":"ContainerDied","Data":"32623493dae9761335ad045ecf3389a055d191d180fb072eb76d3f2974849df4"} Feb 17 17:18:56 crc kubenswrapper[4672]: I0217 17:18:56.461152 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vs74f" event={"ID":"eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4","Type":"ContainerDied","Data":"66819421812aba8973950b773506987ae98a8afc24ccc2d5a44730610c458ff4"} Feb 17 17:18:56 crc kubenswrapper[4672]: I0217 17:18:56.461149 
4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vs74f" Feb 17 17:18:56 crc kubenswrapper[4672]: I0217 17:18:56.461169 4672 scope.go:117] "RemoveContainer" containerID="32623493dae9761335ad045ecf3389a055d191d180fb072eb76d3f2974849df4" Feb 17 17:18:56 crc kubenswrapper[4672]: I0217 17:18:56.462405 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:18:56 crc kubenswrapper[4672]: I0217 17:18:56.485179 4672 scope.go:117] "RemoveContainer" containerID="d64e46250d8716eacd842de561f81e971c730d4425010939bef7812137695b0b" Feb 17 17:18:56 crc kubenswrapper[4672]: I0217 17:18:56.505991 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vs74f"] Feb 17 17:18:56 crc kubenswrapper[4672]: I0217 17:18:56.515893 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vs74f"] Feb 17 17:18:56 crc kubenswrapper[4672]: I0217 17:18:56.516956 4672 scope.go:117] "RemoveContainer" containerID="5004087a9e2e04029dce266675e96ce0423dbdcb09a24d0672f068a371f678ab" Feb 17 17:18:56 crc kubenswrapper[4672]: I0217 17:18:56.561148 4672 scope.go:117] "RemoveContainer" containerID="32623493dae9761335ad045ecf3389a055d191d180fb072eb76d3f2974849df4" Feb 17 17:18:56 crc kubenswrapper[4672]: E0217 17:18:56.561689 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32623493dae9761335ad045ecf3389a055d191d180fb072eb76d3f2974849df4\": container with ID starting with 32623493dae9761335ad045ecf3389a055d191d180fb072eb76d3f2974849df4 not found: ID does not exist" containerID="32623493dae9761335ad045ecf3389a055d191d180fb072eb76d3f2974849df4" Feb 17 17:18:56 crc kubenswrapper[4672]: I0217 17:18:56.561744 4672 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32623493dae9761335ad045ecf3389a055d191d180fb072eb76d3f2974849df4"} err="failed to get container status \"32623493dae9761335ad045ecf3389a055d191d180fb072eb76d3f2974849df4\": rpc error: code = NotFound desc = could not find container \"32623493dae9761335ad045ecf3389a055d191d180fb072eb76d3f2974849df4\": container with ID starting with 32623493dae9761335ad045ecf3389a055d191d180fb072eb76d3f2974849df4 not found: ID does not exist" Feb 17 17:18:56 crc kubenswrapper[4672]: I0217 17:18:56.561780 4672 scope.go:117] "RemoveContainer" containerID="d64e46250d8716eacd842de561f81e971c730d4425010939bef7812137695b0b" Feb 17 17:18:56 crc kubenswrapper[4672]: E0217 17:18:56.562578 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d64e46250d8716eacd842de561f81e971c730d4425010939bef7812137695b0b\": container with ID starting with d64e46250d8716eacd842de561f81e971c730d4425010939bef7812137695b0b not found: ID does not exist" containerID="d64e46250d8716eacd842de561f81e971c730d4425010939bef7812137695b0b" Feb 17 17:18:56 crc kubenswrapper[4672]: I0217 17:18:56.562621 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d64e46250d8716eacd842de561f81e971c730d4425010939bef7812137695b0b"} err="failed to get container status \"d64e46250d8716eacd842de561f81e971c730d4425010939bef7812137695b0b\": rpc error: code = NotFound desc = could not find container \"d64e46250d8716eacd842de561f81e971c730d4425010939bef7812137695b0b\": container with ID starting with d64e46250d8716eacd842de561f81e971c730d4425010939bef7812137695b0b not found: ID does not exist" Feb 17 17:18:56 crc kubenswrapper[4672]: I0217 17:18:56.562648 4672 scope.go:117] "RemoveContainer" containerID="5004087a9e2e04029dce266675e96ce0423dbdcb09a24d0672f068a371f678ab" Feb 17 17:18:56 crc kubenswrapper[4672]: E0217 
17:18:56.562956 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5004087a9e2e04029dce266675e96ce0423dbdcb09a24d0672f068a371f678ab\": container with ID starting with 5004087a9e2e04029dce266675e96ce0423dbdcb09a24d0672f068a371f678ab not found: ID does not exist" containerID="5004087a9e2e04029dce266675e96ce0423dbdcb09a24d0672f068a371f678ab" Feb 17 17:18:56 crc kubenswrapper[4672]: I0217 17:18:56.563000 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5004087a9e2e04029dce266675e96ce0423dbdcb09a24d0672f068a371f678ab"} err="failed to get container status \"5004087a9e2e04029dce266675e96ce0423dbdcb09a24d0672f068a371f678ab\": rpc error: code = NotFound desc = could not find container \"5004087a9e2e04029dce266675e96ce0423dbdcb09a24d0672f068a371f678ab\": container with ID starting with 5004087a9e2e04029dce266675e96ce0423dbdcb09a24d0672f068a371f678ab not found: ID does not exist" Feb 17 17:18:56 crc kubenswrapper[4672]: E0217 17:18:56.947406 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:18:57 crc kubenswrapper[4672]: I0217 17:18:57.982277 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4" path="/var/lib/kubelet/pods/eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4/volumes" Feb 17 17:18:58 crc kubenswrapper[4672]: I0217 17:18:58.945499 4672 scope.go:117] "RemoveContainer" containerID="3ab9a5bb7bcca494f2468e60d90c6fbb76c4f0fac85c8d2f812ff26f9021b93f" Feb 17 17:18:58 crc kubenswrapper[4672]: E0217 17:18:58.946137 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:19:01 crc kubenswrapper[4672]: I0217 17:19:01.956605 4672 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 17:19:02 crc kubenswrapper[4672]: E0217 17:19:02.064455 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 17:19:02 crc kubenswrapper[4672]: E0217 17:19:02.064506 4672 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 17:19:02 crc kubenswrapper[4672]: E0217 17:19:02.064622 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nq9ps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:n
il,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-qrhj8_openstack(dc5471f5-2491-4841-be45-09c8f14b35c0): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:19:02 crc kubenswrapper[4672]: E0217 17:19:02.065914 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:19:09 crc kubenswrapper[4672]: I0217 17:19:09.945335 4672 scope.go:117] "RemoveContainer" containerID="3ab9a5bb7bcca494f2468e60d90c6fbb76c4f0fac85c8d2f812ff26f9021b93f" Feb 17 17:19:09 crc kubenswrapper[4672]: E0217 17:19:09.946318 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:19:11 crc kubenswrapper[4672]: E0217 17:19:11.077834 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:19:11 crc kubenswrapper[4672]: E0217 17:19:11.078197 4672 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:19:11 crc kubenswrapper[4672]: E0217 17:19:11.078340 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66h7h644h64ch5f8h565hfch5dh56chfdh8hfdh5b5h567h6dh665h557h74h665hcbh96h659h554h589h57fh5d9h55h564hcfh5dhffhfdq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tx4bs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(9e58ce9b-ddd5-42bb-8e07-08a22c8871a5): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:19:11 crc kubenswrapper[4672]: E0217 17:19:11.079571 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:19:16 crc kubenswrapper[4672]: E0217 17:19:16.946945 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:19:20 crc kubenswrapper[4672]: I0217 17:19:20.944893 4672 scope.go:117] "RemoveContainer" containerID="3ab9a5bb7bcca494f2468e60d90c6fbb76c4f0fac85c8d2f812ff26f9021b93f" Feb 17 17:19:20 crc kubenswrapper[4672]: E0217 17:19:20.945657 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:19:24 crc kubenswrapper[4672]: E0217 17:19:24.947731 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:19:31 crc kubenswrapper[4672]: E0217 17:19:31.960969 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" 
podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:19:33 crc kubenswrapper[4672]: I0217 17:19:33.945177 4672 scope.go:117] "RemoveContainer" containerID="3ab9a5bb7bcca494f2468e60d90c6fbb76c4f0fac85c8d2f812ff26f9021b93f" Feb 17 17:19:33 crc kubenswrapper[4672]: E0217 17:19:33.945672 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:19:39 crc kubenswrapper[4672]: E0217 17:19:39.948916 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:19:44 crc kubenswrapper[4672]: I0217 17:19:44.945946 4672 scope.go:117] "RemoveContainer" containerID="3ab9a5bb7bcca494f2468e60d90c6fbb76c4f0fac85c8d2f812ff26f9021b93f" Feb 17 17:19:44 crc kubenswrapper[4672]: E0217 17:19:44.946639 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:19:44 crc kubenswrapper[4672]: E0217 17:19:44.947239 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:19:51 crc kubenswrapper[4672]: E0217 17:19:51.958322 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:19:58 crc kubenswrapper[4672]: I0217 17:19:58.945243 4672 scope.go:117] "RemoveContainer" containerID="3ab9a5bb7bcca494f2468e60d90c6fbb76c4f0fac85c8d2f812ff26f9021b93f" Feb 17 17:19:58 crc kubenswrapper[4672]: E0217 17:19:58.945984 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:19:58 crc kubenswrapper[4672]: E0217 17:19:58.948470 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:20:06 crc kubenswrapper[4672]: E0217 17:20:06.947474 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:20:13 crc kubenswrapper[4672]: I0217 17:20:13.945676 4672 scope.go:117] "RemoveContainer" containerID="3ab9a5bb7bcca494f2468e60d90c6fbb76c4f0fac85c8d2f812ff26f9021b93f"
Feb 17 17:20:13 crc kubenswrapper[4672]: E0217 17:20:13.946742 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 17:20:13 crc kubenswrapper[4672]: E0217 17:20:13.949828 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:20:19 crc kubenswrapper[4672]: E0217 17:20:19.946745 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:20:26 crc kubenswrapper[4672]: I0217 17:20:26.945802 4672 scope.go:117] "RemoveContainer" containerID="3ab9a5bb7bcca494f2468e60d90c6fbb76c4f0fac85c8d2f812ff26f9021b93f"
Feb 17 17:20:26 crc kubenswrapper[4672]: E0217 17:20:26.946565 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 17:20:27 crc kubenswrapper[4672]: E0217 17:20:27.947725 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:20:29 crc kubenswrapper[4672]: I0217 17:20:29.665597 4672 trace.go:236] Trace[2091261491]: "Calculate volume metrics of storage for pod minio-dev/minio" (17-Feb-2026 17:20:28.642) (total time: 1022ms):
Feb 17 17:20:29 crc kubenswrapper[4672]: Trace[2091261491]: [1.022797243s] [1.022797243s] END
Feb 17 17:20:32 crc kubenswrapper[4672]: E0217 17:20:32.959040 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:20:38 crc kubenswrapper[4672]: E0217 17:20:38.948731 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:20:41 crc kubenswrapper[4672]: I0217 17:20:41.951033 4672 scope.go:117] "RemoveContainer" containerID="3ab9a5bb7bcca494f2468e60d90c6fbb76c4f0fac85c8d2f812ff26f9021b93f"
Feb 17 17:20:41 crc kubenswrapper[4672]: E0217 17:20:41.951854 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 17:20:44 crc kubenswrapper[4672]: E0217 17:20:44.948155 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:20:52 crc kubenswrapper[4672]: E0217 17:20:52.947103 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:20:55 crc kubenswrapper[4672]: I0217 17:20:55.945457 4672 scope.go:117] "RemoveContainer" containerID="3ab9a5bb7bcca494f2468e60d90c6fbb76c4f0fac85c8d2f812ff26f9021b93f"
Feb 17 17:20:55 crc kubenswrapper[4672]: E0217 17:20:55.946308 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 17:20:59 crc kubenswrapper[4672]: E0217 17:20:59.948218 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:21:04 crc kubenswrapper[4672]: I0217 17:21:04.288430 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g67w2"]
Feb 17 17:21:04 crc kubenswrapper[4672]: E0217 17:21:04.290291 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4" containerName="registry-server"
Feb 17 17:21:04 crc kubenswrapper[4672]: I0217 17:21:04.290384 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4" containerName="registry-server"
Feb 17 17:21:04 crc kubenswrapper[4672]: E0217 17:21:04.290451 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4" containerName="extract-content"
Feb 17 17:21:04 crc kubenswrapper[4672]: I0217 17:21:04.290500 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4" containerName="extract-content"
Feb 17 17:21:04 crc kubenswrapper[4672]: E0217 17:21:04.290587 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4" containerName="extract-utilities"
Feb 17 17:21:04 crc kubenswrapper[4672]: I0217 17:21:04.290637 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4" containerName="extract-utilities"
Feb 17 17:21:04 crc kubenswrapper[4672]: I0217 17:21:04.290891 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea2304b-b9a9-4f8c-85d4-9a364b0bf1e4" containerName="registry-server"
Feb 17 17:21:04 crc kubenswrapper[4672]: I0217 17:21:04.292425 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g67w2"
Feb 17 17:21:04 crc kubenswrapper[4672]: I0217 17:21:04.301440 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g67w2"]
Feb 17 17:21:04 crc kubenswrapper[4672]: I0217 17:21:04.417963 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba8cc55f-3331-4d74-aa1c-30a5efcb9063-catalog-content\") pod \"certified-operators-g67w2\" (UID: \"ba8cc55f-3331-4d74-aa1c-30a5efcb9063\") " pod="openshift-marketplace/certified-operators-g67w2"
Feb 17 17:21:04 crc kubenswrapper[4672]: I0217 17:21:04.418024 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6svp8\" (UniqueName: \"kubernetes.io/projected/ba8cc55f-3331-4d74-aa1c-30a5efcb9063-kube-api-access-6svp8\") pod \"certified-operators-g67w2\" (UID: \"ba8cc55f-3331-4d74-aa1c-30a5efcb9063\") " pod="openshift-marketplace/certified-operators-g67w2"
Feb 17 17:21:04 crc kubenswrapper[4672]: I0217 17:21:04.418116 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba8cc55f-3331-4d74-aa1c-30a5efcb9063-utilities\") pod \"certified-operators-g67w2\" (UID: \"ba8cc55f-3331-4d74-aa1c-30a5efcb9063\") " pod="openshift-marketplace/certified-operators-g67w2"
Feb 17 17:21:04 crc kubenswrapper[4672]: I0217 17:21:04.519655 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba8cc55f-3331-4d74-aa1c-30a5efcb9063-catalog-content\") pod \"certified-operators-g67w2\" (UID: \"ba8cc55f-3331-4d74-aa1c-30a5efcb9063\") " pod="openshift-marketplace/certified-operators-g67w2"
Feb 17 17:21:04 crc kubenswrapper[4672]: I0217 17:21:04.519718 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6svp8\" (UniqueName: \"kubernetes.io/projected/ba8cc55f-3331-4d74-aa1c-30a5efcb9063-kube-api-access-6svp8\") pod \"certified-operators-g67w2\" (UID: \"ba8cc55f-3331-4d74-aa1c-30a5efcb9063\") " pod="openshift-marketplace/certified-operators-g67w2"
Feb 17 17:21:04 crc kubenswrapper[4672]: I0217 17:21:04.519812 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba8cc55f-3331-4d74-aa1c-30a5efcb9063-utilities\") pod \"certified-operators-g67w2\" (UID: \"ba8cc55f-3331-4d74-aa1c-30a5efcb9063\") " pod="openshift-marketplace/certified-operators-g67w2"
Feb 17 17:21:04 crc kubenswrapper[4672]: I0217 17:21:04.520264 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba8cc55f-3331-4d74-aa1c-30a5efcb9063-catalog-content\") pod \"certified-operators-g67w2\" (UID: \"ba8cc55f-3331-4d74-aa1c-30a5efcb9063\") " pod="openshift-marketplace/certified-operators-g67w2"
Feb 17 17:21:04 crc kubenswrapper[4672]: I0217 17:21:04.520323 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba8cc55f-3331-4d74-aa1c-30a5efcb9063-utilities\") pod \"certified-operators-g67w2\" (UID: \"ba8cc55f-3331-4d74-aa1c-30a5efcb9063\") " pod="openshift-marketplace/certified-operators-g67w2"
Feb 17 17:21:04 crc kubenswrapper[4672]: I0217 17:21:04.760475 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6svp8\" (UniqueName: \"kubernetes.io/projected/ba8cc55f-3331-4d74-aa1c-30a5efcb9063-kube-api-access-6svp8\") pod \"certified-operators-g67w2\" (UID: \"ba8cc55f-3331-4d74-aa1c-30a5efcb9063\") " pod="openshift-marketplace/certified-operators-g67w2"
Feb 17 17:21:04 crc kubenswrapper[4672]: I0217 17:21:04.931008 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g67w2"
Feb 17 17:21:05 crc kubenswrapper[4672]: I0217 17:21:05.397724 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g67w2"]
Feb 17 17:21:05 crc kubenswrapper[4672]: I0217 17:21:05.656538 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g67w2" event={"ID":"ba8cc55f-3331-4d74-aa1c-30a5efcb9063","Type":"ContainerStarted","Data":"6fa464aa580276b19264e5c6c059beb081b58baf9916fee37e77d047d94c4bef"}
Feb 17 17:21:05 crc kubenswrapper[4672]: E0217 17:21:05.947652 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:21:06 crc kubenswrapper[4672]: I0217 17:21:06.670637 4672 generic.go:334] "Generic (PLEG): container finished" podID="ba8cc55f-3331-4d74-aa1c-30a5efcb9063" containerID="56269fd697a174b2a3d9c3b84662c49d6e4db1332053a8d4332afc0d9d237824" exitCode=0
Feb 17 17:21:06 crc kubenswrapper[4672]: I0217 17:21:06.670721 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g67w2" event={"ID":"ba8cc55f-3331-4d74-aa1c-30a5efcb9063","Type":"ContainerDied","Data":"56269fd697a174b2a3d9c3b84662c49d6e4db1332053a8d4332afc0d9d237824"}
Feb 17 17:21:07 crc kubenswrapper[4672]: I0217 17:21:07.684631 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g67w2" event={"ID":"ba8cc55f-3331-4d74-aa1c-30a5efcb9063","Type":"ContainerStarted","Data":"3b218e2ac89aae5507b8f1453529748c4dc4899a36adf985bd44927aa53c7f4c"}
Feb 17 17:21:09 crc kubenswrapper[4672]: I0217 17:21:09.708835 4672 generic.go:334] "Generic (PLEG): container finished" podID="ba8cc55f-3331-4d74-aa1c-30a5efcb9063" containerID="3b218e2ac89aae5507b8f1453529748c4dc4899a36adf985bd44927aa53c7f4c" exitCode=0
Feb 17 17:21:09 crc kubenswrapper[4672]: I0217 17:21:09.708876 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g67w2" event={"ID":"ba8cc55f-3331-4d74-aa1c-30a5efcb9063","Type":"ContainerDied","Data":"3b218e2ac89aae5507b8f1453529748c4dc4899a36adf985bd44927aa53c7f4c"}
Feb 17 17:21:09 crc kubenswrapper[4672]: I0217 17:21:09.946580 4672 scope.go:117] "RemoveContainer" containerID="3ab9a5bb7bcca494f2468e60d90c6fbb76c4f0fac85c8d2f812ff26f9021b93f"
Feb 17 17:21:10 crc kubenswrapper[4672]: I0217 17:21:10.724702 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerStarted","Data":"aa9de33917b7512b8f4e9b4de1c674b9cb275a2571d728832df984331852db30"}
Feb 17 17:21:10 crc kubenswrapper[4672]: I0217 17:21:10.731709 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g67w2" event={"ID":"ba8cc55f-3331-4d74-aa1c-30a5efcb9063","Type":"ContainerStarted","Data":"78541d0166b1fb8406dc01294d736e673a3a120ccbfce3704bab15735270a447"}
Feb 17 17:21:12 crc kubenswrapper[4672]: E0217 17:21:12.947458 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:21:14 crc kubenswrapper[4672]: I0217 17:21:14.931285 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g67w2"
Feb 17 17:21:14 crc kubenswrapper[4672]: I0217 17:21:14.932131 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g67w2"
Feb 17 17:21:14 crc kubenswrapper[4672]: I0217 17:21:14.992505 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g67w2"
Feb 17 17:21:15 crc kubenswrapper[4672]: I0217 17:21:15.017393 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g67w2" podStartSLOduration=7.492881057 podStartE2EDuration="11.017376832s" podCreationTimestamp="2026-02-17 17:21:04 +0000 UTC" firstStartedPulling="2026-02-17 17:21:06.67310053 +0000 UTC m=+4675.427189292" lastFinishedPulling="2026-02-17 17:21:10.197596335 +0000 UTC m=+4678.951685067" observedRunningTime="2026-02-17 17:21:10.775068389 +0000 UTC m=+4679.529157141" watchObservedRunningTime="2026-02-17 17:21:15.017376832 +0000 UTC m=+4683.771465564"
Feb 17 17:21:15 crc kubenswrapper[4672]: I0217 17:21:15.904318 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g67w2"
Feb 17 17:21:15 crc kubenswrapper[4672]: I0217 17:21:15.955634 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g67w2"]
Feb 17 17:21:16 crc kubenswrapper[4672]: E0217 17:21:16.947339 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:21:17 crc kubenswrapper[4672]: I0217 17:21:17.822809 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g67w2" podUID="ba8cc55f-3331-4d74-aa1c-30a5efcb9063" containerName="registry-server" containerID="cri-o://78541d0166b1fb8406dc01294d736e673a3a120ccbfce3704bab15735270a447" gracePeriod=2
Feb 17 17:21:18 crc kubenswrapper[4672]: I0217 17:21:18.494286 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g67w2"
Feb 17 17:21:18 crc kubenswrapper[4672]: I0217 17:21:18.640248 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6svp8\" (UniqueName: \"kubernetes.io/projected/ba8cc55f-3331-4d74-aa1c-30a5efcb9063-kube-api-access-6svp8\") pod \"ba8cc55f-3331-4d74-aa1c-30a5efcb9063\" (UID: \"ba8cc55f-3331-4d74-aa1c-30a5efcb9063\") "
Feb 17 17:21:18 crc kubenswrapper[4672]: I0217 17:21:18.640643 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba8cc55f-3331-4d74-aa1c-30a5efcb9063-catalog-content\") pod \"ba8cc55f-3331-4d74-aa1c-30a5efcb9063\" (UID: \"ba8cc55f-3331-4d74-aa1c-30a5efcb9063\") "
Feb 17 17:21:18 crc kubenswrapper[4672]: I0217 17:21:18.640802 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba8cc55f-3331-4d74-aa1c-30a5efcb9063-utilities\") pod \"ba8cc55f-3331-4d74-aa1c-30a5efcb9063\" (UID: \"ba8cc55f-3331-4d74-aa1c-30a5efcb9063\") "
Feb 17 17:21:18 crc kubenswrapper[4672]: I0217 17:21:18.641966 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba8cc55f-3331-4d74-aa1c-30a5efcb9063-utilities" (OuterVolumeSpecName: "utilities") pod "ba8cc55f-3331-4d74-aa1c-30a5efcb9063" (UID: "ba8cc55f-3331-4d74-aa1c-30a5efcb9063"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 17:21:18 crc kubenswrapper[4672]: I0217 17:21:18.642895 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba8cc55f-3331-4d74-aa1c-30a5efcb9063-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 17:21:18 crc kubenswrapper[4672]: I0217 17:21:18.648179 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba8cc55f-3331-4d74-aa1c-30a5efcb9063-kube-api-access-6svp8" (OuterVolumeSpecName: "kube-api-access-6svp8") pod "ba8cc55f-3331-4d74-aa1c-30a5efcb9063" (UID: "ba8cc55f-3331-4d74-aa1c-30a5efcb9063"). InnerVolumeSpecName "kube-api-access-6svp8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 17:21:18 crc kubenswrapper[4672]: I0217 17:21:18.713669 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba8cc55f-3331-4d74-aa1c-30a5efcb9063-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba8cc55f-3331-4d74-aa1c-30a5efcb9063" (UID: "ba8cc55f-3331-4d74-aa1c-30a5efcb9063"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 17:21:18 crc kubenswrapper[4672]: I0217 17:21:18.745129 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba8cc55f-3331-4d74-aa1c-30a5efcb9063-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 17:21:18 crc kubenswrapper[4672]: I0217 17:21:18.745170 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6svp8\" (UniqueName: \"kubernetes.io/projected/ba8cc55f-3331-4d74-aa1c-30a5efcb9063-kube-api-access-6svp8\") on node \"crc\" DevicePath \"\""
Feb 17 17:21:18 crc kubenswrapper[4672]: I0217 17:21:18.834678 4672 generic.go:334] "Generic (PLEG): container finished" podID="ba8cc55f-3331-4d74-aa1c-30a5efcb9063" containerID="78541d0166b1fb8406dc01294d736e673a3a120ccbfce3704bab15735270a447" exitCode=0
Feb 17 17:21:18 crc kubenswrapper[4672]: I0217 17:21:18.834770 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g67w2"
Feb 17 17:21:18 crc kubenswrapper[4672]: I0217 17:21:18.834771 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g67w2" event={"ID":"ba8cc55f-3331-4d74-aa1c-30a5efcb9063","Type":"ContainerDied","Data":"78541d0166b1fb8406dc01294d736e673a3a120ccbfce3704bab15735270a447"}
Feb 17 17:21:18 crc kubenswrapper[4672]: I0217 17:21:18.835787 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g67w2" event={"ID":"ba8cc55f-3331-4d74-aa1c-30a5efcb9063","Type":"ContainerDied","Data":"6fa464aa580276b19264e5c6c059beb081b58baf9916fee37e77d047d94c4bef"}
Feb 17 17:21:18 crc kubenswrapper[4672]: I0217 17:21:18.835813 4672 scope.go:117] "RemoveContainer" containerID="78541d0166b1fb8406dc01294d736e673a3a120ccbfce3704bab15735270a447"
Feb 17 17:21:18 crc kubenswrapper[4672]: I0217 17:21:18.879720 4672 scope.go:117] "RemoveContainer" containerID="3b218e2ac89aae5507b8f1453529748c4dc4899a36adf985bd44927aa53c7f4c"
Feb 17 17:21:18 crc kubenswrapper[4672]: I0217 17:21:18.881251 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g67w2"]
Feb 17 17:21:18 crc kubenswrapper[4672]: I0217 17:21:18.895469 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g67w2"]
Feb 17 17:21:18 crc kubenswrapper[4672]: I0217 17:21:18.921963 4672 scope.go:117] "RemoveContainer" containerID="56269fd697a174b2a3d9c3b84662c49d6e4db1332053a8d4332afc0d9d237824"
Feb 17 17:21:18 crc kubenswrapper[4672]: I0217 17:21:18.975854 4672 scope.go:117] "RemoveContainer" containerID="78541d0166b1fb8406dc01294d736e673a3a120ccbfce3704bab15735270a447"
Feb 17 17:21:18 crc kubenswrapper[4672]: E0217 17:21:18.976581 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78541d0166b1fb8406dc01294d736e673a3a120ccbfce3704bab15735270a447\": container with ID starting with 78541d0166b1fb8406dc01294d736e673a3a120ccbfce3704bab15735270a447 not found: ID does not exist" containerID="78541d0166b1fb8406dc01294d736e673a3a120ccbfce3704bab15735270a447"
Feb 17 17:21:18 crc kubenswrapper[4672]: I0217 17:21:18.976626 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78541d0166b1fb8406dc01294d736e673a3a120ccbfce3704bab15735270a447"} err="failed to get container status \"78541d0166b1fb8406dc01294d736e673a3a120ccbfce3704bab15735270a447\": rpc error: code = NotFound desc = could not find container \"78541d0166b1fb8406dc01294d736e673a3a120ccbfce3704bab15735270a447\": container with ID starting with 78541d0166b1fb8406dc01294d736e673a3a120ccbfce3704bab15735270a447 not found: ID does not exist"
Feb 17 17:21:18 crc kubenswrapper[4672]: I0217 17:21:18.976655 4672 scope.go:117] "RemoveContainer" containerID="3b218e2ac89aae5507b8f1453529748c4dc4899a36adf985bd44927aa53c7f4c"
Feb 17 17:21:18 crc kubenswrapper[4672]: E0217 17:21:18.977067 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b218e2ac89aae5507b8f1453529748c4dc4899a36adf985bd44927aa53c7f4c\": container with ID starting with 3b218e2ac89aae5507b8f1453529748c4dc4899a36adf985bd44927aa53c7f4c not found: ID does not exist" containerID="3b218e2ac89aae5507b8f1453529748c4dc4899a36adf985bd44927aa53c7f4c"
Feb 17 17:21:18 crc kubenswrapper[4672]: I0217 17:21:18.977098 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b218e2ac89aae5507b8f1453529748c4dc4899a36adf985bd44927aa53c7f4c"} err="failed to get container status \"3b218e2ac89aae5507b8f1453529748c4dc4899a36adf985bd44927aa53c7f4c\": rpc error: code = NotFound desc = could not find container \"3b218e2ac89aae5507b8f1453529748c4dc4899a36adf985bd44927aa53c7f4c\": container with ID starting with 3b218e2ac89aae5507b8f1453529748c4dc4899a36adf985bd44927aa53c7f4c not found: ID does not exist"
Feb 17 17:21:18 crc kubenswrapper[4672]: I0217 17:21:18.977117 4672 scope.go:117] "RemoveContainer" containerID="56269fd697a174b2a3d9c3b84662c49d6e4db1332053a8d4332afc0d9d237824"
Feb 17 17:21:18 crc kubenswrapper[4672]: E0217 17:21:18.977394 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56269fd697a174b2a3d9c3b84662c49d6e4db1332053a8d4332afc0d9d237824\": container with ID starting with 56269fd697a174b2a3d9c3b84662c49d6e4db1332053a8d4332afc0d9d237824 not found: ID does not exist" containerID="56269fd697a174b2a3d9c3b84662c49d6e4db1332053a8d4332afc0d9d237824"
Feb 17 17:21:18 crc kubenswrapper[4672]: I0217 17:21:18.977415 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56269fd697a174b2a3d9c3b84662c49d6e4db1332053a8d4332afc0d9d237824"} err="failed to get container status \"56269fd697a174b2a3d9c3b84662c49d6e4db1332053a8d4332afc0d9d237824\": rpc error: code = NotFound desc = could not find container \"56269fd697a174b2a3d9c3b84662c49d6e4db1332053a8d4332afc0d9d237824\": container with ID starting with 56269fd697a174b2a3d9c3b84662c49d6e4db1332053a8d4332afc0d9d237824 not found: ID does not exist"
Feb 17 17:21:19 crc kubenswrapper[4672]: I0217 17:21:19.960369 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba8cc55f-3331-4d74-aa1c-30a5efcb9063" path="/var/lib/kubelet/pods/ba8cc55f-3331-4d74-aa1c-30a5efcb9063/volumes"
Feb 17 17:21:23 crc kubenswrapper[4672]: E0217 17:21:23.947845 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:21:29 crc kubenswrapper[4672]: E0217 17:21:29.946968 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:21:33 crc kubenswrapper[4672]: I0217 17:21:33.031103 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld2fq"]
Feb 17 17:21:33 crc kubenswrapper[4672]: E0217 17:21:33.031863 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba8cc55f-3331-4d74-aa1c-30a5efcb9063" containerName="extract-utilities"
Feb 17 17:21:33 crc kubenswrapper[4672]: I0217 17:21:33.031879 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba8cc55f-3331-4d74-aa1c-30a5efcb9063" containerName="extract-utilities"
Feb 17 17:21:33 crc kubenswrapper[4672]: E0217 17:21:33.031921 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba8cc55f-3331-4d74-aa1c-30a5efcb9063" containerName="extract-content"
Feb 17 17:21:33 crc kubenswrapper[4672]: I0217 17:21:33.031929 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba8cc55f-3331-4d74-aa1c-30a5efcb9063" containerName="extract-content"
Feb 17 17:21:33 crc kubenswrapper[4672]: E0217 17:21:33.031949 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba8cc55f-3331-4d74-aa1c-30a5efcb9063" containerName="registry-server"
Feb 17 17:21:33 crc kubenswrapper[4672]: I0217 17:21:33.031956 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba8cc55f-3331-4d74-aa1c-30a5efcb9063" containerName="registry-server"
Feb 17 17:21:33 crc kubenswrapper[4672]: I0217 17:21:33.032160 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba8cc55f-3331-4d74-aa1c-30a5efcb9063" containerName="registry-server"
Feb 17 17:21:33 crc kubenswrapper[4672]: I0217 17:21:33.033000 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld2fq"
Feb 17 17:21:33 crc kubenswrapper[4672]: I0217 17:21:33.035697 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 17 17:21:33 crc kubenswrapper[4672]: I0217 17:21:33.036165 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 17 17:21:33 crc kubenswrapper[4672]: I0217 17:21:33.036263 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z6sng"
Feb 17 17:21:33 crc kubenswrapper[4672]: I0217 17:21:33.036262 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 17 17:21:33 crc kubenswrapper[4672]: I0217 17:21:33.057892 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld2fq"]
Feb 17 17:21:33 crc kubenswrapper[4672]: I0217 17:21:33.151141 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6162f29e-528b-4131-9e8d-1391db930dd5-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ld2fq\" (UID: \"6162f29e-528b-4131-9e8d-1391db930dd5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld2fq"
Feb 17 17:21:33 crc kubenswrapper[4672]: I0217 17:21:33.151214 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x25v\" (UniqueName: \"kubernetes.io/projected/6162f29e-528b-4131-9e8d-1391db930dd5-kube-api-access-7x25v\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ld2fq\" (UID: \"6162f29e-528b-4131-9e8d-1391db930dd5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld2fq"
Feb 17 17:21:33 crc kubenswrapper[4672]: I0217 17:21:33.151475 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6162f29e-528b-4131-9e8d-1391db930dd5-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ld2fq\" (UID: \"6162f29e-528b-4131-9e8d-1391db930dd5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld2fq"
Feb 17 17:21:33 crc kubenswrapper[4672]: I0217 17:21:33.253077 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6162f29e-528b-4131-9e8d-1391db930dd5-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ld2fq\" (UID: \"6162f29e-528b-4131-9e8d-1391db930dd5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld2fq"
Feb 17 17:21:33 crc kubenswrapper[4672]: I0217 17:21:33.253293 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6162f29e-528b-4131-9e8d-1391db930dd5-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ld2fq\" (UID: \"6162f29e-528b-4131-9e8d-1391db930dd5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld2fq"
Feb 17 17:21:33 crc kubenswrapper[4672]: I0217 17:21:33.253333 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x25v\" (UniqueName: \"kubernetes.io/projected/6162f29e-528b-4131-9e8d-1391db930dd5-kube-api-access-7x25v\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ld2fq\" (UID: \"6162f29e-528b-4131-9e8d-1391db930dd5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld2fq"
Feb 17 17:21:33 crc kubenswrapper[4672]: I0217 17:21:33.259414 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6162f29e-528b-4131-9e8d-1391db930dd5-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ld2fq\" (UID: \"6162f29e-528b-4131-9e8d-1391db930dd5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld2fq"
Feb 17 17:21:33 crc kubenswrapper[4672]: I0217 17:21:33.276997 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6162f29e-528b-4131-9e8d-1391db930dd5-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ld2fq\" (UID: \"6162f29e-528b-4131-9e8d-1391db930dd5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld2fq"
Feb 17 17:21:33 crc kubenswrapper[4672]: I0217 17:21:33.278055 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x25v\" (UniqueName: \"kubernetes.io/projected/6162f29e-528b-4131-9e8d-1391db930dd5-kube-api-access-7x25v\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ld2fq\" (UID: \"6162f29e-528b-4131-9e8d-1391db930dd5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld2fq"
Feb 17 17:21:33 crc kubenswrapper[4672]: I0217 17:21:33.355963 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld2fq"
Feb 17 17:21:33 crc kubenswrapper[4672]: I0217 17:21:33.916492 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld2fq"]
Feb 17 17:21:33 crc kubenswrapper[4672]: I0217 17:21:33.995343 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld2fq" event={"ID":"6162f29e-528b-4131-9e8d-1391db930dd5","Type":"ContainerStarted","Data":"77a4c5742149de094de3c17db98f77a3f135bae115512ba0824819a3bd87f5e6"}
Feb 17 17:21:35 crc kubenswrapper[4672]: I0217 17:21:35.006729 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld2fq" event={"ID":"6162f29e-528b-4131-9e8d-1391db930dd5","Type":"ContainerStarted","Data":"be45882887b3e1e6f0573f44f9ef2e1d6e469b1eef7d2f240e6338c66277c918"}
Feb 17 17:21:35 crc kubenswrapper[4672]: I0217 17:21:35.037753 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld2fq" podStartSLOduration=1.576067954 podStartE2EDuration="2.037730077s" podCreationTimestamp="2026-02-17 17:21:33 +0000 UTC" firstStartedPulling="2026-02-17 17:21:33.922982147 +0000 UTC m=+4702.677070889" lastFinishedPulling="2026-02-17 17:21:34.38464428 +0000 UTC m=+4703.138733012" observedRunningTime="2026-02-17 17:21:35.025745048 +0000 UTC m=+4703.779833780" watchObservedRunningTime="2026-02-17 17:21:35.037730077 +0000 UTC m=+4703.791818809"
Feb 17 17:21:37 crc kubenswrapper[4672]: E0217 17:21:37.948183 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0"
podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:21:42 crc kubenswrapper[4672]: E0217 17:21:42.948078 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:21:51 crc kubenswrapper[4672]: E0217 17:21:51.953285 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:21:55 crc kubenswrapper[4672]: E0217 17:21:55.947248 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:22:04 crc kubenswrapper[4672]: E0217 17:22:04.947195 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:22:08 crc kubenswrapper[4672]: E0217 17:22:08.947685 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:22:18 crc kubenswrapper[4672]: E0217 17:22:18.946565 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:22:20 crc kubenswrapper[4672]: E0217 17:22:20.947188 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:22:31 crc kubenswrapper[4672]: E0217 17:22:31.953005 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:22:32 crc kubenswrapper[4672]: E0217 17:22:32.947659 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:22:44 crc kubenswrapper[4672]: E0217 17:22:44.947755 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:22:44 crc kubenswrapper[4672]: E0217 17:22:44.947826 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:22:56 crc kubenswrapper[4672]: E0217 17:22:56.946848 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:22:58 crc kubenswrapper[4672]: E0217 17:22:58.947726 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:23:10 crc kubenswrapper[4672]: E0217 17:23:10.946986 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:23:13 crc kubenswrapper[4672]: E0217 17:23:13.947074 4672 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:23:22 crc kubenswrapper[4672]: E0217 17:23:22.950272 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:23:27 crc kubenswrapper[4672]: I0217 17:23:27.565542 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:23:27 crc kubenswrapper[4672]: I0217 17:23:27.566329 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:23:28 crc kubenswrapper[4672]: E0217 17:23:28.949119 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:23:35 crc kubenswrapper[4672]: E0217 17:23:35.947882 4672 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:23:40 crc kubenswrapper[4672]: E0217 17:23:40.947543 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:23:49 crc kubenswrapper[4672]: E0217 17:23:49.947232 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:23:52 crc kubenswrapper[4672]: E0217 17:23:52.948646 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:23:57 crc kubenswrapper[4672]: I0217 17:23:57.566292 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:23:57 crc kubenswrapper[4672]: I0217 17:23:57.567551 4672 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:24:01 crc kubenswrapper[4672]: E0217 17:24:01.947466 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:24:03 crc kubenswrapper[4672]: E0217 17:24:03.947017 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:24:13 crc kubenswrapper[4672]: I0217 17:24:13.949167 4672 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 17:24:14 crc kubenswrapper[4672]: E0217 17:24:14.090683 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 17:24:14 crc kubenswrapper[4672]: E0217 17:24:14.090760 4672 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 17:24:14 crc kubenswrapper[4672]: E0217 17:24:14.090902 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},Volume
Mount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nq9ps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-qrhj8_openstack(dc5471f5-2491-4841-be45-09c8f14b35c0): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 17 17:24:14 crc kubenswrapper[4672]: E0217 17:24:14.092731 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:24:14 crc kubenswrapper[4672]: I0217 17:24:14.165098 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v29jn"] Feb 17 17:24:14 crc kubenswrapper[4672]: I0217 17:24:14.167113 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v29jn" Feb 17 17:24:14 crc kubenswrapper[4672]: I0217 17:24:14.183500 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v29jn"] Feb 17 17:24:14 crc kubenswrapper[4672]: I0217 17:24:14.293624 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jcjq\" (UniqueName: \"kubernetes.io/projected/3ab9dc89-5639-423d-a95f-3deba1971171-kube-api-access-5jcjq\") pod \"redhat-marketplace-v29jn\" (UID: \"3ab9dc89-5639-423d-a95f-3deba1971171\") " pod="openshift-marketplace/redhat-marketplace-v29jn" Feb 17 17:24:14 crc kubenswrapper[4672]: I0217 17:24:14.293748 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ab9dc89-5639-423d-a95f-3deba1971171-utilities\") pod \"redhat-marketplace-v29jn\" (UID: \"3ab9dc89-5639-423d-a95f-3deba1971171\") " 
pod="openshift-marketplace/redhat-marketplace-v29jn" Feb 17 17:24:14 crc kubenswrapper[4672]: I0217 17:24:14.294216 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ab9dc89-5639-423d-a95f-3deba1971171-catalog-content\") pod \"redhat-marketplace-v29jn\" (UID: \"3ab9dc89-5639-423d-a95f-3deba1971171\") " pod="openshift-marketplace/redhat-marketplace-v29jn" Feb 17 17:24:14 crc kubenswrapper[4672]: I0217 17:24:14.396588 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ab9dc89-5639-423d-a95f-3deba1971171-utilities\") pod \"redhat-marketplace-v29jn\" (UID: \"3ab9dc89-5639-423d-a95f-3deba1971171\") " pod="openshift-marketplace/redhat-marketplace-v29jn" Feb 17 17:24:14 crc kubenswrapper[4672]: I0217 17:24:14.397056 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ab9dc89-5639-423d-a95f-3deba1971171-catalog-content\") pod \"redhat-marketplace-v29jn\" (UID: \"3ab9dc89-5639-423d-a95f-3deba1971171\") " pod="openshift-marketplace/redhat-marketplace-v29jn" Feb 17 17:24:14 crc kubenswrapper[4672]: I0217 17:24:14.397123 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jcjq\" (UniqueName: \"kubernetes.io/projected/3ab9dc89-5639-423d-a95f-3deba1971171-kube-api-access-5jcjq\") pod \"redhat-marketplace-v29jn\" (UID: \"3ab9dc89-5639-423d-a95f-3deba1971171\") " pod="openshift-marketplace/redhat-marketplace-v29jn" Feb 17 17:24:14 crc kubenswrapper[4672]: I0217 17:24:14.397165 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ab9dc89-5639-423d-a95f-3deba1971171-utilities\") pod \"redhat-marketplace-v29jn\" (UID: \"3ab9dc89-5639-423d-a95f-3deba1971171\") " 
pod="openshift-marketplace/redhat-marketplace-v29jn" Feb 17 17:24:14 crc kubenswrapper[4672]: I0217 17:24:14.397570 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ab9dc89-5639-423d-a95f-3deba1971171-catalog-content\") pod \"redhat-marketplace-v29jn\" (UID: \"3ab9dc89-5639-423d-a95f-3deba1971171\") " pod="openshift-marketplace/redhat-marketplace-v29jn" Feb 17 17:24:14 crc kubenswrapper[4672]: I0217 17:24:14.429169 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jcjq\" (UniqueName: \"kubernetes.io/projected/3ab9dc89-5639-423d-a95f-3deba1971171-kube-api-access-5jcjq\") pod \"redhat-marketplace-v29jn\" (UID: \"3ab9dc89-5639-423d-a95f-3deba1971171\") " pod="openshift-marketplace/redhat-marketplace-v29jn" Feb 17 17:24:14 crc kubenswrapper[4672]: I0217 17:24:14.487486 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v29jn" Feb 17 17:24:14 crc kubenswrapper[4672]: I0217 17:24:14.988296 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v29jn"] Feb 17 17:24:15 crc kubenswrapper[4672]: E0217 17:24:15.034969 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:24:15 crc kubenswrapper[4672]: E0217 17:24:15.035026 4672 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:24:15 crc kubenswrapper[4672]: E0217 17:24:15.035152 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66h7h644h64ch5f8h565hfch5dh56chfdh8hfdh5b5h567h6dh665h557h74h665hcbh96h659h554h589h57fh5d9h55h564hcfh5dhffhfdq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-b
undle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tx4bs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(9e58ce9b-ddd5-42bb-8e07-08a22c8871a5): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 17 17:24:15 crc kubenswrapper[4672]: E0217 17:24:15.036476 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:24:15 crc kubenswrapper[4672]: I0217 17:24:15.629156 4672 generic.go:334] "Generic (PLEG): container finished" podID="3ab9dc89-5639-423d-a95f-3deba1971171" containerID="a16aa62fa2f996035e4ffbd9dca6215a93eb9dd0eab29aab9b689bb870ad196d" exitCode=0 Feb 17 17:24:15 crc kubenswrapper[4672]: I0217 17:24:15.629205 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v29jn" event={"ID":"3ab9dc89-5639-423d-a95f-3deba1971171","Type":"ContainerDied","Data":"a16aa62fa2f996035e4ffbd9dca6215a93eb9dd0eab29aab9b689bb870ad196d"} Feb 17 17:24:15 crc kubenswrapper[4672]: I0217 17:24:15.629587 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v29jn" event={"ID":"3ab9dc89-5639-423d-a95f-3deba1971171","Type":"ContainerStarted","Data":"5261cf81ab2ebbd2ca6d0f619895337c1cb049d63f497e07b0b97072743bc420"} Feb 17 17:24:16 crc kubenswrapper[4672]: I0217 17:24:16.639902 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v29jn" event={"ID":"3ab9dc89-5639-423d-a95f-3deba1971171","Type":"ContainerStarted","Data":"94667dd531059b378e802d3989503fcec108c2d180c0e1df5fd0ed103ae6d960"} Feb 17 17:24:17 crc kubenswrapper[4672]: I0217 17:24:17.652234 4672 generic.go:334] "Generic (PLEG): container 
finished" podID="3ab9dc89-5639-423d-a95f-3deba1971171" containerID="94667dd531059b378e802d3989503fcec108c2d180c0e1df5fd0ed103ae6d960" exitCode=0 Feb 17 17:24:17 crc kubenswrapper[4672]: I0217 17:24:17.652290 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v29jn" event={"ID":"3ab9dc89-5639-423d-a95f-3deba1971171","Type":"ContainerDied","Data":"94667dd531059b378e802d3989503fcec108c2d180c0e1df5fd0ed103ae6d960"} Feb 17 17:24:18 crc kubenswrapper[4672]: I0217 17:24:18.668442 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v29jn" event={"ID":"3ab9dc89-5639-423d-a95f-3deba1971171","Type":"ContainerStarted","Data":"8794785114b01e31e1db95bca287264d838660ecc1f33a8fd3993f1b22ca6c37"} Feb 17 17:24:18 crc kubenswrapper[4672]: I0217 17:24:18.686610 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v29jn" podStartSLOduration=2.13201157 podStartE2EDuration="4.686595035s" podCreationTimestamp="2026-02-17 17:24:14 +0000 UTC" firstStartedPulling="2026-02-17 17:24:15.631323843 +0000 UTC m=+4864.385412575" lastFinishedPulling="2026-02-17 17:24:18.185907308 +0000 UTC m=+4866.939996040" observedRunningTime="2026-02-17 17:24:18.68493894 +0000 UTC m=+4867.439027672" watchObservedRunningTime="2026-02-17 17:24:18.686595035 +0000 UTC m=+4867.440683767" Feb 17 17:24:24 crc kubenswrapper[4672]: I0217 17:24:24.487816 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v29jn" Feb 17 17:24:24 crc kubenswrapper[4672]: I0217 17:24:24.488438 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v29jn" Feb 17 17:24:24 crc kubenswrapper[4672]: I0217 17:24:24.544212 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v29jn" Feb 17 
17:24:24 crc kubenswrapper[4672]: I0217 17:24:24.775656 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v29jn" Feb 17 17:24:24 crc kubenswrapper[4672]: I0217 17:24:24.826938 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v29jn"] Feb 17 17:24:25 crc kubenswrapper[4672]: E0217 17:24:25.948409 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:24:26 crc kubenswrapper[4672]: I0217 17:24:26.739923 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-v29jn" podUID="3ab9dc89-5639-423d-a95f-3deba1971171" containerName="registry-server" containerID="cri-o://8794785114b01e31e1db95bca287264d838660ecc1f33a8fd3993f1b22ca6c37" gracePeriod=2 Feb 17 17:24:26 crc kubenswrapper[4672]: E0217 17:24:26.802967 4672 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ab9dc89_5639_423d_a95f_3deba1971171.slice/crio-8794785114b01e31e1db95bca287264d838660ecc1f33a8fd3993f1b22ca6c37.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ab9dc89_5639_423d_a95f_3deba1971171.slice/crio-conmon-8794785114b01e31e1db95bca287264d838660ecc1f33a8fd3993f1b22ca6c37.scope\": RecentStats: unable to find data in memory cache]" Feb 17 17:24:27 crc kubenswrapper[4672]: I0217 17:24:27.319667 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v29jn" Feb 17 17:24:27 crc kubenswrapper[4672]: I0217 17:24:27.385542 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ab9dc89-5639-423d-a95f-3deba1971171-utilities\") pod \"3ab9dc89-5639-423d-a95f-3deba1971171\" (UID: \"3ab9dc89-5639-423d-a95f-3deba1971171\") " Feb 17 17:24:27 crc kubenswrapper[4672]: I0217 17:24:27.385607 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ab9dc89-5639-423d-a95f-3deba1971171-catalog-content\") pod \"3ab9dc89-5639-423d-a95f-3deba1971171\" (UID: \"3ab9dc89-5639-423d-a95f-3deba1971171\") " Feb 17 17:24:27 crc kubenswrapper[4672]: I0217 17:24:27.385639 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jcjq\" (UniqueName: \"kubernetes.io/projected/3ab9dc89-5639-423d-a95f-3deba1971171-kube-api-access-5jcjq\") pod \"3ab9dc89-5639-423d-a95f-3deba1971171\" (UID: \"3ab9dc89-5639-423d-a95f-3deba1971171\") " Feb 17 17:24:27 crc kubenswrapper[4672]: I0217 17:24:27.386830 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ab9dc89-5639-423d-a95f-3deba1971171-utilities" (OuterVolumeSpecName: "utilities") pod "3ab9dc89-5639-423d-a95f-3deba1971171" (UID: "3ab9dc89-5639-423d-a95f-3deba1971171"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:24:27 crc kubenswrapper[4672]: I0217 17:24:27.396538 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ab9dc89-5639-423d-a95f-3deba1971171-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:24:27 crc kubenswrapper[4672]: I0217 17:24:27.398041 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab9dc89-5639-423d-a95f-3deba1971171-kube-api-access-5jcjq" (OuterVolumeSpecName: "kube-api-access-5jcjq") pod "3ab9dc89-5639-423d-a95f-3deba1971171" (UID: "3ab9dc89-5639-423d-a95f-3deba1971171"). InnerVolumeSpecName "kube-api-access-5jcjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:24:27 crc kubenswrapper[4672]: I0217 17:24:27.434248 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ab9dc89-5639-423d-a95f-3deba1971171-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3ab9dc89-5639-423d-a95f-3deba1971171" (UID: "3ab9dc89-5639-423d-a95f-3deba1971171"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:24:27 crc kubenswrapper[4672]: I0217 17:24:27.498890 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ab9dc89-5639-423d-a95f-3deba1971171-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:24:27 crc kubenswrapper[4672]: I0217 17:24:27.498937 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jcjq\" (UniqueName: \"kubernetes.io/projected/3ab9dc89-5639-423d-a95f-3deba1971171-kube-api-access-5jcjq\") on node \"crc\" DevicePath \"\"" Feb 17 17:24:27 crc kubenswrapper[4672]: I0217 17:24:27.566446 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:24:27 crc kubenswrapper[4672]: I0217 17:24:27.566523 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:24:27 crc kubenswrapper[4672]: I0217 17:24:27.566573 4672 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" Feb 17 17:24:27 crc kubenswrapper[4672]: I0217 17:24:27.567364 4672 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aa9de33917b7512b8f4e9b4de1c674b9cb275a2571d728832df984331852db30"} pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 
17 17:24:27 crc kubenswrapper[4672]: I0217 17:24:27.567436 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" containerID="cri-o://aa9de33917b7512b8f4e9b4de1c674b9cb275a2571d728832df984331852db30" gracePeriod=600 Feb 17 17:24:27 crc kubenswrapper[4672]: I0217 17:24:27.820371 4672 generic.go:334] "Generic (PLEG): container finished" podID="3ab9dc89-5639-423d-a95f-3deba1971171" containerID="8794785114b01e31e1db95bca287264d838660ecc1f33a8fd3993f1b22ca6c37" exitCode=0 Feb 17 17:24:27 crc kubenswrapper[4672]: I0217 17:24:27.820532 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v29jn" event={"ID":"3ab9dc89-5639-423d-a95f-3deba1971171","Type":"ContainerDied","Data":"8794785114b01e31e1db95bca287264d838660ecc1f33a8fd3993f1b22ca6c37"} Feb 17 17:24:27 crc kubenswrapper[4672]: I0217 17:24:27.820749 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v29jn" event={"ID":"3ab9dc89-5639-423d-a95f-3deba1971171","Type":"ContainerDied","Data":"5261cf81ab2ebbd2ca6d0f619895337c1cb049d63f497e07b0b97072743bc420"} Feb 17 17:24:27 crc kubenswrapper[4672]: I0217 17:24:27.820769 4672 scope.go:117] "RemoveContainer" containerID="8794785114b01e31e1db95bca287264d838660ecc1f33a8fd3993f1b22ca6c37" Feb 17 17:24:27 crc kubenswrapper[4672]: I0217 17:24:27.820607 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v29jn" Feb 17 17:24:27 crc kubenswrapper[4672]: I0217 17:24:27.824967 4672 generic.go:334] "Generic (PLEG): container finished" podID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerID="aa9de33917b7512b8f4e9b4de1c674b9cb275a2571d728832df984331852db30" exitCode=0 Feb 17 17:24:27 crc kubenswrapper[4672]: I0217 17:24:27.825014 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerDied","Data":"aa9de33917b7512b8f4e9b4de1c674b9cb275a2571d728832df984331852db30"} Feb 17 17:24:27 crc kubenswrapper[4672]: I0217 17:24:27.858822 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v29jn"] Feb 17 17:24:27 crc kubenswrapper[4672]: I0217 17:24:27.867460 4672 scope.go:117] "RemoveContainer" containerID="94667dd531059b378e802d3989503fcec108c2d180c0e1df5fd0ed103ae6d960" Feb 17 17:24:27 crc kubenswrapper[4672]: I0217 17:24:27.868843 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-v29jn"] Feb 17 17:24:27 crc kubenswrapper[4672]: I0217 17:24:27.924900 4672 scope.go:117] "RemoveContainer" containerID="a16aa62fa2f996035e4ffbd9dca6215a93eb9dd0eab29aab9b689bb870ad196d" Feb 17 17:24:27 crc kubenswrapper[4672]: I0217 17:24:27.956657 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab9dc89-5639-423d-a95f-3deba1971171" path="/var/lib/kubelet/pods/3ab9dc89-5639-423d-a95f-3deba1971171/volumes" Feb 17 17:24:27 crc kubenswrapper[4672]: I0217 17:24:27.973157 4672 scope.go:117] "RemoveContainer" containerID="8794785114b01e31e1db95bca287264d838660ecc1f33a8fd3993f1b22ca6c37" Feb 17 17:24:27 crc kubenswrapper[4672]: E0217 17:24:27.973605 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8794785114b01e31e1db95bca287264d838660ecc1f33a8fd3993f1b22ca6c37\": container with ID starting with 8794785114b01e31e1db95bca287264d838660ecc1f33a8fd3993f1b22ca6c37 not found: ID does not exist" containerID="8794785114b01e31e1db95bca287264d838660ecc1f33a8fd3993f1b22ca6c37" Feb 17 17:24:27 crc kubenswrapper[4672]: I0217 17:24:27.973662 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8794785114b01e31e1db95bca287264d838660ecc1f33a8fd3993f1b22ca6c37"} err="failed to get container status \"8794785114b01e31e1db95bca287264d838660ecc1f33a8fd3993f1b22ca6c37\": rpc error: code = NotFound desc = could not find container \"8794785114b01e31e1db95bca287264d838660ecc1f33a8fd3993f1b22ca6c37\": container with ID starting with 8794785114b01e31e1db95bca287264d838660ecc1f33a8fd3993f1b22ca6c37 not found: ID does not exist" Feb 17 17:24:27 crc kubenswrapper[4672]: I0217 17:24:27.973702 4672 scope.go:117] "RemoveContainer" containerID="94667dd531059b378e802d3989503fcec108c2d180c0e1df5fd0ed103ae6d960" Feb 17 17:24:27 crc kubenswrapper[4672]: E0217 17:24:27.974023 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94667dd531059b378e802d3989503fcec108c2d180c0e1df5fd0ed103ae6d960\": container with ID starting with 94667dd531059b378e802d3989503fcec108c2d180c0e1df5fd0ed103ae6d960 not found: ID does not exist" containerID="94667dd531059b378e802d3989503fcec108c2d180c0e1df5fd0ed103ae6d960" Feb 17 17:24:27 crc kubenswrapper[4672]: I0217 17:24:27.974057 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94667dd531059b378e802d3989503fcec108c2d180c0e1df5fd0ed103ae6d960"} err="failed to get container status \"94667dd531059b378e802d3989503fcec108c2d180c0e1df5fd0ed103ae6d960\": rpc error: code = NotFound desc = could not find container \"94667dd531059b378e802d3989503fcec108c2d180c0e1df5fd0ed103ae6d960\": container with ID 
starting with 94667dd531059b378e802d3989503fcec108c2d180c0e1df5fd0ed103ae6d960 not found: ID does not exist" Feb 17 17:24:27 crc kubenswrapper[4672]: I0217 17:24:27.974077 4672 scope.go:117] "RemoveContainer" containerID="a16aa62fa2f996035e4ffbd9dca6215a93eb9dd0eab29aab9b689bb870ad196d" Feb 17 17:24:27 crc kubenswrapper[4672]: E0217 17:24:27.974409 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a16aa62fa2f996035e4ffbd9dca6215a93eb9dd0eab29aab9b689bb870ad196d\": container with ID starting with a16aa62fa2f996035e4ffbd9dca6215a93eb9dd0eab29aab9b689bb870ad196d not found: ID does not exist" containerID="a16aa62fa2f996035e4ffbd9dca6215a93eb9dd0eab29aab9b689bb870ad196d" Feb 17 17:24:27 crc kubenswrapper[4672]: I0217 17:24:27.974440 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a16aa62fa2f996035e4ffbd9dca6215a93eb9dd0eab29aab9b689bb870ad196d"} err="failed to get container status \"a16aa62fa2f996035e4ffbd9dca6215a93eb9dd0eab29aab9b689bb870ad196d\": rpc error: code = NotFound desc = could not find container \"a16aa62fa2f996035e4ffbd9dca6215a93eb9dd0eab29aab9b689bb870ad196d\": container with ID starting with a16aa62fa2f996035e4ffbd9dca6215a93eb9dd0eab29aab9b689bb870ad196d not found: ID does not exist" Feb 17 17:24:27 crc kubenswrapper[4672]: I0217 17:24:27.974456 4672 scope.go:117] "RemoveContainer" containerID="3ab9a5bb7bcca494f2468e60d90c6fbb76c4f0fac85c8d2f812ff26f9021b93f" Feb 17 17:24:28 crc kubenswrapper[4672]: I0217 17:24:28.838232 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerStarted","Data":"56e6007a8201972c49b1432e5d22e2ef9faf1c48bcd6de061f8d78425bba9eaa"} Feb 17 17:24:29 crc kubenswrapper[4672]: E0217 17:24:29.946604 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:24:37 crc kubenswrapper[4672]: E0217 17:24:37.947709 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:24:40 crc kubenswrapper[4672]: E0217 17:24:40.948699 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:24:52 crc kubenswrapper[4672]: E0217 17:24:52.947192 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:24:55 crc kubenswrapper[4672]: E0217 17:24:55.950888 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:25:07 crc kubenswrapper[4672]: E0217 17:25:07.947056 4672 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:25:10 crc kubenswrapper[4672]: E0217 17:25:10.948203 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:25:22 crc kubenswrapper[4672]: E0217 17:25:22.946168 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:25:25 crc kubenswrapper[4672]: E0217 17:25:25.947789 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:25:37 crc kubenswrapper[4672]: E0217 17:25:37.948185 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:25:38 
crc kubenswrapper[4672]: E0217 17:25:38.946824 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:25:49 crc kubenswrapper[4672]: E0217 17:25:49.948837 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:25:51 crc kubenswrapper[4672]: E0217 17:25:51.952768 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:26:00 crc kubenswrapper[4672]: E0217 17:26:00.948065 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:26:06 crc kubenswrapper[4672]: E0217 17:26:06.949972 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" 
podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:26:15 crc kubenswrapper[4672]: E0217 17:26:15.948180 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:26:19 crc kubenswrapper[4672]: E0217 17:26:19.947934 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:26:27 crc kubenswrapper[4672]: I0217 17:26:27.565772 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:26:27 crc kubenswrapper[4672]: I0217 17:26:27.566355 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:26:30 crc kubenswrapper[4672]: E0217 17:26:30.947029 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:26:32 crc kubenswrapper[4672]: E0217 17:26:32.946944 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:26:42 crc kubenswrapper[4672]: E0217 17:26:42.947929 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:26:44 crc kubenswrapper[4672]: E0217 17:26:44.947713 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:26:56 crc kubenswrapper[4672]: E0217 17:26:56.947727 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:26:57 crc kubenswrapper[4672]: I0217 17:26:57.566123 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Feb 17 17:26:57 crc kubenswrapper[4672]: I0217 17:26:57.566194 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:26:59 crc kubenswrapper[4672]: E0217 17:26:59.947086 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:27:10 crc kubenswrapper[4672]: E0217 17:27:10.953450 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:27:14 crc kubenswrapper[4672]: E0217 17:27:14.948307 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:27:23 crc kubenswrapper[4672]: I0217 17:27:23.039797 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m8jj4"] Feb 17 17:27:23 crc kubenswrapper[4672]: E0217 17:27:23.040969 4672 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3ab9dc89-5639-423d-a95f-3deba1971171" containerName="extract-content" Feb 17 17:27:23 crc kubenswrapper[4672]: I0217 17:27:23.041004 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ab9dc89-5639-423d-a95f-3deba1971171" containerName="extract-content" Feb 17 17:27:23 crc kubenswrapper[4672]: E0217 17:27:23.041031 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ab9dc89-5639-423d-a95f-3deba1971171" containerName="extract-utilities" Feb 17 17:27:23 crc kubenswrapper[4672]: I0217 17:27:23.041043 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ab9dc89-5639-423d-a95f-3deba1971171" containerName="extract-utilities" Feb 17 17:27:23 crc kubenswrapper[4672]: E0217 17:27:23.041055 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ab9dc89-5639-423d-a95f-3deba1971171" containerName="registry-server" Feb 17 17:27:23 crc kubenswrapper[4672]: I0217 17:27:23.041062 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ab9dc89-5639-423d-a95f-3deba1971171" containerName="registry-server" Feb 17 17:27:23 crc kubenswrapper[4672]: I0217 17:27:23.041340 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ab9dc89-5639-423d-a95f-3deba1971171" containerName="registry-server" Feb 17 17:27:23 crc kubenswrapper[4672]: I0217 17:27:23.043456 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m8jj4" Feb 17 17:27:23 crc kubenswrapper[4672]: I0217 17:27:23.081195 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m8jj4"] Feb 17 17:27:23 crc kubenswrapper[4672]: I0217 17:27:23.105064 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65c9e120-3bb7-459c-a35d-f41f933cbee7-utilities\") pod \"community-operators-m8jj4\" (UID: \"65c9e120-3bb7-459c-a35d-f41f933cbee7\") " pod="openshift-marketplace/community-operators-m8jj4" Feb 17 17:27:23 crc kubenswrapper[4672]: I0217 17:27:23.105726 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhkpg\" (UniqueName: \"kubernetes.io/projected/65c9e120-3bb7-459c-a35d-f41f933cbee7-kube-api-access-jhkpg\") pod \"community-operators-m8jj4\" (UID: \"65c9e120-3bb7-459c-a35d-f41f933cbee7\") " pod="openshift-marketplace/community-operators-m8jj4" Feb 17 17:27:23 crc kubenswrapper[4672]: I0217 17:27:23.105833 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65c9e120-3bb7-459c-a35d-f41f933cbee7-catalog-content\") pod \"community-operators-m8jj4\" (UID: \"65c9e120-3bb7-459c-a35d-f41f933cbee7\") " pod="openshift-marketplace/community-operators-m8jj4" Feb 17 17:27:23 crc kubenswrapper[4672]: I0217 17:27:23.207656 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhkpg\" (UniqueName: \"kubernetes.io/projected/65c9e120-3bb7-459c-a35d-f41f933cbee7-kube-api-access-jhkpg\") pod \"community-operators-m8jj4\" (UID: \"65c9e120-3bb7-459c-a35d-f41f933cbee7\") " pod="openshift-marketplace/community-operators-m8jj4" Feb 17 17:27:23 crc kubenswrapper[4672]: I0217 17:27:23.207757 4672 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65c9e120-3bb7-459c-a35d-f41f933cbee7-catalog-content\") pod \"community-operators-m8jj4\" (UID: \"65c9e120-3bb7-459c-a35d-f41f933cbee7\") " pod="openshift-marketplace/community-operators-m8jj4" Feb 17 17:27:23 crc kubenswrapper[4672]: I0217 17:27:23.207864 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65c9e120-3bb7-459c-a35d-f41f933cbee7-utilities\") pod \"community-operators-m8jj4\" (UID: \"65c9e120-3bb7-459c-a35d-f41f933cbee7\") " pod="openshift-marketplace/community-operators-m8jj4" Feb 17 17:27:23 crc kubenswrapper[4672]: I0217 17:27:23.208344 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65c9e120-3bb7-459c-a35d-f41f933cbee7-utilities\") pod \"community-operators-m8jj4\" (UID: \"65c9e120-3bb7-459c-a35d-f41f933cbee7\") " pod="openshift-marketplace/community-operators-m8jj4" Feb 17 17:27:23 crc kubenswrapper[4672]: I0217 17:27:23.208502 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65c9e120-3bb7-459c-a35d-f41f933cbee7-catalog-content\") pod \"community-operators-m8jj4\" (UID: \"65c9e120-3bb7-459c-a35d-f41f933cbee7\") " pod="openshift-marketplace/community-operators-m8jj4" Feb 17 17:27:23 crc kubenswrapper[4672]: I0217 17:27:23.231432 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhkpg\" (UniqueName: \"kubernetes.io/projected/65c9e120-3bb7-459c-a35d-f41f933cbee7-kube-api-access-jhkpg\") pod \"community-operators-m8jj4\" (UID: \"65c9e120-3bb7-459c-a35d-f41f933cbee7\") " pod="openshift-marketplace/community-operators-m8jj4" Feb 17 17:27:23 crc kubenswrapper[4672]: I0217 17:27:23.385617 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m8jj4" Feb 17 17:27:24 crc kubenswrapper[4672]: I0217 17:27:24.093950 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m8jj4"] Feb 17 17:27:24 crc kubenswrapper[4672]: I0217 17:27:24.604124 4672 generic.go:334] "Generic (PLEG): container finished" podID="65c9e120-3bb7-459c-a35d-f41f933cbee7" containerID="c53303faf1ff4fe89938945f58cf8f8dd9ba1e8da2c8a3a4dda3a53c7ebe721b" exitCode=0 Feb 17 17:27:24 crc kubenswrapper[4672]: I0217 17:27:24.604199 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8jj4" event={"ID":"65c9e120-3bb7-459c-a35d-f41f933cbee7","Type":"ContainerDied","Data":"c53303faf1ff4fe89938945f58cf8f8dd9ba1e8da2c8a3a4dda3a53c7ebe721b"} Feb 17 17:27:24 crc kubenswrapper[4672]: I0217 17:27:24.604232 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8jj4" event={"ID":"65c9e120-3bb7-459c-a35d-f41f933cbee7","Type":"ContainerStarted","Data":"e24778de103b66a36a8feb6d873e75ed8eb81e68d67a0e0dbc73da195a263aae"} Feb 17 17:27:24 crc kubenswrapper[4672]: E0217 17:27:24.947414 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:27:25 crc kubenswrapper[4672]: I0217 17:27:25.614362 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8jj4" event={"ID":"65c9e120-3bb7-459c-a35d-f41f933cbee7","Type":"ContainerStarted","Data":"ecb095f457133ec91f6c3e83105849d2718cbd8fe198b35dfccab90bac62a0c4"} Feb 17 17:27:27 crc kubenswrapper[4672]: I0217 17:27:27.565453 4672 patch_prober.go:28] interesting 
pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:27:27 crc kubenswrapper[4672]: I0217 17:27:27.565847 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:27:27 crc kubenswrapper[4672]: I0217 17:27:27.565901 4672 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" Feb 17 17:27:27 crc kubenswrapper[4672]: I0217 17:27:27.566817 4672 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"56e6007a8201972c49b1432e5d22e2ef9faf1c48bcd6de061f8d78425bba9eaa"} pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 17:27:27 crc kubenswrapper[4672]: I0217 17:27:27.566885 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" containerID="cri-o://56e6007a8201972c49b1432e5d22e2ef9faf1c48bcd6de061f8d78425bba9eaa" gracePeriod=600 Feb 17 17:27:27 crc kubenswrapper[4672]: I0217 17:27:27.636375 4672 generic.go:334] "Generic (PLEG): container finished" podID="65c9e120-3bb7-459c-a35d-f41f933cbee7" containerID="ecb095f457133ec91f6c3e83105849d2718cbd8fe198b35dfccab90bac62a0c4" exitCode=0 Feb 17 17:27:27 crc kubenswrapper[4672]: I0217 17:27:27.636409 
4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8jj4" event={"ID":"65c9e120-3bb7-459c-a35d-f41f933cbee7","Type":"ContainerDied","Data":"ecb095f457133ec91f6c3e83105849d2718cbd8fe198b35dfccab90bac62a0c4"} Feb 17 17:27:27 crc kubenswrapper[4672]: E0217 17:27:27.698059 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:27:27 crc kubenswrapper[4672]: E0217 17:27:27.946178 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:27:28 crc kubenswrapper[4672]: I0217 17:27:28.647425 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8jj4" event={"ID":"65c9e120-3bb7-459c-a35d-f41f933cbee7","Type":"ContainerStarted","Data":"62ea8739035d4dc2e26347ac96b91b9e80c97b7cccffa90d49c47142b95916a1"} Feb 17 17:27:28 crc kubenswrapper[4672]: I0217 17:27:28.658854 4672 generic.go:334] "Generic (PLEG): container finished" podID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerID="56e6007a8201972c49b1432e5d22e2ef9faf1c48bcd6de061f8d78425bba9eaa" exitCode=0 Feb 17 17:27:28 crc kubenswrapper[4672]: I0217 17:27:28.658907 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" 
event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerDied","Data":"56e6007a8201972c49b1432e5d22e2ef9faf1c48bcd6de061f8d78425bba9eaa"} Feb 17 17:27:28 crc kubenswrapper[4672]: I0217 17:27:28.658977 4672 scope.go:117] "RemoveContainer" containerID="aa9de33917b7512b8f4e9b4de1c674b9cb275a2571d728832df984331852db30" Feb 17 17:27:28 crc kubenswrapper[4672]: I0217 17:27:28.659848 4672 scope.go:117] "RemoveContainer" containerID="56e6007a8201972c49b1432e5d22e2ef9faf1c48bcd6de061f8d78425bba9eaa" Feb 17 17:27:28 crc kubenswrapper[4672]: E0217 17:27:28.660189 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:27:28 crc kubenswrapper[4672]: I0217 17:27:28.680242 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m8jj4" podStartSLOduration=2.275539995 podStartE2EDuration="5.680219406s" podCreationTimestamp="2026-02-17 17:27:23 +0000 UTC" firstStartedPulling="2026-02-17 17:27:24.606046024 +0000 UTC m=+5053.360134756" lastFinishedPulling="2026-02-17 17:27:28.010725435 +0000 UTC m=+5056.764814167" observedRunningTime="2026-02-17 17:27:28.66750751 +0000 UTC m=+5057.421596262" watchObservedRunningTime="2026-02-17 17:27:28.680219406 +0000 UTC m=+5057.434308138" Feb 17 17:27:33 crc kubenswrapper[4672]: I0217 17:27:33.386457 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m8jj4" Feb 17 17:27:33 crc kubenswrapper[4672]: I0217 17:27:33.386989 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-m8jj4" Feb 17 17:27:33 crc kubenswrapper[4672]: I0217 17:27:33.435247 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m8jj4" Feb 17 17:27:33 crc kubenswrapper[4672]: I0217 17:27:33.753079 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m8jj4" Feb 17 17:27:33 crc kubenswrapper[4672]: I0217 17:27:33.810071 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m8jj4"] Feb 17 17:27:35 crc kubenswrapper[4672]: I0217 17:27:35.725611 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m8jj4" podUID="65c9e120-3bb7-459c-a35d-f41f933cbee7" containerName="registry-server" containerID="cri-o://62ea8739035d4dc2e26347ac96b91b9e80c97b7cccffa90d49c47142b95916a1" gracePeriod=2 Feb 17 17:27:36 crc kubenswrapper[4672]: I0217 17:27:36.296880 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m8jj4" Feb 17 17:27:36 crc kubenswrapper[4672]: I0217 17:27:36.422104 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhkpg\" (UniqueName: \"kubernetes.io/projected/65c9e120-3bb7-459c-a35d-f41f933cbee7-kube-api-access-jhkpg\") pod \"65c9e120-3bb7-459c-a35d-f41f933cbee7\" (UID: \"65c9e120-3bb7-459c-a35d-f41f933cbee7\") " Feb 17 17:27:36 crc kubenswrapper[4672]: I0217 17:27:36.422303 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65c9e120-3bb7-459c-a35d-f41f933cbee7-catalog-content\") pod \"65c9e120-3bb7-459c-a35d-f41f933cbee7\" (UID: \"65c9e120-3bb7-459c-a35d-f41f933cbee7\") " Feb 17 17:27:36 crc kubenswrapper[4672]: I0217 17:27:36.422356 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65c9e120-3bb7-459c-a35d-f41f933cbee7-utilities\") pod \"65c9e120-3bb7-459c-a35d-f41f933cbee7\" (UID: \"65c9e120-3bb7-459c-a35d-f41f933cbee7\") " Feb 17 17:27:36 crc kubenswrapper[4672]: I0217 17:27:36.423577 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65c9e120-3bb7-459c-a35d-f41f933cbee7-utilities" (OuterVolumeSpecName: "utilities") pod "65c9e120-3bb7-459c-a35d-f41f933cbee7" (UID: "65c9e120-3bb7-459c-a35d-f41f933cbee7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:27:36 crc kubenswrapper[4672]: I0217 17:27:36.427239 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65c9e120-3bb7-459c-a35d-f41f933cbee7-kube-api-access-jhkpg" (OuterVolumeSpecName: "kube-api-access-jhkpg") pod "65c9e120-3bb7-459c-a35d-f41f933cbee7" (UID: "65c9e120-3bb7-459c-a35d-f41f933cbee7"). InnerVolumeSpecName "kube-api-access-jhkpg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:27:36 crc kubenswrapper[4672]: I0217 17:27:36.477090 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65c9e120-3bb7-459c-a35d-f41f933cbee7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65c9e120-3bb7-459c-a35d-f41f933cbee7" (UID: "65c9e120-3bb7-459c-a35d-f41f933cbee7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:27:36 crc kubenswrapper[4672]: I0217 17:27:36.524872 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65c9e120-3bb7-459c-a35d-f41f933cbee7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:27:36 crc kubenswrapper[4672]: I0217 17:27:36.524904 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65c9e120-3bb7-459c-a35d-f41f933cbee7-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:27:36 crc kubenswrapper[4672]: I0217 17:27:36.524914 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhkpg\" (UniqueName: \"kubernetes.io/projected/65c9e120-3bb7-459c-a35d-f41f933cbee7-kube-api-access-jhkpg\") on node \"crc\" DevicePath \"\"" Feb 17 17:27:36 crc kubenswrapper[4672]: I0217 17:27:36.748914 4672 generic.go:334] "Generic (PLEG): container finished" podID="65c9e120-3bb7-459c-a35d-f41f933cbee7" containerID="62ea8739035d4dc2e26347ac96b91b9e80c97b7cccffa90d49c47142b95916a1" exitCode=0 Feb 17 17:27:36 crc kubenswrapper[4672]: I0217 17:27:36.748955 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8jj4" event={"ID":"65c9e120-3bb7-459c-a35d-f41f933cbee7","Type":"ContainerDied","Data":"62ea8739035d4dc2e26347ac96b91b9e80c97b7cccffa90d49c47142b95916a1"} Feb 17 17:27:36 crc kubenswrapper[4672]: I0217 17:27:36.748984 4672 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-m8jj4" event={"ID":"65c9e120-3bb7-459c-a35d-f41f933cbee7","Type":"ContainerDied","Data":"e24778de103b66a36a8feb6d873e75ed8eb81e68d67a0e0dbc73da195a263aae"} Feb 17 17:27:36 crc kubenswrapper[4672]: I0217 17:27:36.749005 4672 scope.go:117] "RemoveContainer" containerID="62ea8739035d4dc2e26347ac96b91b9e80c97b7cccffa90d49c47142b95916a1" Feb 17 17:27:36 crc kubenswrapper[4672]: I0217 17:27:36.749177 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m8jj4" Feb 17 17:27:36 crc kubenswrapper[4672]: I0217 17:27:36.778229 4672 scope.go:117] "RemoveContainer" containerID="ecb095f457133ec91f6c3e83105849d2718cbd8fe198b35dfccab90bac62a0c4" Feb 17 17:27:36 crc kubenswrapper[4672]: I0217 17:27:36.800414 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m8jj4"] Feb 17 17:27:36 crc kubenswrapper[4672]: I0217 17:27:36.809572 4672 scope.go:117] "RemoveContainer" containerID="c53303faf1ff4fe89938945f58cf8f8dd9ba1e8da2c8a3a4dda3a53c7ebe721b" Feb 17 17:27:36 crc kubenswrapper[4672]: I0217 17:27:36.809682 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m8jj4"] Feb 17 17:27:36 crc kubenswrapper[4672]: I0217 17:27:36.861616 4672 scope.go:117] "RemoveContainer" containerID="62ea8739035d4dc2e26347ac96b91b9e80c97b7cccffa90d49c47142b95916a1" Feb 17 17:27:36 crc kubenswrapper[4672]: E0217 17:27:36.866104 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62ea8739035d4dc2e26347ac96b91b9e80c97b7cccffa90d49c47142b95916a1\": container with ID starting with 62ea8739035d4dc2e26347ac96b91b9e80c97b7cccffa90d49c47142b95916a1 not found: ID does not exist" containerID="62ea8739035d4dc2e26347ac96b91b9e80c97b7cccffa90d49c47142b95916a1" Feb 17 17:27:36 crc kubenswrapper[4672]: I0217 
17:27:36.866171 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62ea8739035d4dc2e26347ac96b91b9e80c97b7cccffa90d49c47142b95916a1"} err="failed to get container status \"62ea8739035d4dc2e26347ac96b91b9e80c97b7cccffa90d49c47142b95916a1\": rpc error: code = NotFound desc = could not find container \"62ea8739035d4dc2e26347ac96b91b9e80c97b7cccffa90d49c47142b95916a1\": container with ID starting with 62ea8739035d4dc2e26347ac96b91b9e80c97b7cccffa90d49c47142b95916a1 not found: ID does not exist" Feb 17 17:27:36 crc kubenswrapper[4672]: I0217 17:27:36.866201 4672 scope.go:117] "RemoveContainer" containerID="ecb095f457133ec91f6c3e83105849d2718cbd8fe198b35dfccab90bac62a0c4" Feb 17 17:27:36 crc kubenswrapper[4672]: E0217 17:27:36.866827 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecb095f457133ec91f6c3e83105849d2718cbd8fe198b35dfccab90bac62a0c4\": container with ID starting with ecb095f457133ec91f6c3e83105849d2718cbd8fe198b35dfccab90bac62a0c4 not found: ID does not exist" containerID="ecb095f457133ec91f6c3e83105849d2718cbd8fe198b35dfccab90bac62a0c4" Feb 17 17:27:36 crc kubenswrapper[4672]: I0217 17:27:36.866944 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecb095f457133ec91f6c3e83105849d2718cbd8fe198b35dfccab90bac62a0c4"} err="failed to get container status \"ecb095f457133ec91f6c3e83105849d2718cbd8fe198b35dfccab90bac62a0c4\": rpc error: code = NotFound desc = could not find container \"ecb095f457133ec91f6c3e83105849d2718cbd8fe198b35dfccab90bac62a0c4\": container with ID starting with ecb095f457133ec91f6c3e83105849d2718cbd8fe198b35dfccab90bac62a0c4 not found: ID does not exist" Feb 17 17:27:36 crc kubenswrapper[4672]: I0217 17:27:36.866986 4672 scope.go:117] "RemoveContainer" containerID="c53303faf1ff4fe89938945f58cf8f8dd9ba1e8da2c8a3a4dda3a53c7ebe721b" Feb 17 17:27:36 crc 
kubenswrapper[4672]: E0217 17:27:36.871080 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c53303faf1ff4fe89938945f58cf8f8dd9ba1e8da2c8a3a4dda3a53c7ebe721b\": container with ID starting with c53303faf1ff4fe89938945f58cf8f8dd9ba1e8da2c8a3a4dda3a53c7ebe721b not found: ID does not exist" containerID="c53303faf1ff4fe89938945f58cf8f8dd9ba1e8da2c8a3a4dda3a53c7ebe721b" Feb 17 17:27:36 crc kubenswrapper[4672]: I0217 17:27:36.871141 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c53303faf1ff4fe89938945f58cf8f8dd9ba1e8da2c8a3a4dda3a53c7ebe721b"} err="failed to get container status \"c53303faf1ff4fe89938945f58cf8f8dd9ba1e8da2c8a3a4dda3a53c7ebe721b\": rpc error: code = NotFound desc = could not find container \"c53303faf1ff4fe89938945f58cf8f8dd9ba1e8da2c8a3a4dda3a53c7ebe721b\": container with ID starting with c53303faf1ff4fe89938945f58cf8f8dd9ba1e8da2c8a3a4dda3a53c7ebe721b not found: ID does not exist" Feb 17 17:27:36 crc kubenswrapper[4672]: E0217 17:27:36.946937 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:27:37 crc kubenswrapper[4672]: I0217 17:27:37.955171 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65c9e120-3bb7-459c-a35d-f41f933cbee7" path="/var/lib/kubelet/pods/65c9e120-3bb7-459c-a35d-f41f933cbee7/volumes" Feb 17 17:27:39 crc kubenswrapper[4672]: E0217 17:27:39.948072 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:27:41 crc kubenswrapper[4672]: I0217 17:27:41.953726 4672 scope.go:117] "RemoveContainer" containerID="56e6007a8201972c49b1432e5d22e2ef9faf1c48bcd6de061f8d78425bba9eaa" Feb 17 17:27:41 crc kubenswrapper[4672]: E0217 17:27:41.955051 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:27:49 crc kubenswrapper[4672]: I0217 17:27:49.874046 4672 generic.go:334] "Generic (PLEG): container finished" podID="6162f29e-528b-4131-9e8d-1391db930dd5" containerID="be45882887b3e1e6f0573f44f9ef2e1d6e469b1eef7d2f240e6338c66277c918" exitCode=2 Feb 17 17:27:49 crc kubenswrapper[4672]: I0217 17:27:49.874129 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld2fq" event={"ID":"6162f29e-528b-4131-9e8d-1391db930dd5","Type":"ContainerDied","Data":"be45882887b3e1e6f0573f44f9ef2e1d6e469b1eef7d2f240e6338c66277c918"} Feb 17 17:27:51 crc kubenswrapper[4672]: I0217 17:27:51.580925 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld2fq" Feb 17 17:27:51 crc kubenswrapper[4672]: I0217 17:27:51.730709 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6162f29e-528b-4131-9e8d-1391db930dd5-inventory\") pod \"6162f29e-528b-4131-9e8d-1391db930dd5\" (UID: \"6162f29e-528b-4131-9e8d-1391db930dd5\") " Feb 17 17:27:51 crc kubenswrapper[4672]: I0217 17:27:51.731307 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6162f29e-528b-4131-9e8d-1391db930dd5-ssh-key-openstack-edpm-ipam\") pod \"6162f29e-528b-4131-9e8d-1391db930dd5\" (UID: \"6162f29e-528b-4131-9e8d-1391db930dd5\") " Feb 17 17:27:51 crc kubenswrapper[4672]: I0217 17:27:51.731369 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7x25v\" (UniqueName: \"kubernetes.io/projected/6162f29e-528b-4131-9e8d-1391db930dd5-kube-api-access-7x25v\") pod \"6162f29e-528b-4131-9e8d-1391db930dd5\" (UID: \"6162f29e-528b-4131-9e8d-1391db930dd5\") " Feb 17 17:27:51 crc kubenswrapper[4672]: I0217 17:27:51.738832 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6162f29e-528b-4131-9e8d-1391db930dd5-kube-api-access-7x25v" (OuterVolumeSpecName: "kube-api-access-7x25v") pod "6162f29e-528b-4131-9e8d-1391db930dd5" (UID: "6162f29e-528b-4131-9e8d-1391db930dd5"). InnerVolumeSpecName "kube-api-access-7x25v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:27:51 crc kubenswrapper[4672]: I0217 17:27:51.761064 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6162f29e-528b-4131-9e8d-1391db930dd5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6162f29e-528b-4131-9e8d-1391db930dd5" (UID: "6162f29e-528b-4131-9e8d-1391db930dd5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:27:51 crc kubenswrapper[4672]: I0217 17:27:51.772176 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6162f29e-528b-4131-9e8d-1391db930dd5-inventory" (OuterVolumeSpecName: "inventory") pod "6162f29e-528b-4131-9e8d-1391db930dd5" (UID: "6162f29e-528b-4131-9e8d-1391db930dd5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:27:51 crc kubenswrapper[4672]: I0217 17:27:51.833695 4672 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6162f29e-528b-4131-9e8d-1391db930dd5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 17:27:51 crc kubenswrapper[4672]: I0217 17:27:51.833734 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7x25v\" (UniqueName: \"kubernetes.io/projected/6162f29e-528b-4131-9e8d-1391db930dd5-kube-api-access-7x25v\") on node \"crc\" DevicePath \"\"" Feb 17 17:27:51 crc kubenswrapper[4672]: I0217 17:27:51.833749 4672 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6162f29e-528b-4131-9e8d-1391db930dd5-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 17:27:51 crc kubenswrapper[4672]: I0217 17:27:51.894643 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld2fq" 
event={"ID":"6162f29e-528b-4131-9e8d-1391db930dd5","Type":"ContainerDied","Data":"77a4c5742149de094de3c17db98f77a3f135bae115512ba0824819a3bd87f5e6"} Feb 17 17:27:51 crc kubenswrapper[4672]: I0217 17:27:51.894689 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77a4c5742149de094de3c17db98f77a3f135bae115512ba0824819a3bd87f5e6" Feb 17 17:27:51 crc kubenswrapper[4672]: I0217 17:27:51.894756 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ld2fq" Feb 17 17:27:51 crc kubenswrapper[4672]: E0217 17:27:51.982813 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:27:52 crc kubenswrapper[4672]: E0217 17:27:52.007345 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:27:53 crc kubenswrapper[4672]: I0217 17:27:53.945480 4672 scope.go:117] "RemoveContainer" containerID="56e6007a8201972c49b1432e5d22e2ef9faf1c48bcd6de061f8d78425bba9eaa" Feb 17 17:27:53 crc kubenswrapper[4672]: E0217 17:27:53.946106 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:28:04 crc kubenswrapper[4672]: I0217 17:28:04.945993 4672 scope.go:117] "RemoveContainer" containerID="56e6007a8201972c49b1432e5d22e2ef9faf1c48bcd6de061f8d78425bba9eaa" Feb 17 17:28:04 crc kubenswrapper[4672]: E0217 17:28:04.946874 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:28:04 crc kubenswrapper[4672]: E0217 17:28:04.947627 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:28:06 crc kubenswrapper[4672]: E0217 17:28:06.947760 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:28:17 crc kubenswrapper[4672]: I0217 17:28:17.925605 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tsjx5/must-gather-s5rvf"] Feb 17 17:28:17 crc kubenswrapper[4672]: E0217 17:28:17.928402 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65c9e120-3bb7-459c-a35d-f41f933cbee7" containerName="extract-content" Feb 17 17:28:17 crc 
kubenswrapper[4672]: I0217 17:28:17.928457 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="65c9e120-3bb7-459c-a35d-f41f933cbee7" containerName="extract-content" Feb 17 17:28:17 crc kubenswrapper[4672]: E0217 17:28:17.928488 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65c9e120-3bb7-459c-a35d-f41f933cbee7" containerName="extract-utilities" Feb 17 17:28:17 crc kubenswrapper[4672]: I0217 17:28:17.928498 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="65c9e120-3bb7-459c-a35d-f41f933cbee7" containerName="extract-utilities" Feb 17 17:28:17 crc kubenswrapper[4672]: E0217 17:28:17.928556 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65c9e120-3bb7-459c-a35d-f41f933cbee7" containerName="registry-server" Feb 17 17:28:17 crc kubenswrapper[4672]: I0217 17:28:17.928567 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="65c9e120-3bb7-459c-a35d-f41f933cbee7" containerName="registry-server" Feb 17 17:28:17 crc kubenswrapper[4672]: E0217 17:28:17.928601 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6162f29e-528b-4131-9e8d-1391db930dd5" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:28:17 crc kubenswrapper[4672]: I0217 17:28:17.928618 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="6162f29e-528b-4131-9e8d-1391db930dd5" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:28:17 crc kubenswrapper[4672]: I0217 17:28:17.929246 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="6162f29e-528b-4131-9e8d-1391db930dd5" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:28:17 crc kubenswrapper[4672]: I0217 17:28:17.929287 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="65c9e120-3bb7-459c-a35d-f41f933cbee7" containerName="registry-server" Feb 17 17:28:17 crc kubenswrapper[4672]: I0217 17:28:17.932162 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tsjx5/must-gather-s5rvf" Feb 17 17:28:17 crc kubenswrapper[4672]: I0217 17:28:17.957641 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-tsjx5"/"default-dockercfg-zfggj" Feb 17 17:28:17 crc kubenswrapper[4672]: I0217 17:28:17.957910 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tsjx5"/"openshift-service-ca.crt" Feb 17 17:28:17 crc kubenswrapper[4672]: I0217 17:28:17.958075 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tsjx5"/"kube-root-ca.crt" Feb 17 17:28:17 crc kubenswrapper[4672]: E0217 17:28:17.966344 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:28:18 crc kubenswrapper[4672]: I0217 17:28:18.042110 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tsjx5/must-gather-s5rvf"] Feb 17 17:28:18 crc kubenswrapper[4672]: I0217 17:28:18.075714 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7n4j\" (UniqueName: \"kubernetes.io/projected/189b761b-ad0c-41f5-892c-54ece21c8ab8-kube-api-access-x7n4j\") pod \"must-gather-s5rvf\" (UID: \"189b761b-ad0c-41f5-892c-54ece21c8ab8\") " pod="openshift-must-gather-tsjx5/must-gather-s5rvf" Feb 17 17:28:18 crc kubenswrapper[4672]: I0217 17:28:18.075991 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/189b761b-ad0c-41f5-892c-54ece21c8ab8-must-gather-output\") pod \"must-gather-s5rvf\" (UID: \"189b761b-ad0c-41f5-892c-54ece21c8ab8\") " 
pod="openshift-must-gather-tsjx5/must-gather-s5rvf" Feb 17 17:28:18 crc kubenswrapper[4672]: I0217 17:28:18.177460 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/189b761b-ad0c-41f5-892c-54ece21c8ab8-must-gather-output\") pod \"must-gather-s5rvf\" (UID: \"189b761b-ad0c-41f5-892c-54ece21c8ab8\") " pod="openshift-must-gather-tsjx5/must-gather-s5rvf" Feb 17 17:28:18 crc kubenswrapper[4672]: I0217 17:28:18.177611 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7n4j\" (UniqueName: \"kubernetes.io/projected/189b761b-ad0c-41f5-892c-54ece21c8ab8-kube-api-access-x7n4j\") pod \"must-gather-s5rvf\" (UID: \"189b761b-ad0c-41f5-892c-54ece21c8ab8\") " pod="openshift-must-gather-tsjx5/must-gather-s5rvf" Feb 17 17:28:18 crc kubenswrapper[4672]: I0217 17:28:18.177947 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/189b761b-ad0c-41f5-892c-54ece21c8ab8-must-gather-output\") pod \"must-gather-s5rvf\" (UID: \"189b761b-ad0c-41f5-892c-54ece21c8ab8\") " pod="openshift-must-gather-tsjx5/must-gather-s5rvf" Feb 17 17:28:18 crc kubenswrapper[4672]: I0217 17:28:18.198230 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7n4j\" (UniqueName: \"kubernetes.io/projected/189b761b-ad0c-41f5-892c-54ece21c8ab8-kube-api-access-x7n4j\") pod \"must-gather-s5rvf\" (UID: \"189b761b-ad0c-41f5-892c-54ece21c8ab8\") " pod="openshift-must-gather-tsjx5/must-gather-s5rvf" Feb 17 17:28:18 crc kubenswrapper[4672]: I0217 17:28:18.310687 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tsjx5/must-gather-s5rvf"
Feb 17 17:28:18 crc kubenswrapper[4672]: I0217 17:28:18.840695 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tsjx5/must-gather-s5rvf"]
Feb 17 17:28:18 crc kubenswrapper[4672]: W0217 17:28:18.840720 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod189b761b_ad0c_41f5_892c_54ece21c8ab8.slice/crio-44d05974b11ad20fa1028cd0a08d8df39761b6df4d6ae6a273c6bcd3971fc642 WatchSource:0}: Error finding container 44d05974b11ad20fa1028cd0a08d8df39761b6df4d6ae6a273c6bcd3971fc642: Status 404 returned error can't find the container with id 44d05974b11ad20fa1028cd0a08d8df39761b6df4d6ae6a273c6bcd3971fc642
Feb 17 17:28:19 crc kubenswrapper[4672]: I0217 17:28:19.178898 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tsjx5/must-gather-s5rvf" event={"ID":"189b761b-ad0c-41f5-892c-54ece21c8ab8","Type":"ContainerStarted","Data":"44d05974b11ad20fa1028cd0a08d8df39761b6df4d6ae6a273c6bcd3971fc642"}
Feb 17 17:28:19 crc kubenswrapper[4672]: I0217 17:28:19.949890 4672 scope.go:117] "RemoveContainer" containerID="56e6007a8201972c49b1432e5d22e2ef9faf1c48bcd6de061f8d78425bba9eaa"
Feb 17 17:28:19 crc kubenswrapper[4672]: E0217 17:28:19.950691 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 17:28:20 crc kubenswrapper[4672]: E0217 17:28:20.954715 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:28:28 crc kubenswrapper[4672]: I0217 17:28:28.269579 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tsjx5/must-gather-s5rvf" event={"ID":"189b761b-ad0c-41f5-892c-54ece21c8ab8","Type":"ContainerStarted","Data":"4e316c4b786fceb930c362b7a5bbe6e6af2700f151b473d0fb02a6594e3053bb"}
Feb 17 17:28:29 crc kubenswrapper[4672]: I0217 17:28:29.280683 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tsjx5/must-gather-s5rvf" event={"ID":"189b761b-ad0c-41f5-892c-54ece21c8ab8","Type":"ContainerStarted","Data":"102f5df918a8124947cf7bc7313608494c7373011af76b928441978e9d5c6cfc"}
Feb 17 17:28:29 crc kubenswrapper[4672]: I0217 17:28:29.303427 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-tsjx5/must-gather-s5rvf" podStartSLOduration=3.332981544 podStartE2EDuration="12.303408465s" podCreationTimestamp="2026-02-17 17:28:17 +0000 UTC" firstStartedPulling="2026-02-17 17:28:18.843258601 +0000 UTC m=+5107.597347333" lastFinishedPulling="2026-02-17 17:28:27.813685522 +0000 UTC m=+5116.567774254" observedRunningTime="2026-02-17 17:28:29.296832088 +0000 UTC m=+5118.050920860" watchObservedRunningTime="2026-02-17 17:28:29.303408465 +0000 UTC m=+5118.057497197"
Feb 17 17:28:29 crc kubenswrapper[4672]: E0217 17:28:29.947033 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:28:32 crc kubenswrapper[4672]: I0217 17:28:32.007806 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tsjx5/crc-debug-bxj25"]
Feb 17 17:28:32 crc kubenswrapper[4672]: I0217 17:28:32.009775 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tsjx5/crc-debug-bxj25"
Feb 17 17:28:32 crc kubenswrapper[4672]: I0217 17:28:32.111730 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t758v\" (UniqueName: \"kubernetes.io/projected/1f734d3a-ebe9-437e-b7bc-06adfa520885-kube-api-access-t758v\") pod \"crc-debug-bxj25\" (UID: \"1f734d3a-ebe9-437e-b7bc-06adfa520885\") " pod="openshift-must-gather-tsjx5/crc-debug-bxj25"
Feb 17 17:28:32 crc kubenswrapper[4672]: I0217 17:28:32.111924 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1f734d3a-ebe9-437e-b7bc-06adfa520885-host\") pod \"crc-debug-bxj25\" (UID: \"1f734d3a-ebe9-437e-b7bc-06adfa520885\") " pod="openshift-must-gather-tsjx5/crc-debug-bxj25"
Feb 17 17:28:32 crc kubenswrapper[4672]: I0217 17:28:32.214180 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t758v\" (UniqueName: \"kubernetes.io/projected/1f734d3a-ebe9-437e-b7bc-06adfa520885-kube-api-access-t758v\") pod \"crc-debug-bxj25\" (UID: \"1f734d3a-ebe9-437e-b7bc-06adfa520885\") " pod="openshift-must-gather-tsjx5/crc-debug-bxj25"
Feb 17 17:28:32 crc kubenswrapper[4672]: I0217 17:28:32.214428 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1f734d3a-ebe9-437e-b7bc-06adfa520885-host\") pod \"crc-debug-bxj25\" (UID: \"1f734d3a-ebe9-437e-b7bc-06adfa520885\") " pod="openshift-must-gather-tsjx5/crc-debug-bxj25"
Feb 17 17:28:32 crc kubenswrapper[4672]: I0217 17:28:32.214638 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1f734d3a-ebe9-437e-b7bc-06adfa520885-host\") pod \"crc-debug-bxj25\" (UID: \"1f734d3a-ebe9-437e-b7bc-06adfa520885\") " pod="openshift-must-gather-tsjx5/crc-debug-bxj25"
Feb 17 17:28:32 crc kubenswrapper[4672]: I0217 17:28:32.232819 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t758v\" (UniqueName: \"kubernetes.io/projected/1f734d3a-ebe9-437e-b7bc-06adfa520885-kube-api-access-t758v\") pod \"crc-debug-bxj25\" (UID: \"1f734d3a-ebe9-437e-b7bc-06adfa520885\") " pod="openshift-must-gather-tsjx5/crc-debug-bxj25"
Feb 17 17:28:32 crc kubenswrapper[4672]: I0217 17:28:32.331672 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tsjx5/crc-debug-bxj25"
Feb 17 17:28:32 crc kubenswrapper[4672]: W0217 17:28:32.360617 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f734d3a_ebe9_437e_b7bc_06adfa520885.slice/crio-5913511243dd325dfce9e0077e0aa183a3bf1e1f6c97184b922d25f6c1f316f0 WatchSource:0}: Error finding container 5913511243dd325dfce9e0077e0aa183a3bf1e1f6c97184b922d25f6c1f316f0: Status 404 returned error can't find the container with id 5913511243dd325dfce9e0077e0aa183a3bf1e1f6c97184b922d25f6c1f316f0
Feb 17 17:28:32 crc kubenswrapper[4672]: I0217 17:28:32.945872 4672 scope.go:117] "RemoveContainer" containerID="56e6007a8201972c49b1432e5d22e2ef9faf1c48bcd6de061f8d78425bba9eaa"
Feb 17 17:28:32 crc kubenswrapper[4672]: E0217 17:28:32.946587 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 17:28:32 crc kubenswrapper[4672]: E0217 17:28:32.948273 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:28:33 crc kubenswrapper[4672]: I0217 17:28:33.328745 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tsjx5/crc-debug-bxj25" event={"ID":"1f734d3a-ebe9-437e-b7bc-06adfa520885","Type":"ContainerStarted","Data":"5913511243dd325dfce9e0077e0aa183a3bf1e1f6c97184b922d25f6c1f316f0"}
Feb 17 17:28:41 crc kubenswrapper[4672]: E0217 17:28:41.962257 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:28:45 crc kubenswrapper[4672]: I0217 17:28:45.945251 4672 scope.go:117] "RemoveContainer" containerID="56e6007a8201972c49b1432e5d22e2ef9faf1c48bcd6de061f8d78425bba9eaa"
Feb 17 17:28:45 crc kubenswrapper[4672]: E0217 17:28:45.946088 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 17:28:45 crc kubenswrapper[4672]: E0217 17:28:45.949068 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:28:48 crc kubenswrapper[4672]: I0217 17:28:48.512748 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tsjx5/crc-debug-bxj25" event={"ID":"1f734d3a-ebe9-437e-b7bc-06adfa520885","Type":"ContainerStarted","Data":"3fdee91464a7bb5170f6d568bf408b96161bbd64e750e8acc5b8a271966e1c02"}
Feb 17 17:28:48 crc kubenswrapper[4672]: I0217 17:28:48.549928 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-tsjx5/crc-debug-bxj25" podStartSLOduration=2.5464476830000002 podStartE2EDuration="17.549901177s" podCreationTimestamp="2026-02-17 17:28:31 +0000 UTC" firstStartedPulling="2026-02-17 17:28:32.362712043 +0000 UTC m=+5121.116800775" lastFinishedPulling="2026-02-17 17:28:47.366165527 +0000 UTC m=+5136.120254269" observedRunningTime="2026-02-17 17:28:48.528801361 +0000 UTC m=+5137.282890093" watchObservedRunningTime="2026-02-17 17:28:48.549901177 +0000 UTC m=+5137.303989929"
Feb 17 17:28:55 crc kubenswrapper[4672]: E0217 17:28:55.946550 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:28:57 crc kubenswrapper[4672]: I0217 17:28:57.945146 4672 scope.go:117] "RemoveContainer" containerID="56e6007a8201972c49b1432e5d22e2ef9faf1c48bcd6de061f8d78425bba9eaa"
Feb 17 17:28:57 crc kubenswrapper[4672]: E0217 17:28:57.945619 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 17:28:59 crc kubenswrapper[4672]: E0217 17:28:59.950424 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:29:07 crc kubenswrapper[4672]: E0217 17:29:07.947372 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:29:12 crc kubenswrapper[4672]: I0217 17:29:12.945182 4672 scope.go:117] "RemoveContainer" containerID="56e6007a8201972c49b1432e5d22e2ef9faf1c48bcd6de061f8d78425bba9eaa"
Feb 17 17:29:12 crc kubenswrapper[4672]: E0217 17:29:12.946014 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 17:29:14 crc kubenswrapper[4672]: I0217 17:29:14.636324 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-r69wx"]
Feb 17 17:29:14 crc kubenswrapper[4672]: I0217 17:29:14.638884 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r69wx"
Feb 17 17:29:14 crc kubenswrapper[4672]: I0217 17:29:14.651850 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r69wx"]
Feb 17 17:29:14 crc kubenswrapper[4672]: I0217 17:29:14.740769 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwsg7\" (UniqueName: \"kubernetes.io/projected/28fdd128-5fa0-48a7-8a9b-a6cb1c84f780-kube-api-access-hwsg7\") pod \"redhat-operators-r69wx\" (UID: \"28fdd128-5fa0-48a7-8a9b-a6cb1c84f780\") " pod="openshift-marketplace/redhat-operators-r69wx"
Feb 17 17:29:14 crc kubenswrapper[4672]: I0217 17:29:14.740990 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28fdd128-5fa0-48a7-8a9b-a6cb1c84f780-utilities\") pod \"redhat-operators-r69wx\" (UID: \"28fdd128-5fa0-48a7-8a9b-a6cb1c84f780\") " pod="openshift-marketplace/redhat-operators-r69wx"
Feb 17 17:29:14 crc kubenswrapper[4672]: I0217 17:29:14.741085 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28fdd128-5fa0-48a7-8a9b-a6cb1c84f780-catalog-content\") pod \"redhat-operators-r69wx\" (UID: \"28fdd128-5fa0-48a7-8a9b-a6cb1c84f780\") " pod="openshift-marketplace/redhat-operators-r69wx"
Feb 17 17:29:14 crc kubenswrapper[4672]: I0217 17:29:14.842600 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28fdd128-5fa0-48a7-8a9b-a6cb1c84f780-utilities\") pod \"redhat-operators-r69wx\" (UID: \"28fdd128-5fa0-48a7-8a9b-a6cb1c84f780\") " pod="openshift-marketplace/redhat-operators-r69wx"
Feb 17 17:29:14 crc kubenswrapper[4672]: I0217 17:29:14.842684 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28fdd128-5fa0-48a7-8a9b-a6cb1c84f780-catalog-content\") pod \"redhat-operators-r69wx\" (UID: \"28fdd128-5fa0-48a7-8a9b-a6cb1c84f780\") " pod="openshift-marketplace/redhat-operators-r69wx"
Feb 17 17:29:14 crc kubenswrapper[4672]: I0217 17:29:14.842718 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwsg7\" (UniqueName: \"kubernetes.io/projected/28fdd128-5fa0-48a7-8a9b-a6cb1c84f780-kube-api-access-hwsg7\") pod \"redhat-operators-r69wx\" (UID: \"28fdd128-5fa0-48a7-8a9b-a6cb1c84f780\") " pod="openshift-marketplace/redhat-operators-r69wx"
Feb 17 17:29:14 crc kubenswrapper[4672]: I0217 17:29:14.843339 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28fdd128-5fa0-48a7-8a9b-a6cb1c84f780-utilities\") pod \"redhat-operators-r69wx\" (UID: \"28fdd128-5fa0-48a7-8a9b-a6cb1c84f780\") " pod="openshift-marketplace/redhat-operators-r69wx"
Feb 17 17:29:14 crc kubenswrapper[4672]: I0217 17:29:14.843447 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28fdd128-5fa0-48a7-8a9b-a6cb1c84f780-catalog-content\") pod \"redhat-operators-r69wx\" (UID: \"28fdd128-5fa0-48a7-8a9b-a6cb1c84f780\") " pod="openshift-marketplace/redhat-operators-r69wx"
Feb 17 17:29:14 crc kubenswrapper[4672]: I0217 17:29:14.864026 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwsg7\" (UniqueName: \"kubernetes.io/projected/28fdd128-5fa0-48a7-8a9b-a6cb1c84f780-kube-api-access-hwsg7\") pod \"redhat-operators-r69wx\" (UID: \"28fdd128-5fa0-48a7-8a9b-a6cb1c84f780\") " pod="openshift-marketplace/redhat-operators-r69wx"
Feb 17 17:29:14 crc kubenswrapper[4672]: E0217 17:29:14.947076 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:29:14 crc kubenswrapper[4672]: I0217 17:29:14.963895 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r69wx"
Feb 17 17:29:15 crc kubenswrapper[4672]: I0217 17:29:15.628474 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r69wx"]
Feb 17 17:29:15 crc kubenswrapper[4672]: I0217 17:29:15.807898 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r69wx" event={"ID":"28fdd128-5fa0-48a7-8a9b-a6cb1c84f780","Type":"ContainerStarted","Data":"280c801d99fb9d5446c51e6a75b726db548ebf6f5c8a1e6103f06369fded67a3"}
Feb 17 17:29:16 crc kubenswrapper[4672]: I0217 17:29:16.816940 4672 generic.go:334] "Generic (PLEG): container finished" podID="28fdd128-5fa0-48a7-8a9b-a6cb1c84f780" containerID="c835aa0395b9679a972221ad2be8c15770e60801b3e220a7711b938408b09c0b" exitCode=0
Feb 17 17:29:16 crc kubenswrapper[4672]: I0217 17:29:16.816981 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r69wx" event={"ID":"28fdd128-5fa0-48a7-8a9b-a6cb1c84f780","Type":"ContainerDied","Data":"c835aa0395b9679a972221ad2be8c15770e60801b3e220a7711b938408b09c0b"}
Feb 17 17:29:16 crc kubenswrapper[4672]: I0217 17:29:16.820707 4672 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 17:29:17 crc kubenswrapper[4672]: I0217 17:29:17.827665 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r69wx" event={"ID":"28fdd128-5fa0-48a7-8a9b-a6cb1c84f780","Type":"ContainerStarted","Data":"1643b9164a2a4cc7fa9a871ae84298fa24db8495a858d10c2c89790708accb2f"}
Feb 17 17:29:21 crc kubenswrapper[4672]: I0217 17:29:21.861863 4672 generic.go:334] "Generic (PLEG): container finished" podID="1f734d3a-ebe9-437e-b7bc-06adfa520885" containerID="3fdee91464a7bb5170f6d568bf408b96161bbd64e750e8acc5b8a271966e1c02" exitCode=0
Feb 17 17:29:21 crc kubenswrapper[4672]: I0217 17:29:21.861958 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tsjx5/crc-debug-bxj25" event={"ID":"1f734d3a-ebe9-437e-b7bc-06adfa520885","Type":"ContainerDied","Data":"3fdee91464a7bb5170f6d568bf408b96161bbd64e750e8acc5b8a271966e1c02"}
Feb 17 17:29:22 crc kubenswrapper[4672]: E0217 17:29:22.076155 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested"
Feb 17 17:29:22 crc kubenswrapper[4672]: E0217 17:29:22.076231 4672 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested"
Feb 17 17:29:22 crc kubenswrapper[4672]: E0217 17:29:22.076369 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nq9ps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-qrhj8_openstack(dc5471f5-2491-4841-be45-09c8f14b35c0): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 17 17:29:22 crc kubenswrapper[4672]: E0217 17:29:22.078349 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:29:22 crc kubenswrapper[4672]: I0217 17:29:22.992672 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tsjx5/crc-debug-bxj25"
Feb 17 17:29:23 crc kubenswrapper[4672]: I0217 17:29:23.033726 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tsjx5/crc-debug-bxj25"]
Feb 17 17:29:23 crc kubenswrapper[4672]: I0217 17:29:23.043590 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tsjx5/crc-debug-bxj25"]
Feb 17 17:29:23 crc kubenswrapper[4672]: I0217 17:29:23.079852 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t758v\" (UniqueName: \"kubernetes.io/projected/1f734d3a-ebe9-437e-b7bc-06adfa520885-kube-api-access-t758v\") pod \"1f734d3a-ebe9-437e-b7bc-06adfa520885\" (UID: \"1f734d3a-ebe9-437e-b7bc-06adfa520885\") "
Feb 17 17:29:23 crc kubenswrapper[4672]: I0217 17:29:23.080002 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1f734d3a-ebe9-437e-b7bc-06adfa520885-host\") pod \"1f734d3a-ebe9-437e-b7bc-06adfa520885\" (UID: \"1f734d3a-ebe9-437e-b7bc-06adfa520885\") "
Feb 17 17:29:23 crc kubenswrapper[4672]: I0217 17:29:23.080425 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f734d3a-ebe9-437e-b7bc-06adfa520885-host" (OuterVolumeSpecName: "host") pod "1f734d3a-ebe9-437e-b7bc-06adfa520885" (UID: "1f734d3a-ebe9-437e-b7bc-06adfa520885"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 17:29:23 crc kubenswrapper[4672]: I0217 17:29:23.092922 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f734d3a-ebe9-437e-b7bc-06adfa520885-kube-api-access-t758v" (OuterVolumeSpecName: "kube-api-access-t758v") pod "1f734d3a-ebe9-437e-b7bc-06adfa520885" (UID: "1f734d3a-ebe9-437e-b7bc-06adfa520885"). InnerVolumeSpecName "kube-api-access-t758v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 17:29:23 crc kubenswrapper[4672]: I0217 17:29:23.182401 4672 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1f734d3a-ebe9-437e-b7bc-06adfa520885-host\") on node \"crc\" DevicePath \"\""
Feb 17 17:29:23 crc kubenswrapper[4672]: I0217 17:29:23.182430 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t758v\" (UniqueName: \"kubernetes.io/projected/1f734d3a-ebe9-437e-b7bc-06adfa520885-kube-api-access-t758v\") on node \"crc\" DevicePath \"\""
Feb 17 17:29:23 crc kubenswrapper[4672]: I0217 17:29:23.882156 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5913511243dd325dfce9e0077e0aa183a3bf1e1f6c97184b922d25f6c1f316f0"
Feb 17 17:29:23 crc kubenswrapper[4672]: I0217 17:29:23.882220 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tsjx5/crc-debug-bxj25"
Feb 17 17:29:23 crc kubenswrapper[4672]: I0217 17:29:23.955701 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f734d3a-ebe9-437e-b7bc-06adfa520885" path="/var/lib/kubelet/pods/1f734d3a-ebe9-437e-b7bc-06adfa520885/volumes"
Feb 17 17:29:24 crc kubenswrapper[4672]: I0217 17:29:24.205115 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tsjx5/crc-debug-2wr9b"]
Feb 17 17:29:24 crc kubenswrapper[4672]: E0217 17:29:24.205801 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f734d3a-ebe9-437e-b7bc-06adfa520885" containerName="container-00"
Feb 17 17:29:24 crc kubenswrapper[4672]: I0217 17:29:24.205832 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f734d3a-ebe9-437e-b7bc-06adfa520885" containerName="container-00"
Feb 17 17:29:24 crc kubenswrapper[4672]: I0217 17:29:24.206145 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f734d3a-ebe9-437e-b7bc-06adfa520885" containerName="container-00"
Feb 17 17:29:24 crc kubenswrapper[4672]: I0217 17:29:24.207151 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tsjx5/crc-debug-2wr9b"
Feb 17 17:29:24 crc kubenswrapper[4672]: I0217 17:29:24.405894 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b3aea242-d370-40ec-bb6a-0ddcedb2f1c1-host\") pod \"crc-debug-2wr9b\" (UID: \"b3aea242-d370-40ec-bb6a-0ddcedb2f1c1\") " pod="openshift-must-gather-tsjx5/crc-debug-2wr9b"
Feb 17 17:29:24 crc kubenswrapper[4672]: I0217 17:29:24.406047 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27mmb\" (UniqueName: \"kubernetes.io/projected/b3aea242-d370-40ec-bb6a-0ddcedb2f1c1-kube-api-access-27mmb\") pod \"crc-debug-2wr9b\" (UID: \"b3aea242-d370-40ec-bb6a-0ddcedb2f1c1\") " pod="openshift-must-gather-tsjx5/crc-debug-2wr9b"
Feb 17 17:29:24 crc kubenswrapper[4672]: I0217 17:29:24.507329 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b3aea242-d370-40ec-bb6a-0ddcedb2f1c1-host\") pod \"crc-debug-2wr9b\" (UID: \"b3aea242-d370-40ec-bb6a-0ddcedb2f1c1\") " pod="openshift-must-gather-tsjx5/crc-debug-2wr9b"
Feb 17 17:29:24 crc kubenswrapper[4672]: I0217 17:29:24.507464 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27mmb\" (UniqueName: \"kubernetes.io/projected/b3aea242-d370-40ec-bb6a-0ddcedb2f1c1-kube-api-access-27mmb\") pod \"crc-debug-2wr9b\" (UID: \"b3aea242-d370-40ec-bb6a-0ddcedb2f1c1\") " pod="openshift-must-gather-tsjx5/crc-debug-2wr9b"
Feb 17 17:29:24 crc kubenswrapper[4672]: I0217 17:29:24.507481 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b3aea242-d370-40ec-bb6a-0ddcedb2f1c1-host\") pod \"crc-debug-2wr9b\" (UID: \"b3aea242-d370-40ec-bb6a-0ddcedb2f1c1\") " pod="openshift-must-gather-tsjx5/crc-debug-2wr9b"
Feb 17 17:29:24 crc kubenswrapper[4672]: I0217 17:29:24.525120 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27mmb\" (UniqueName: \"kubernetes.io/projected/b3aea242-d370-40ec-bb6a-0ddcedb2f1c1-kube-api-access-27mmb\") pod \"crc-debug-2wr9b\" (UID: \"b3aea242-d370-40ec-bb6a-0ddcedb2f1c1\") " pod="openshift-must-gather-tsjx5/crc-debug-2wr9b"
Feb 17 17:29:24 crc kubenswrapper[4672]: I0217 17:29:24.527156 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tsjx5/crc-debug-2wr9b"
Feb 17 17:29:24 crc kubenswrapper[4672]: I0217 17:29:24.891352 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tsjx5/crc-debug-2wr9b" event={"ID":"b3aea242-d370-40ec-bb6a-0ddcedb2f1c1","Type":"ContainerStarted","Data":"bbd37e9111d622c1907dd56fdf9d4329cf0833dce9b1eff21225d80f68b4c753"}
Feb 17 17:29:25 crc kubenswrapper[4672]: I0217 17:29:25.901748 4672 generic.go:334] "Generic (PLEG): container finished" podID="b3aea242-d370-40ec-bb6a-0ddcedb2f1c1" containerID="1fbc0b24e46ae96f074d8cd41eaac33fc63850a9f1f42822dd7f79986c7c7a4d" exitCode=0
Feb 17 17:29:25 crc kubenswrapper[4672]: I0217 17:29:25.901856 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tsjx5/crc-debug-2wr9b" event={"ID":"b3aea242-d370-40ec-bb6a-0ddcedb2f1c1","Type":"ContainerDied","Data":"1fbc0b24e46ae96f074d8cd41eaac33fc63850a9f1f42822dd7f79986c7c7a4d"}
Feb 17 17:29:26 crc kubenswrapper[4672]: I0217 17:29:26.506445 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tsjx5/crc-debug-2wr9b"]
Feb 17 17:29:26 crc kubenswrapper[4672]: I0217 17:29:26.516657 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tsjx5/crc-debug-2wr9b"]
Feb 17 17:29:26 crc kubenswrapper[4672]: I0217 17:29:26.945885 4672 scope.go:117] "RemoveContainer" containerID="56e6007a8201972c49b1432e5d22e2ef9faf1c48bcd6de061f8d78425bba9eaa"
Feb 17 17:29:26 crc kubenswrapper[4672]: E0217 17:29:26.946267 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 17:29:27 crc kubenswrapper[4672]: I0217 17:29:27.023288 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tsjx5/crc-debug-2wr9b"
Feb 17 17:29:27 crc kubenswrapper[4672]: I0217 17:29:27.180804 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b3aea242-d370-40ec-bb6a-0ddcedb2f1c1-host\") pod \"b3aea242-d370-40ec-bb6a-0ddcedb2f1c1\" (UID: \"b3aea242-d370-40ec-bb6a-0ddcedb2f1c1\") "
Feb 17 17:29:27 crc kubenswrapper[4672]: I0217 17:29:27.180914 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27mmb\" (UniqueName: \"kubernetes.io/projected/b3aea242-d370-40ec-bb6a-0ddcedb2f1c1-kube-api-access-27mmb\") pod \"b3aea242-d370-40ec-bb6a-0ddcedb2f1c1\" (UID: \"b3aea242-d370-40ec-bb6a-0ddcedb2f1c1\") "
Feb 17 17:29:27 crc kubenswrapper[4672]: I0217 17:29:27.180949 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3aea242-d370-40ec-bb6a-0ddcedb2f1c1-host" (OuterVolumeSpecName: "host") pod "b3aea242-d370-40ec-bb6a-0ddcedb2f1c1" (UID: "b3aea242-d370-40ec-bb6a-0ddcedb2f1c1"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 17:29:27 crc kubenswrapper[4672]: I0217 17:29:27.181997 4672 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b3aea242-d370-40ec-bb6a-0ddcedb2f1c1-host\") on node \"crc\" DevicePath \"\""
Feb 17 17:29:27 crc kubenswrapper[4672]: I0217 17:29:27.186785 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3aea242-d370-40ec-bb6a-0ddcedb2f1c1-kube-api-access-27mmb" (OuterVolumeSpecName: "kube-api-access-27mmb") pod "b3aea242-d370-40ec-bb6a-0ddcedb2f1c1" (UID: "b3aea242-d370-40ec-bb6a-0ddcedb2f1c1"). InnerVolumeSpecName "kube-api-access-27mmb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 17:29:27 crc kubenswrapper[4672]: I0217 17:29:27.284340 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27mmb\" (UniqueName: \"kubernetes.io/projected/b3aea242-d370-40ec-bb6a-0ddcedb2f1c1-kube-api-access-27mmb\") on node \"crc\" DevicePath \"\""
Feb 17 17:29:27 crc kubenswrapper[4672]: I0217 17:29:27.692302 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tsjx5/crc-debug-z65w4"]
Feb 17 17:29:27 crc kubenswrapper[4672]: E0217 17:29:27.692825 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3aea242-d370-40ec-bb6a-0ddcedb2f1c1" containerName="container-00"
Feb 17 17:29:27 crc kubenswrapper[4672]: I0217 17:29:27.692848 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3aea242-d370-40ec-bb6a-0ddcedb2f1c1" containerName="container-00"
Feb 17 17:29:27 crc kubenswrapper[4672]: I0217 17:29:27.693086 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3aea242-d370-40ec-bb6a-0ddcedb2f1c1" containerName="container-00"
Feb 17 17:29:27 crc kubenswrapper[4672]: I0217 17:29:27.693926 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tsjx5/crc-debug-z65w4"
Feb 17 17:29:27 crc kubenswrapper[4672]: I0217 17:29:27.895578 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5f199e20-c033-4d57-a271-a6515f704f1e-host\") pod \"crc-debug-z65w4\" (UID: \"5f199e20-c033-4d57-a271-a6515f704f1e\") " pod="openshift-must-gather-tsjx5/crc-debug-z65w4"
Feb 17 17:29:27 crc kubenswrapper[4672]: I0217 17:29:27.895873 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjrdq\" (UniqueName: \"kubernetes.io/projected/5f199e20-c033-4d57-a271-a6515f704f1e-kube-api-access-cjrdq\") pod \"crc-debug-z65w4\" (UID: \"5f199e20-c033-4d57-a271-a6515f704f1e\") " pod="openshift-must-gather-tsjx5/crc-debug-z65w4"
Feb 17 17:29:27 crc kubenswrapper[4672]: I0217 17:29:27.921180 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbd37e9111d622c1907dd56fdf9d4329cf0833dce9b1eff21225d80f68b4c753"
Feb 17 17:29:27 crc kubenswrapper[4672]: I0217 17:29:27.921220 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tsjx5/crc-debug-2wr9b"
Feb 17 17:29:27 crc kubenswrapper[4672]: I0217 17:29:27.955389 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3aea242-d370-40ec-bb6a-0ddcedb2f1c1" path="/var/lib/kubelet/pods/b3aea242-d370-40ec-bb6a-0ddcedb2f1c1/volumes"
Feb 17 17:29:27 crc kubenswrapper[4672]: I0217 17:29:27.998689 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5f199e20-c033-4d57-a271-a6515f704f1e-host\") pod \"crc-debug-z65w4\" (UID: \"5f199e20-c033-4d57-a271-a6515f704f1e\") " pod="openshift-must-gather-tsjx5/crc-debug-z65w4"
Feb 17 17:29:27 crc kubenswrapper[4672]: I0217 17:29:27.998772 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjrdq\" (UniqueName: \"kubernetes.io/projected/5f199e20-c033-4d57-a271-a6515f704f1e-kube-api-access-cjrdq\") pod \"crc-debug-z65w4\" (UID: \"5f199e20-c033-4d57-a271-a6515f704f1e\") " pod="openshift-must-gather-tsjx5/crc-debug-z65w4"
Feb 17 17:29:27 crc kubenswrapper[4672]: I0217 17:29:27.999191 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5f199e20-c033-4d57-a271-a6515f704f1e-host\") pod \"crc-debug-z65w4\" (UID: \"5f199e20-c033-4d57-a271-a6515f704f1e\") " pod="openshift-must-gather-tsjx5/crc-debug-z65w4"
Feb 17 17:29:28 crc kubenswrapper[4672]: I0217 17:29:28.031754 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjrdq\" (UniqueName: \"kubernetes.io/projected/5f199e20-c033-4d57-a271-a6515f704f1e-kube-api-access-cjrdq\") pod \"crc-debug-z65w4\" (UID: \"5f199e20-c033-4d57-a271-a6515f704f1e\") " pod="openshift-must-gather-tsjx5/crc-debug-z65w4"
Feb 17 17:29:28 crc kubenswrapper[4672]: I0217 17:29:28.316270 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tsjx5/crc-debug-z65w4" Feb 17 17:29:28 crc kubenswrapper[4672]: I0217 17:29:28.932854 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tsjx5/crc-debug-z65w4" event={"ID":"5f199e20-c033-4d57-a271-a6515f704f1e","Type":"ContainerStarted","Data":"a99e051097bf8e6e42b896d6cb7f739f3530fe2e631e49fc58ece71090afdcd1"} Feb 17 17:29:29 crc kubenswrapper[4672]: E0217 17:29:29.074239 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:29:29 crc kubenswrapper[4672]: E0217 17:29:29.074308 4672 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:29:29 crc kubenswrapper[4672]: E0217 17:29:29.074453 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66h7h644h64ch5f8h565hfch5dh56chfdh8hfdh5b5h567h6dh665h557h74h665hcbh96h659h554h589h57fh5d9h55h564hcfh5dhffhfdq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tx4bs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(9e58ce9b-ddd5-42bb-8e07-08a22c8871a5): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:29:29 crc kubenswrapper[4672]: E0217 17:29:29.076109 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:29:29 crc kubenswrapper[4672]: I0217 17:29:29.944792 4672 generic.go:334] "Generic (PLEG): container finished" podID="28fdd128-5fa0-48a7-8a9b-a6cb1c84f780" containerID="1643b9164a2a4cc7fa9a871ae84298fa24db8495a858d10c2c89790708accb2f" exitCode=0 Feb 17 17:29:29 crc kubenswrapper[4672]: I0217 17:29:29.955954 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r69wx" event={"ID":"28fdd128-5fa0-48a7-8a9b-a6cb1c84f780","Type":"ContainerDied","Data":"1643b9164a2a4cc7fa9a871ae84298fa24db8495a858d10c2c89790708accb2f"} Feb 17 17:29:30 crc kubenswrapper[4672]: I0217 17:29:30.956285 4672 generic.go:334] "Generic (PLEG): container finished" podID="5f199e20-c033-4d57-a271-a6515f704f1e" containerID="418aeee4d877a6c8a3d3a62ac7b93cc70b4b9194aa2ecc6a1180b27f774e8b6f" exitCode=0 Feb 17 17:29:30 crc kubenswrapper[4672]: I0217 17:29:30.956479 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tsjx5/crc-debug-z65w4" event={"ID":"5f199e20-c033-4d57-a271-a6515f704f1e","Type":"ContainerDied","Data":"418aeee4d877a6c8a3d3a62ac7b93cc70b4b9194aa2ecc6a1180b27f774e8b6f"} Feb 17 17:29:30 crc kubenswrapper[4672]: I0217 17:29:30.961321 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r69wx" event={"ID":"28fdd128-5fa0-48a7-8a9b-a6cb1c84f780","Type":"ContainerStarted","Data":"66c6ed300e5adcf3fc0da61e86b4fdfc4ec33a172b02903135e972eebca85291"} Feb 17 17:29:30 crc kubenswrapper[4672]: I0217 17:29:30.994896 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tsjx5/crc-debug-z65w4"] Feb 17 17:29:31 crc kubenswrapper[4672]: I0217 17:29:31.005212 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tsjx5/crc-debug-z65w4"] Feb 17 17:29:31 crc kubenswrapper[4672]: I0217 17:29:31.010395 4672 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-r69wx" podStartSLOduration=3.349289106 podStartE2EDuration="17.010373298s" podCreationTimestamp="2026-02-17 17:29:14 +0000 UTC" firstStartedPulling="2026-02-17 17:29:16.820452517 +0000 UTC m=+5165.574541249" lastFinishedPulling="2026-02-17 17:29:30.481536709 +0000 UTC m=+5179.235625441" observedRunningTime="2026-02-17 17:29:30.992827918 +0000 UTC m=+5179.746916650" watchObservedRunningTime="2026-02-17 17:29:31.010373298 +0000 UTC m=+5179.764462040" Feb 17 17:29:32 crc kubenswrapper[4672]: I0217 17:29:32.098122 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tsjx5/crc-debug-z65w4" Feb 17 17:29:32 crc kubenswrapper[4672]: I0217 17:29:32.225715 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5f199e20-c033-4d57-a271-a6515f704f1e-host\") pod \"5f199e20-c033-4d57-a271-a6515f704f1e\" (UID: \"5f199e20-c033-4d57-a271-a6515f704f1e\") " Feb 17 17:29:32 crc kubenswrapper[4672]: I0217 17:29:32.225895 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f199e20-c033-4d57-a271-a6515f704f1e-host" (OuterVolumeSpecName: "host") pod "5f199e20-c033-4d57-a271-a6515f704f1e" (UID: "5f199e20-c033-4d57-a271-a6515f704f1e"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 17:29:32 crc kubenswrapper[4672]: I0217 17:29:32.225972 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjrdq\" (UniqueName: \"kubernetes.io/projected/5f199e20-c033-4d57-a271-a6515f704f1e-kube-api-access-cjrdq\") pod \"5f199e20-c033-4d57-a271-a6515f704f1e\" (UID: \"5f199e20-c033-4d57-a271-a6515f704f1e\") " Feb 17 17:29:32 crc kubenswrapper[4672]: I0217 17:29:32.226794 4672 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5f199e20-c033-4d57-a271-a6515f704f1e-host\") on node \"crc\" DevicePath \"\"" Feb 17 17:29:32 crc kubenswrapper[4672]: I0217 17:29:32.231237 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f199e20-c033-4d57-a271-a6515f704f1e-kube-api-access-cjrdq" (OuterVolumeSpecName: "kube-api-access-cjrdq") pod "5f199e20-c033-4d57-a271-a6515f704f1e" (UID: "5f199e20-c033-4d57-a271-a6515f704f1e"). InnerVolumeSpecName "kube-api-access-cjrdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:29:32 crc kubenswrapper[4672]: I0217 17:29:32.328611 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjrdq\" (UniqueName: \"kubernetes.io/projected/5f199e20-c033-4d57-a271-a6515f704f1e-kube-api-access-cjrdq\") on node \"crc\" DevicePath \"\"" Feb 17 17:29:32 crc kubenswrapper[4672]: I0217 17:29:32.979258 4672 scope.go:117] "RemoveContainer" containerID="418aeee4d877a6c8a3d3a62ac7b93cc70b4b9194aa2ecc6a1180b27f774e8b6f" Feb 17 17:29:32 crc kubenswrapper[4672]: I0217 17:29:32.979295 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tsjx5/crc-debug-z65w4" Feb 17 17:29:33 crc kubenswrapper[4672]: I0217 17:29:33.956789 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f199e20-c033-4d57-a271-a6515f704f1e" path="/var/lib/kubelet/pods/5f199e20-c033-4d57-a271-a6515f704f1e/volumes" Feb 17 17:29:34 crc kubenswrapper[4672]: I0217 17:29:34.964856 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-r69wx" Feb 17 17:29:34 crc kubenswrapper[4672]: I0217 17:29:34.965578 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-r69wx" Feb 17 17:29:35 crc kubenswrapper[4672]: E0217 17:29:35.949045 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:29:36 crc kubenswrapper[4672]: I0217 17:29:36.022049 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r69wx" podUID="28fdd128-5fa0-48a7-8a9b-a6cb1c84f780" containerName="registry-server" probeResult="failure" output=< Feb 17 17:29:36 crc kubenswrapper[4672]: timeout: failed to connect service ":50051" within 1s Feb 17 17:29:36 crc kubenswrapper[4672]: > Feb 17 17:29:41 crc kubenswrapper[4672]: I0217 17:29:41.952911 4672 scope.go:117] "RemoveContainer" containerID="56e6007a8201972c49b1432e5d22e2ef9faf1c48bcd6de061f8d78425bba9eaa" Feb 17 17:29:41 crc kubenswrapper[4672]: E0217 17:29:41.953461 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:29:41 crc kubenswrapper[4672]: E0217 17:29:41.953975 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:29:45 crc kubenswrapper[4672]: I0217 17:29:45.019505 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-r69wx" Feb 17 17:29:45 crc kubenswrapper[4672]: I0217 17:29:45.073945 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-r69wx" Feb 17 17:29:46 crc kubenswrapper[4672]: I0217 17:29:46.238673 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r69wx"] Feb 17 17:29:46 crc kubenswrapper[4672]: I0217 17:29:46.239103 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-r69wx" podUID="28fdd128-5fa0-48a7-8a9b-a6cb1c84f780" containerName="registry-server" containerID="cri-o://66c6ed300e5adcf3fc0da61e86b4fdfc4ec33a172b02903135e972eebca85291" gracePeriod=2 Feb 17 17:29:46 crc kubenswrapper[4672]: I0217 17:29:46.807847 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r69wx" Feb 17 17:29:46 crc kubenswrapper[4672]: I0217 17:29:46.947330 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28fdd128-5fa0-48a7-8a9b-a6cb1c84f780-catalog-content\") pod \"28fdd128-5fa0-48a7-8a9b-a6cb1c84f780\" (UID: \"28fdd128-5fa0-48a7-8a9b-a6cb1c84f780\") " Feb 17 17:29:46 crc kubenswrapper[4672]: I0217 17:29:46.947546 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwsg7\" (UniqueName: \"kubernetes.io/projected/28fdd128-5fa0-48a7-8a9b-a6cb1c84f780-kube-api-access-hwsg7\") pod \"28fdd128-5fa0-48a7-8a9b-a6cb1c84f780\" (UID: \"28fdd128-5fa0-48a7-8a9b-a6cb1c84f780\") " Feb 17 17:29:46 crc kubenswrapper[4672]: I0217 17:29:46.947725 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28fdd128-5fa0-48a7-8a9b-a6cb1c84f780-utilities\") pod \"28fdd128-5fa0-48a7-8a9b-a6cb1c84f780\" (UID: \"28fdd128-5fa0-48a7-8a9b-a6cb1c84f780\") " Feb 17 17:29:46 crc kubenswrapper[4672]: I0217 17:29:46.948552 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28fdd128-5fa0-48a7-8a9b-a6cb1c84f780-utilities" (OuterVolumeSpecName: "utilities") pod "28fdd128-5fa0-48a7-8a9b-a6cb1c84f780" (UID: "28fdd128-5fa0-48a7-8a9b-a6cb1c84f780"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:29:46 crc kubenswrapper[4672]: I0217 17:29:46.954457 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28fdd128-5fa0-48a7-8a9b-a6cb1c84f780-kube-api-access-hwsg7" (OuterVolumeSpecName: "kube-api-access-hwsg7") pod "28fdd128-5fa0-48a7-8a9b-a6cb1c84f780" (UID: "28fdd128-5fa0-48a7-8a9b-a6cb1c84f780"). InnerVolumeSpecName "kube-api-access-hwsg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:29:47 crc kubenswrapper[4672]: I0217 17:29:47.051985 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwsg7\" (UniqueName: \"kubernetes.io/projected/28fdd128-5fa0-48a7-8a9b-a6cb1c84f780-kube-api-access-hwsg7\") on node \"crc\" DevicePath \"\"" Feb 17 17:29:47 crc kubenswrapper[4672]: I0217 17:29:47.052242 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28fdd128-5fa0-48a7-8a9b-a6cb1c84f780-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:29:47 crc kubenswrapper[4672]: I0217 17:29:47.080495 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28fdd128-5fa0-48a7-8a9b-a6cb1c84f780-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "28fdd128-5fa0-48a7-8a9b-a6cb1c84f780" (UID: "28fdd128-5fa0-48a7-8a9b-a6cb1c84f780"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:29:47 crc kubenswrapper[4672]: I0217 17:29:47.141320 4672 generic.go:334] "Generic (PLEG): container finished" podID="28fdd128-5fa0-48a7-8a9b-a6cb1c84f780" containerID="66c6ed300e5adcf3fc0da61e86b4fdfc4ec33a172b02903135e972eebca85291" exitCode=0 Feb 17 17:29:47 crc kubenswrapper[4672]: I0217 17:29:47.141377 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r69wx" event={"ID":"28fdd128-5fa0-48a7-8a9b-a6cb1c84f780","Type":"ContainerDied","Data":"66c6ed300e5adcf3fc0da61e86b4fdfc4ec33a172b02903135e972eebca85291"} Feb 17 17:29:47 crc kubenswrapper[4672]: I0217 17:29:47.141397 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r69wx" Feb 17 17:29:47 crc kubenswrapper[4672]: I0217 17:29:47.141422 4672 scope.go:117] "RemoveContainer" containerID="66c6ed300e5adcf3fc0da61e86b4fdfc4ec33a172b02903135e972eebca85291" Feb 17 17:29:47 crc kubenswrapper[4672]: I0217 17:29:47.141411 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r69wx" event={"ID":"28fdd128-5fa0-48a7-8a9b-a6cb1c84f780","Type":"ContainerDied","Data":"280c801d99fb9d5446c51e6a75b726db548ebf6f5c8a1e6103f06369fded67a3"} Feb 17 17:29:47 crc kubenswrapper[4672]: I0217 17:29:47.155807 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28fdd128-5fa0-48a7-8a9b-a6cb1c84f780-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:29:47 crc kubenswrapper[4672]: I0217 17:29:47.187301 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r69wx"] Feb 17 17:29:47 crc kubenswrapper[4672]: I0217 17:29:47.187965 4672 scope.go:117] "RemoveContainer" containerID="1643b9164a2a4cc7fa9a871ae84298fa24db8495a858d10c2c89790708accb2f" Feb 17 17:29:47 crc kubenswrapper[4672]: I0217 17:29:47.201243 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-r69wx"] Feb 17 17:29:47 crc kubenswrapper[4672]: I0217 17:29:47.231757 4672 scope.go:117] "RemoveContainer" containerID="c835aa0395b9679a972221ad2be8c15770e60801b3e220a7711b938408b09c0b" Feb 17 17:29:47 crc kubenswrapper[4672]: I0217 17:29:47.266010 4672 scope.go:117] "RemoveContainer" containerID="66c6ed300e5adcf3fc0da61e86b4fdfc4ec33a172b02903135e972eebca85291" Feb 17 17:29:47 crc kubenswrapper[4672]: E0217 17:29:47.266556 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66c6ed300e5adcf3fc0da61e86b4fdfc4ec33a172b02903135e972eebca85291\": container with ID 
starting with 66c6ed300e5adcf3fc0da61e86b4fdfc4ec33a172b02903135e972eebca85291 not found: ID does not exist" containerID="66c6ed300e5adcf3fc0da61e86b4fdfc4ec33a172b02903135e972eebca85291" Feb 17 17:29:47 crc kubenswrapper[4672]: I0217 17:29:47.266588 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66c6ed300e5adcf3fc0da61e86b4fdfc4ec33a172b02903135e972eebca85291"} err="failed to get container status \"66c6ed300e5adcf3fc0da61e86b4fdfc4ec33a172b02903135e972eebca85291\": rpc error: code = NotFound desc = could not find container \"66c6ed300e5adcf3fc0da61e86b4fdfc4ec33a172b02903135e972eebca85291\": container with ID starting with 66c6ed300e5adcf3fc0da61e86b4fdfc4ec33a172b02903135e972eebca85291 not found: ID does not exist" Feb 17 17:29:47 crc kubenswrapper[4672]: I0217 17:29:47.266615 4672 scope.go:117] "RemoveContainer" containerID="1643b9164a2a4cc7fa9a871ae84298fa24db8495a858d10c2c89790708accb2f" Feb 17 17:29:47 crc kubenswrapper[4672]: E0217 17:29:47.266922 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1643b9164a2a4cc7fa9a871ae84298fa24db8495a858d10c2c89790708accb2f\": container with ID starting with 1643b9164a2a4cc7fa9a871ae84298fa24db8495a858d10c2c89790708accb2f not found: ID does not exist" containerID="1643b9164a2a4cc7fa9a871ae84298fa24db8495a858d10c2c89790708accb2f" Feb 17 17:29:47 crc kubenswrapper[4672]: I0217 17:29:47.266973 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1643b9164a2a4cc7fa9a871ae84298fa24db8495a858d10c2c89790708accb2f"} err="failed to get container status \"1643b9164a2a4cc7fa9a871ae84298fa24db8495a858d10c2c89790708accb2f\": rpc error: code = NotFound desc = could not find container \"1643b9164a2a4cc7fa9a871ae84298fa24db8495a858d10c2c89790708accb2f\": container with ID starting with 1643b9164a2a4cc7fa9a871ae84298fa24db8495a858d10c2c89790708accb2f not found: 
ID does not exist" Feb 17 17:29:47 crc kubenswrapper[4672]: I0217 17:29:47.267007 4672 scope.go:117] "RemoveContainer" containerID="c835aa0395b9679a972221ad2be8c15770e60801b3e220a7711b938408b09c0b" Feb 17 17:29:47 crc kubenswrapper[4672]: E0217 17:29:47.267343 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c835aa0395b9679a972221ad2be8c15770e60801b3e220a7711b938408b09c0b\": container with ID starting with c835aa0395b9679a972221ad2be8c15770e60801b3e220a7711b938408b09c0b not found: ID does not exist" containerID="c835aa0395b9679a972221ad2be8c15770e60801b3e220a7711b938408b09c0b" Feb 17 17:29:47 crc kubenswrapper[4672]: I0217 17:29:47.267369 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c835aa0395b9679a972221ad2be8c15770e60801b3e220a7711b938408b09c0b"} err="failed to get container status \"c835aa0395b9679a972221ad2be8c15770e60801b3e220a7711b938408b09c0b\": rpc error: code = NotFound desc = could not find container \"c835aa0395b9679a972221ad2be8c15770e60801b3e220a7711b938408b09c0b\": container with ID starting with c835aa0395b9679a972221ad2be8c15770e60801b3e220a7711b938408b09c0b not found: ID does not exist" Feb 17 17:29:47 crc kubenswrapper[4672]: E0217 17:29:47.946986 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:29:47 crc kubenswrapper[4672]: I0217 17:29:47.958606 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28fdd128-5fa0-48a7-8a9b-a6cb1c84f780" path="/var/lib/kubelet/pods/28fdd128-5fa0-48a7-8a9b-a6cb1c84f780/volumes" Feb 17 17:29:52 crc kubenswrapper[4672]: E0217 17:29:52.947828 4672 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:29:53 crc kubenswrapper[4672]: I0217 17:29:53.944953 4672 scope.go:117] "RemoveContainer" containerID="56e6007a8201972c49b1432e5d22e2ef9faf1c48bcd6de061f8d78425bba9eaa" Feb 17 17:29:53 crc kubenswrapper[4672]: E0217 17:29:53.945635 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:30:00 crc kubenswrapper[4672]: I0217 17:30:00.147779 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522490-bxkc8"] Feb 17 17:30:00 crc kubenswrapper[4672]: E0217 17:30:00.151852 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28fdd128-5fa0-48a7-8a9b-a6cb1c84f780" containerName="registry-server" Feb 17 17:30:00 crc kubenswrapper[4672]: I0217 17:30:00.151903 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="28fdd128-5fa0-48a7-8a9b-a6cb1c84f780" containerName="registry-server" Feb 17 17:30:00 crc kubenswrapper[4672]: E0217 17:30:00.151930 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28fdd128-5fa0-48a7-8a9b-a6cb1c84f780" containerName="extract-utilities" Feb 17 17:30:00 crc kubenswrapper[4672]: I0217 17:30:00.151939 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="28fdd128-5fa0-48a7-8a9b-a6cb1c84f780" 
containerName="extract-utilities" Feb 17 17:30:00 crc kubenswrapper[4672]: E0217 17:30:00.151963 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28fdd128-5fa0-48a7-8a9b-a6cb1c84f780" containerName="extract-content" Feb 17 17:30:00 crc kubenswrapper[4672]: I0217 17:30:00.151970 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="28fdd128-5fa0-48a7-8a9b-a6cb1c84f780" containerName="extract-content" Feb 17 17:30:00 crc kubenswrapper[4672]: E0217 17:30:00.151986 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f199e20-c033-4d57-a271-a6515f704f1e" containerName="container-00" Feb 17 17:30:00 crc kubenswrapper[4672]: I0217 17:30:00.151992 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f199e20-c033-4d57-a271-a6515f704f1e" containerName="container-00" Feb 17 17:30:00 crc kubenswrapper[4672]: I0217 17:30:00.152214 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f199e20-c033-4d57-a271-a6515f704f1e" containerName="container-00" Feb 17 17:30:00 crc kubenswrapper[4672]: I0217 17:30:00.152239 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="28fdd128-5fa0-48a7-8a9b-a6cb1c84f780" containerName="registry-server" Feb 17 17:30:00 crc kubenswrapper[4672]: I0217 17:30:00.168114 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522490-bxkc8"] Feb 17 17:30:00 crc kubenswrapper[4672]: I0217 17:30:00.168237 4672 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-bxkc8" Feb 17 17:30:00 crc kubenswrapper[4672]: I0217 17:30:00.176640 4672 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 17:30:00 crc kubenswrapper[4672]: I0217 17:30:00.177345 4672 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 17:30:00 crc kubenswrapper[4672]: I0217 17:30:00.236012 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41d227a6-00c2-4a20-89cd-aab98ad30545-config-volume\") pod \"collect-profiles-29522490-bxkc8\" (UID: \"41d227a6-00c2-4a20-89cd-aab98ad30545\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-bxkc8" Feb 17 17:30:00 crc kubenswrapper[4672]: I0217 17:30:00.236101 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgt8g\" (UniqueName: \"kubernetes.io/projected/41d227a6-00c2-4a20-89cd-aab98ad30545-kube-api-access-zgt8g\") pod \"collect-profiles-29522490-bxkc8\" (UID: \"41d227a6-00c2-4a20-89cd-aab98ad30545\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-bxkc8" Feb 17 17:30:00 crc kubenswrapper[4672]: I0217 17:30:00.236196 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/41d227a6-00c2-4a20-89cd-aab98ad30545-secret-volume\") pod \"collect-profiles-29522490-bxkc8\" (UID: \"41d227a6-00c2-4a20-89cd-aab98ad30545\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-bxkc8" Feb 17 17:30:00 crc kubenswrapper[4672]: I0217 17:30:00.338959 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/41d227a6-00c2-4a20-89cd-aab98ad30545-config-volume\") pod \"collect-profiles-29522490-bxkc8\" (UID: \"41d227a6-00c2-4a20-89cd-aab98ad30545\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-bxkc8" Feb 17 17:30:00 crc kubenswrapper[4672]: I0217 17:30:00.339107 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgt8g\" (UniqueName: \"kubernetes.io/projected/41d227a6-00c2-4a20-89cd-aab98ad30545-kube-api-access-zgt8g\") pod \"collect-profiles-29522490-bxkc8\" (UID: \"41d227a6-00c2-4a20-89cd-aab98ad30545\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-bxkc8" Feb 17 17:30:00 crc kubenswrapper[4672]: I0217 17:30:00.339218 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/41d227a6-00c2-4a20-89cd-aab98ad30545-secret-volume\") pod \"collect-profiles-29522490-bxkc8\" (UID: \"41d227a6-00c2-4a20-89cd-aab98ad30545\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-bxkc8" Feb 17 17:30:00 crc kubenswrapper[4672]: I0217 17:30:00.339955 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41d227a6-00c2-4a20-89cd-aab98ad30545-config-volume\") pod \"collect-profiles-29522490-bxkc8\" (UID: \"41d227a6-00c2-4a20-89cd-aab98ad30545\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-bxkc8" Feb 17 17:30:00 crc kubenswrapper[4672]: I0217 17:30:00.345722 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/41d227a6-00c2-4a20-89cd-aab98ad30545-secret-volume\") pod \"collect-profiles-29522490-bxkc8\" (UID: \"41d227a6-00c2-4a20-89cd-aab98ad30545\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-bxkc8" Feb 17 17:30:00 crc kubenswrapper[4672]: I0217 17:30:00.356853 4672 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgt8g\" (UniqueName: \"kubernetes.io/projected/41d227a6-00c2-4a20-89cd-aab98ad30545-kube-api-access-zgt8g\") pod \"collect-profiles-29522490-bxkc8\" (UID: \"41d227a6-00c2-4a20-89cd-aab98ad30545\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-bxkc8" Feb 17 17:30:00 crc kubenswrapper[4672]: I0217 17:30:00.494284 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-bxkc8" Feb 17 17:30:00 crc kubenswrapper[4672]: E0217 17:30:00.948046 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:30:01 crc kubenswrapper[4672]: W0217 17:30:01.007615 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41d227a6_00c2_4a20_89cd_aab98ad30545.slice/crio-9a70d24bb7199aee00f3d095250ca956923f2e783d3890bb29a2e64a4a647a6f WatchSource:0}: Error finding container 9a70d24bb7199aee00f3d095250ca956923f2e783d3890bb29a2e64a4a647a6f: Status 404 returned error can't find the container with id 9a70d24bb7199aee00f3d095250ca956923f2e783d3890bb29a2e64a4a647a6f Feb 17 17:30:01 crc kubenswrapper[4672]: I0217 17:30:01.008852 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522490-bxkc8"] Feb 17 17:30:01 crc kubenswrapper[4672]: I0217 17:30:01.357344 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-bxkc8" 
event={"ID":"41d227a6-00c2-4a20-89cd-aab98ad30545","Type":"ContainerStarted","Data":"560847775d835d0d69737c899e295ece5644e59bbe4a9f169735d5532a030c76"} Feb 17 17:30:01 crc kubenswrapper[4672]: I0217 17:30:01.357716 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-bxkc8" event={"ID":"41d227a6-00c2-4a20-89cd-aab98ad30545","Type":"ContainerStarted","Data":"9a70d24bb7199aee00f3d095250ca956923f2e783d3890bb29a2e64a4a647a6f"} Feb 17 17:30:01 crc kubenswrapper[4672]: I0217 17:30:01.379166 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-bxkc8" podStartSLOduration=1.379141489 podStartE2EDuration="1.379141489s" podCreationTimestamp="2026-02-17 17:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:30:01.370734754 +0000 UTC m=+5210.124823496" watchObservedRunningTime="2026-02-17 17:30:01.379141489 +0000 UTC m=+5210.133230241" Feb 17 17:30:02 crc kubenswrapper[4672]: I0217 17:30:02.367582 4672 generic.go:334] "Generic (PLEG): container finished" podID="41d227a6-00c2-4a20-89cd-aab98ad30545" containerID="560847775d835d0d69737c899e295ece5644e59bbe4a9f169735d5532a030c76" exitCode=0 Feb 17 17:30:02 crc kubenswrapper[4672]: I0217 17:30:02.367630 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-bxkc8" event={"ID":"41d227a6-00c2-4a20-89cd-aab98ad30545","Type":"ContainerDied","Data":"560847775d835d0d69737c899e295ece5644e59bbe4a9f169735d5532a030c76"} Feb 17 17:30:03 crc kubenswrapper[4672]: I0217 17:30:03.888427 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-bxkc8" Feb 17 17:30:04 crc kubenswrapper[4672]: I0217 17:30:04.027259 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41d227a6-00c2-4a20-89cd-aab98ad30545-config-volume\") pod \"41d227a6-00c2-4a20-89cd-aab98ad30545\" (UID: \"41d227a6-00c2-4a20-89cd-aab98ad30545\") " Feb 17 17:30:04 crc kubenswrapper[4672]: I0217 17:30:04.027383 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/41d227a6-00c2-4a20-89cd-aab98ad30545-secret-volume\") pod \"41d227a6-00c2-4a20-89cd-aab98ad30545\" (UID: \"41d227a6-00c2-4a20-89cd-aab98ad30545\") " Feb 17 17:30:04 crc kubenswrapper[4672]: I0217 17:30:04.027434 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgt8g\" (UniqueName: \"kubernetes.io/projected/41d227a6-00c2-4a20-89cd-aab98ad30545-kube-api-access-zgt8g\") pod \"41d227a6-00c2-4a20-89cd-aab98ad30545\" (UID: \"41d227a6-00c2-4a20-89cd-aab98ad30545\") " Feb 17 17:30:04 crc kubenswrapper[4672]: I0217 17:30:04.027981 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41d227a6-00c2-4a20-89cd-aab98ad30545-config-volume" (OuterVolumeSpecName: "config-volume") pod "41d227a6-00c2-4a20-89cd-aab98ad30545" (UID: "41d227a6-00c2-4a20-89cd-aab98ad30545"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:30:04 crc kubenswrapper[4672]: I0217 17:30:04.033042 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41d227a6-00c2-4a20-89cd-aab98ad30545-kube-api-access-zgt8g" (OuterVolumeSpecName: "kube-api-access-zgt8g") pod "41d227a6-00c2-4a20-89cd-aab98ad30545" (UID: "41d227a6-00c2-4a20-89cd-aab98ad30545"). 
InnerVolumeSpecName "kube-api-access-zgt8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:30:04 crc kubenswrapper[4672]: I0217 17:30:04.037361 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41d227a6-00c2-4a20-89cd-aab98ad30545-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "41d227a6-00c2-4a20-89cd-aab98ad30545" (UID: "41d227a6-00c2-4a20-89cd-aab98ad30545"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:30:04 crc kubenswrapper[4672]: I0217 17:30:04.129975 4672 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/41d227a6-00c2-4a20-89cd-aab98ad30545-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 17:30:04 crc kubenswrapper[4672]: I0217 17:30:04.130020 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgt8g\" (UniqueName: \"kubernetes.io/projected/41d227a6-00c2-4a20-89cd-aab98ad30545-kube-api-access-zgt8g\") on node \"crc\" DevicePath \"\"" Feb 17 17:30:04 crc kubenswrapper[4672]: I0217 17:30:04.130033 4672 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41d227a6-00c2-4a20-89cd-aab98ad30545-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 17:30:04 crc kubenswrapper[4672]: I0217 17:30:04.387656 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-bxkc8" event={"ID":"41d227a6-00c2-4a20-89cd-aab98ad30545","Type":"ContainerDied","Data":"9a70d24bb7199aee00f3d095250ca956923f2e783d3890bb29a2e64a4a647a6f"} Feb 17 17:30:04 crc kubenswrapper[4672]: I0217 17:30:04.387710 4672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a70d24bb7199aee00f3d095250ca956923f2e783d3890bb29a2e64a4a647a6f" Feb 17 17:30:04 crc kubenswrapper[4672]: I0217 17:30:04.387815 4672 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-bxkc8" Feb 17 17:30:04 crc kubenswrapper[4672]: I0217 17:30:04.448996 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522445-zzvc2"] Feb 17 17:30:04 crc kubenswrapper[4672]: I0217 17:30:04.458358 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522445-zzvc2"] Feb 17 17:30:05 crc kubenswrapper[4672]: E0217 17:30:05.950689 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:30:05 crc kubenswrapper[4672]: I0217 17:30:05.958907 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b62f278-9a4a-4cb6-b093-ca74d724e523" path="/var/lib/kubelet/pods/3b62f278-9a4a-4cb6-b093-ca74d724e523/volumes" Feb 17 17:30:06 crc kubenswrapper[4672]: I0217 17:30:06.945763 4672 scope.go:117] "RemoveContainer" containerID="56e6007a8201972c49b1432e5d22e2ef9faf1c48bcd6de061f8d78425bba9eaa" Feb 17 17:30:06 crc kubenswrapper[4672]: E0217 17:30:06.946355 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:30:11 crc kubenswrapper[4672]: E0217 17:30:11.959279 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:30:15 crc kubenswrapper[4672]: I0217 17:30:15.719618 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_91c936b2-eda8-4075-bcec-4c56d31cda1d/init-config-reloader/0.log" Feb 17 17:30:15 crc kubenswrapper[4672]: I0217 17:30:15.956870 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_91c936b2-eda8-4075-bcec-4c56d31cda1d/alertmanager/0.log" Feb 17 17:30:15 crc kubenswrapper[4672]: I0217 17:30:15.967045 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_91c936b2-eda8-4075-bcec-4c56d31cda1d/config-reloader/0.log" Feb 17 17:30:15 crc kubenswrapper[4672]: I0217 17:30:15.992475 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_91c936b2-eda8-4075-bcec-4c56d31cda1d/init-config-reloader/0.log" Feb 17 17:30:16 crc kubenswrapper[4672]: I0217 17:30:16.160367 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5ff45d6f4b-l6mqf_e2bd5a6e-90e9-487c-bc75-ee390f1f97c9/barbican-api-log/0.log" Feb 17 17:30:16 crc kubenswrapper[4672]: I0217 17:30:16.232656 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5ff45d6f4b-l6mqf_e2bd5a6e-90e9-487c-bc75-ee390f1f97c9/barbican-api/0.log" Feb 17 17:30:16 crc kubenswrapper[4672]: I0217 17:30:16.322696 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-647bb874b-c9skw_9e060962-a1f6-47fd-af63-b9b7f8bfd863/barbican-keystone-listener/0.log" Feb 17 17:30:16 crc kubenswrapper[4672]: I0217 17:30:16.427158 4672 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_barbican-keystone-listener-647bb874b-c9skw_9e060962-a1f6-47fd-af63-b9b7f8bfd863/barbican-keystone-listener-log/0.log" Feb 17 17:30:16 crc kubenswrapper[4672]: I0217 17:30:16.497685 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5498949f87-rqtb7_8359abf8-58ae-423d-9de3-b4488cffe247/barbican-worker/0.log" Feb 17 17:30:16 crc kubenswrapper[4672]: I0217 17:30:16.567464 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5498949f87-rqtb7_8359abf8-58ae-423d-9de3-b4488cffe247/barbican-worker-log/0.log" Feb 17 17:30:16 crc kubenswrapper[4672]: I0217 17:30:16.741157 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-8tnx6_fb402cd6-e885-4c1e-958a-cb731cdd4569/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:30:16 crc kubenswrapper[4672]: I0217 17:30:16.931996 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9e58ce9b-ddd5-42bb-8e07-08a22c8871a5/proxy-httpd/0.log" Feb 17 17:30:16 crc kubenswrapper[4672]: I0217 17:30:16.948234 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9e58ce9b-ddd5-42bb-8e07-08a22c8871a5/sg-core/0.log" Feb 17 17:30:16 crc kubenswrapper[4672]: I0217 17:30:16.965480 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9e58ce9b-ddd5-42bb-8e07-08a22c8871a5/ceilometer-notification-agent/0.log" Feb 17 17:30:17 crc kubenswrapper[4672]: I0217 17:30:17.141742 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_d024e6ad-924f-42a9-94e3-21cf7d00f62f/cinder-api/0.log" Feb 17 17:30:17 crc kubenswrapper[4672]: I0217 17:30:17.153180 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_d024e6ad-924f-42a9-94e3-21cf7d00f62f/cinder-api-log/0.log" Feb 17 17:30:17 crc kubenswrapper[4672]: I0217 
17:30:17.232403 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_61689860-63f3-424a-92e6-b5f0fd8d17b3/cinder-scheduler/0.log" Feb 17 17:30:17 crc kubenswrapper[4672]: I0217 17:30:17.414165 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_61689860-63f3-424a-92e6-b5f0fd8d17b3/probe/0.log" Feb 17 17:30:17 crc kubenswrapper[4672]: I0217 17:30:17.488197 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_4fb90291-b26f-465e-9f31-aa9336133b6b/cloudkitty-api/0.log" Feb 17 17:30:17 crc kubenswrapper[4672]: I0217 17:30:17.497309 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_4fb90291-b26f-465e-9f31-aa9336133b6b/cloudkitty-api-log/0.log" Feb 17 17:30:17 crc kubenswrapper[4672]: I0217 17:30:17.741305 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-compactor-0_e6cf604e-3c10-4dd3-b6a7-6e6126705e3c/loki-compactor/0.log" Feb 17 17:30:18 crc kubenswrapper[4672]: I0217 17:30:18.238109 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7f8685b49f-hszn6_b01fa86f-90fb-4b04-9bea-681cb6385a05/gateway/0.log" Feb 17 17:30:18 crc kubenswrapper[4672]: I0217 17:30:18.271389 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-distributor-585d9bcbc-gwjj7_2e52d03d-9616-4c46-b7c9-d090f4a43a93/loki-distributor/0.log" Feb 17 17:30:18 crc kubenswrapper[4672]: I0217 17:30:18.508149 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-index-gateway-0_73efd99e-65c1-4c17-90aa-562d35719b17/loki-index-gateway/0.log" Feb 17 17:30:18 crc kubenswrapper[4672]: I0217 17:30:18.509387 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7f8685b49f-zpb9n_41bcd30f-d987-4e6c-ab80-4bff10853442/gateway/0.log" Feb 17 
17:30:18 crc kubenswrapper[4672]: I0217 17:30:18.686366 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-ingester-0_3acacae4-cbf8-43e1-a2af-3e1bf95be39b/loki-ingester/0.log" Feb 17 17:30:18 crc kubenswrapper[4672]: I0217 17:30:18.767868 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-querier-58c84b5844-r97r4_7ce7f56b-68cd-42a8-bbfe-588269b90802/loki-querier/0.log" Feb 17 17:30:18 crc kubenswrapper[4672]: I0217 17:30:18.935573 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-query-frontend-67bb4dfcd8-87nzr_2dcda2dc-3e7d-45a5-b95e-cd4b5242b1cf/loki-query-frontend/0.log" Feb 17 17:30:19 crc kubenswrapper[4672]: I0217 17:30:19.268587 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7bb494c7f-zl6qj_d859d437-24f3-497a-96b0-6ccd5e0381b7/init/0.log" Feb 17 17:30:19 crc kubenswrapper[4672]: I0217 17:30:19.450010 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7bb494c7f-zl6qj_d859d437-24f3-497a-96b0-6ccd5e0381b7/init/0.log" Feb 17 17:30:19 crc kubenswrapper[4672]: I0217 17:30:19.478358 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7bb494c7f-zl6qj_d859d437-24f3-497a-96b0-6ccd5e0381b7/dnsmasq-dns/0.log" Feb 17 17:30:19 crc kubenswrapper[4672]: I0217 17:30:19.596430 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-5tvt6_16cbb615-75bb-4298-90e4-6490dd64dd01/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:30:19 crc kubenswrapper[4672]: I0217 17:30:19.782375 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-8gpjs_58598c29-6a4f-43a2-87b4-3247b3144660/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:30:19 crc kubenswrapper[4672]: E0217 
17:30:19.947406 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:30:19 crc kubenswrapper[4672]: I0217 17:30:19.949418 4672 scope.go:117] "RemoveContainer" containerID="56e6007a8201972c49b1432e5d22e2ef9faf1c48bcd6de061f8d78425bba9eaa" Feb 17 17:30:19 crc kubenswrapper[4672]: E0217 17:30:19.954447 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:30:19 crc kubenswrapper[4672]: I0217 17:30:19.980347 4672 scope.go:117] "RemoveContainer" containerID="00fa081013508a49f9a5a83672028d386d87720ba554d5ec9bde76fbb3bf7565" Feb 17 17:30:20 crc kubenswrapper[4672]: I0217 17:30:19.993944 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-fn8bm_015a71e3-cfc6-4bd6-bc90-2efce2db5885/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:30:20 crc kubenswrapper[4672]: I0217 17:30:20.160254 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-j4qcj_e25af450-196c-4035-96b6-5148862bca0d/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:30:20 crc kubenswrapper[4672]: I0217 17:30:20.334344 4672 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-ld2fq_6162f29e-528b-4131-9e8d-1391db930dd5/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:30:20 crc kubenswrapper[4672]: I0217 17:30:20.458787 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-nhgw2_945f70cb-9394-43c9-b44c-c6ef7d021f78/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:30:20 crc kubenswrapper[4672]: I0217 17:30:20.572144 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-zsr64_fcaca0dc-4760-43af-bc46-efcdc09d7164/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:30:20 crc kubenswrapper[4672]: I0217 17:30:20.689192 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d125baba-09b1-4d4e-9d09-d040ee9323d1/glance-log/0.log" Feb 17 17:30:20 crc kubenswrapper[4672]: I0217 17:30:20.731619 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d125baba-09b1-4d4e-9d09-d040ee9323d1/glance-httpd/0.log" Feb 17 17:30:20 crc kubenswrapper[4672]: I0217 17:30:20.873150 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_cb8b7b86-c10a-486b-aec2-87475a3af44f/glance-httpd/0.log" Feb 17 17:30:20 crc kubenswrapper[4672]: I0217 17:30:20.906030 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_cb8b7b86-c10a-486b-aec2-87475a3af44f/glance-log/0.log" Feb 17 17:30:21 crc kubenswrapper[4672]: I0217 17:30:21.149628 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29522461-d5dt4_73c49726-b590-48ff-a8a1-bdaa0683a643/keystone-cron/0.log" Feb 17 17:30:21 crc kubenswrapper[4672]: I0217 17:30:21.212415 4672 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-865bd5d96d-f924s_4d6379a6-6265-4eed-8c5c-cc4f8991bf7a/keystone-api/0.log" Feb 17 17:30:21 crc kubenswrapper[4672]: I0217 17:30:21.346444 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_fedb9fdf-2db2-4982-8136-b432cecd1f88/kube-state-metrics/0.log" Feb 17 17:30:21 crc kubenswrapper[4672]: I0217 17:30:21.831993 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-78b4dc5857-f54l5_4ddee357-262e-497b-aa02-4a2604fadc41/neutron-api/0.log" Feb 17 17:30:22 crc kubenswrapper[4672]: I0217 17:30:22.032004 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-78b4dc5857-f54l5_4ddee357-262e-497b-aa02-4a2604fadc41/neutron-httpd/0.log" Feb 17 17:30:22 crc kubenswrapper[4672]: I0217 17:30:22.441912 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_280f47cf-d6db-46e6-a9cf-6c2321f80d5d/nova-api-log/0.log" Feb 17 17:30:22 crc kubenswrapper[4672]: I0217 17:30:22.873469 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_280f47cf-d6db-46e6-a9cf-6c2321f80d5d/nova-api-api/0.log" Feb 17 17:30:22 crc kubenswrapper[4672]: E0217 17:30:22.948802 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:30:23 crc kubenswrapper[4672]: I0217 17:30:23.101042 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_62967d67-22e1-454a-a73c-a1d3fe95d08c/nova-cell0-conductor-conductor/0.log" Feb 17 17:30:23 crc kubenswrapper[4672]: I0217 17:30:23.382891 4672 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-conductor-0_58a809fa-9243-47fa-9b98-08932cdef54f/nova-cell1-conductor-conductor/0.log" Feb 17 17:30:23 crc kubenswrapper[4672]: I0217 17:30:23.564496 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_e2ee7683-5a7e-45f0-b14b-0b5ddb382eaa/nova-cell1-novncproxy-novncproxy/0.log" Feb 17 17:30:23 crc kubenswrapper[4672]: I0217 17:30:23.757960 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ec271161-02cd-4b97-925f-47e757c52e34/nova-metadata-log/0.log" Feb 17 17:30:24 crc kubenswrapper[4672]: I0217 17:30:24.104632 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_7bb6bc7b-85e2-4379-a509-edf2d9424951/nova-scheduler-scheduler/0.log" Feb 17 17:30:24 crc kubenswrapper[4672]: I0217 17:30:24.412301 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_164bb24e-646b-4404-92f5-912254ac1421/mysql-bootstrap/0.log" Feb 17 17:30:25 crc kubenswrapper[4672]: I0217 17:30:25.180740 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_164bb24e-646b-4404-92f5-912254ac1421/mysql-bootstrap/0.log" Feb 17 17:30:25 crc kubenswrapper[4672]: I0217 17:30:25.185813 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_164bb24e-646b-4404-92f5-912254ac1421/galera/0.log" Feb 17 17:30:25 crc kubenswrapper[4672]: I0217 17:30:25.345926 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-proc-0_8660afe8-86d8-4fff-9707-c67a3ad7f842/cloudkitty-proc/0.log" Feb 17 17:30:25 crc kubenswrapper[4672]: I0217 17:30:25.488856 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_322bd505-c790-49c2-8ffa-0cb97cf40d7c/mysql-bootstrap/0.log" Feb 17 17:30:25 crc kubenswrapper[4672]: I0217 17:30:25.646814 4672 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_322bd505-c790-49c2-8ffa-0cb97cf40d7c/mysql-bootstrap/0.log" Feb 17 17:30:25 crc kubenswrapper[4672]: I0217 17:30:25.754106 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_322bd505-c790-49c2-8ffa-0cb97cf40d7c/galera/0.log" Feb 17 17:30:25 crc kubenswrapper[4672]: I0217 17:30:25.884665 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_b02e419f-9426-4e56-9b4b-17ec702acb0a/openstackclient/0.log" Feb 17 17:30:25 crc kubenswrapper[4672]: I0217 17:30:25.996615 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-dw5sq_ce2af7d6-0f80-4c6a-90d5-89dba254991f/openstack-network-exporter/0.log" Feb 17 17:30:26 crc kubenswrapper[4672]: I0217 17:30:26.223313 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-8nlcn_a3267a9e-18a1-49f9-bda5-8dcb1467446a/ovsdb-server-init/0.log" Feb 17 17:30:26 crc kubenswrapper[4672]: I0217 17:30:26.292603 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ec271161-02cd-4b97-925f-47e757c52e34/nova-metadata-metadata/0.log" Feb 17 17:30:26 crc kubenswrapper[4672]: I0217 17:30:26.409612 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-8nlcn_a3267a9e-18a1-49f9-bda5-8dcb1467446a/ovsdb-server-init/0.log" Feb 17 17:30:26 crc kubenswrapper[4672]: I0217 17:30:26.485879 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-8nlcn_a3267a9e-18a1-49f9-bda5-8dcb1467446a/ovsdb-server/0.log" Feb 17 17:30:26 crc kubenswrapper[4672]: I0217 17:30:26.486744 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-8nlcn_a3267a9e-18a1-49f9-bda5-8dcb1467446a/ovs-vswitchd/0.log" Feb 17 17:30:26 crc kubenswrapper[4672]: I0217 17:30:26.710758 4672 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-q9cd6_12b377dd-1f13-4af0-81d6-635d39cc528c/ovn-controller/0.log" Feb 17 17:30:26 crc kubenswrapper[4672]: I0217 17:30:26.787760 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d24cbf47-ec62-44c7-8e9e-1c93a52aabbc/openstack-network-exporter/0.log" Feb 17 17:30:26 crc kubenswrapper[4672]: I0217 17:30:26.968990 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d24cbf47-ec62-44c7-8e9e-1c93a52aabbc/ovn-northd/0.log" Feb 17 17:30:27 crc kubenswrapper[4672]: I0217 17:30:27.004470 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_68117379-9c1b-497f-8d3a-39bddb5a76dc/openstack-network-exporter/0.log" Feb 17 17:30:27 crc kubenswrapper[4672]: I0217 17:30:27.052911 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_68117379-9c1b-497f-8d3a-39bddb5a76dc/ovsdbserver-nb/0.log" Feb 17 17:30:27 crc kubenswrapper[4672]: I0217 17:30:27.207703 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_44577c92-aff9-433c-aece-3021a8e85377/openstack-network-exporter/0.log" Feb 17 17:30:27 crc kubenswrapper[4672]: I0217 17:30:27.283851 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_44577c92-aff9-433c-aece-3021a8e85377/ovsdbserver-sb/0.log" Feb 17 17:30:27 crc kubenswrapper[4672]: I0217 17:30:27.496775 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6fd764cdf6-q8qss_d47f2556-58c0-4d11-b435-a08e06a11c76/placement-api/0.log" Feb 17 17:30:27 crc kubenswrapper[4672]: I0217 17:30:27.549977 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6fd764cdf6-q8qss_d47f2556-58c0-4d11-b435-a08e06a11c76/placement-log/0.log" Feb 17 17:30:27 crc kubenswrapper[4672]: I0217 17:30:27.643648 4672 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_prometheus-metric-storage-0_e1107a46-b916-4fe7-b4cc-a6576f242ec0/init-config-reloader/0.log"
Feb 17 17:30:27 crc kubenswrapper[4672]: I0217 17:30:27.898982 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_e1107a46-b916-4fe7-b4cc-a6576f242ec0/prometheus/0.log"
Feb 17 17:30:27 crc kubenswrapper[4672]: I0217 17:30:27.913337 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_e1107a46-b916-4fe7-b4cc-a6576f242ec0/thanos-sidecar/0.log"
Feb 17 17:30:27 crc kubenswrapper[4672]: I0217 17:30:27.945399 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_e1107a46-b916-4fe7-b4cc-a6576f242ec0/init-config-reloader/0.log"
Feb 17 17:30:27 crc kubenswrapper[4672]: I0217 17:30:27.967366 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_e1107a46-b916-4fe7-b4cc-a6576f242ec0/config-reloader/0.log"
Feb 17 17:30:28 crc kubenswrapper[4672]: I0217 17:30:28.375821 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9a73e2db-d320-4e3c-9412-02555a0a17eb/setup-container/0.log"
Feb 17 17:30:28 crc kubenswrapper[4672]: I0217 17:30:28.655988 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9a73e2db-d320-4e3c-9412-02555a0a17eb/rabbitmq/0.log"
Feb 17 17:30:28 crc kubenswrapper[4672]: I0217 17:30:28.668078 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_2da88232-8248-48fa-98e2-3220a17cc432/setup-container/0.log"
Feb 17 17:30:28 crc kubenswrapper[4672]: I0217 17:30:28.716299 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9a73e2db-d320-4e3c-9412-02555a0a17eb/setup-container/0.log"
Feb 17 17:30:28 crc kubenswrapper[4672]: I0217 17:30:28.959301 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_2da88232-8248-48fa-98e2-3220a17cc432/setup-container/0.log"
Feb 17 17:30:29 crc kubenswrapper[4672]: I0217 17:30:29.072161 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_2da88232-8248-48fa-98e2-3220a17cc432/rabbitmq/0.log"
Feb 17 17:30:29 crc kubenswrapper[4672]: I0217 17:30:29.101087 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-4dj4d_3efee2c1-0f1f-4611-ada4-055dac7d9bc5/redhat-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 17 17:30:29 crc kubenswrapper[4672]: I0217 17:30:29.205835 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-frdt6_dd8e4614-fb4d-4444-827f-659cffc613ea/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 17 17:30:29 crc kubenswrapper[4672]: I0217 17:30:29.491551 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-844c787c5c-l2cm9_9f2974b4-c465-4d64-b3b3-e79e4d1b74a2/proxy-server/0.log"
Feb 17 17:30:29 crc kubenswrapper[4672]: I0217 17:30:29.530850 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-844c787c5c-l2cm9_9f2974b4-c465-4d64-b3b3-e79e4d1b74a2/proxy-httpd/0.log"
Feb 17 17:30:29 crc kubenswrapper[4672]: I0217 17:30:29.640766 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-zrw2r_59695c47-0c8e-4e97-b04f-3200eb8efc42/swift-ring-rebalance/0.log"
Feb 17 17:30:29 crc kubenswrapper[4672]: I0217 17:30:29.829724 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6f82a4ce-8da0-40f1-996a-843302449a12/account-auditor/0.log"
Feb 17 17:30:29 crc kubenswrapper[4672]: I0217 17:30:29.912883 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6f82a4ce-8da0-40f1-996a-843302449a12/account-reaper/0.log"
Feb 17 17:30:30 crc kubenswrapper[4672]: I0217 17:30:30.046876 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6f82a4ce-8da0-40f1-996a-843302449a12/account-replicator/0.log"
Feb 17 17:30:30 crc kubenswrapper[4672]: I0217 17:30:30.097417 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6f82a4ce-8da0-40f1-996a-843302449a12/account-server/0.log"
Feb 17 17:30:30 crc kubenswrapper[4672]: I0217 17:30:30.151531 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6f82a4ce-8da0-40f1-996a-843302449a12/container-auditor/0.log"
Feb 17 17:30:30 crc kubenswrapper[4672]: I0217 17:30:30.216426 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6f82a4ce-8da0-40f1-996a-843302449a12/container-replicator/0.log"
Feb 17 17:30:30 crc kubenswrapper[4672]: I0217 17:30:30.277251 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6f82a4ce-8da0-40f1-996a-843302449a12/container-server/0.log"
Feb 17 17:30:30 crc kubenswrapper[4672]: I0217 17:30:30.323759 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6f82a4ce-8da0-40f1-996a-843302449a12/container-updater/0.log"
Feb 17 17:30:30 crc kubenswrapper[4672]: I0217 17:30:30.483695 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6f82a4ce-8da0-40f1-996a-843302449a12/object-expirer/0.log"
Feb 17 17:30:30 crc kubenswrapper[4672]: I0217 17:30:30.505340 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6f82a4ce-8da0-40f1-996a-843302449a12/object-auditor/0.log"
Feb 17 17:30:30 crc kubenswrapper[4672]: I0217 17:30:30.567938 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6f82a4ce-8da0-40f1-996a-843302449a12/object-replicator/0.log"
Feb 17 17:30:30 crc kubenswrapper[4672]: I0217 17:30:30.588807 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6f82a4ce-8da0-40f1-996a-843302449a12/object-server/0.log"
Feb 17 17:30:30 crc kubenswrapper[4672]: I0217 17:30:30.730636 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6f82a4ce-8da0-40f1-996a-843302449a12/object-updater/0.log"
Feb 17 17:30:30 crc kubenswrapper[4672]: I0217 17:30:30.778016 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6f82a4ce-8da0-40f1-996a-843302449a12/rsync/0.log"
Feb 17 17:30:30 crc kubenswrapper[4672]: I0217 17:30:30.855215 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_6f82a4ce-8da0-40f1-996a-843302449a12/swift-recon-cron/0.log"
Feb 17 17:30:32 crc kubenswrapper[4672]: I0217 17:30:32.945555 4672 scope.go:117] "RemoveContainer" containerID="56e6007a8201972c49b1432e5d22e2ef9faf1c48bcd6de061f8d78425bba9eaa"
Feb 17 17:30:32 crc kubenswrapper[4672]: E0217 17:30:32.946471 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 17:30:32 crc kubenswrapper[4672]: E0217 17:30:32.950643 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:30:36 crc kubenswrapper[4672]: I0217 17:30:36.046657 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_abbf2ccd-ce83-432b-9e9d-7f39d2483aee/memcached/0.log"
Feb 17 17:30:36 crc kubenswrapper[4672]: E0217 17:30:36.949010 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:30:44 crc kubenswrapper[4672]: E0217 17:30:44.947839 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:30:47 crc kubenswrapper[4672]: I0217 17:30:47.944782 4672 scope.go:117] "RemoveContainer" containerID="56e6007a8201972c49b1432e5d22e2ef9faf1c48bcd6de061f8d78425bba9eaa"
Feb 17 17:30:47 crc kubenswrapper[4672]: E0217 17:30:47.945498 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 17:30:48 crc kubenswrapper[4672]: E0217 17:30:48.947335 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:30:58 crc kubenswrapper[4672]: E0217 17:30:58.947790 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:30:59 crc kubenswrapper[4672]: I0217 17:30:59.994026 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933rddpx_9e1e9225-bf20-4ce6-ba45-7577a5616754/util/0.log"
Feb 17 17:31:00 crc kubenswrapper[4672]: I0217 17:31:00.171810 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933rddpx_9e1e9225-bf20-4ce6-ba45-7577a5616754/util/0.log"
Feb 17 17:31:00 crc kubenswrapper[4672]: I0217 17:31:00.185027 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933rddpx_9e1e9225-bf20-4ce6-ba45-7577a5616754/pull/0.log"
Feb 17 17:31:00 crc kubenswrapper[4672]: I0217 17:31:00.230981 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933rddpx_9e1e9225-bf20-4ce6-ba45-7577a5616754/pull/0.log"
Feb 17 17:31:00 crc kubenswrapper[4672]: I0217 17:31:00.929207 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933rddpx_9e1e9225-bf20-4ce6-ba45-7577a5616754/pull/0.log"
Feb 17 17:31:00 crc kubenswrapper[4672]: I0217 17:31:00.936342 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933rddpx_9e1e9225-bf20-4ce6-ba45-7577a5616754/extract/0.log"
Feb 17 17:31:00 crc kubenswrapper[4672]: I0217 17:31:00.939131 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1670403baf44144a237bba27b9a7f7bf09d0b81f1b06a7e5c0d7fc3933rddpx_9e1e9225-bf20-4ce6-ba45-7577a5616754/util/0.log"
Feb 17 17:31:01 crc kubenswrapper[4672]: I0217 17:31:01.391164 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-n8sch_8747c08b-53c8-45dc-98b0-124e58820cdb/manager/0.log"
Feb 17 17:31:01 crc kubenswrapper[4672]: I0217 17:31:01.721557 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-h75ck_fa9ca2ad-545b-4125-a472-0aa969f560fd/manager/0.log"
Feb 17 17:31:01 crc kubenswrapper[4672]: I0217 17:31:01.883186 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-kdl2h_246842cd-06e9-4793-96a5-9b0dad79ce08/manager/0.log"
Feb 17 17:31:01 crc kubenswrapper[4672]: I0217 17:31:01.944466 4672 scope.go:117] "RemoveContainer" containerID="56e6007a8201972c49b1432e5d22e2ef9faf1c48bcd6de061f8d78425bba9eaa"
Feb 17 17:31:01 crc kubenswrapper[4672]: E0217 17:31:01.944735 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 17:31:02 crc kubenswrapper[4672]: I0217 17:31:02.085733 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-9sv2x_f554f9dc-7778-4116-8d09-205f2c3671fd/manager/0.log"
Feb 17 17:31:02 crc kubenswrapper[4672]: E0217 17:31:02.950706 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:31:03 crc kubenswrapper[4672]: I0217 17:31:03.106191 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-4slcx_481c13a0-8cdd-4753-9bae-31d536cd4779/manager/0.log"
Feb 17 17:31:03 crc kubenswrapper[4672]: I0217 17:31:03.442252 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-bqchl_8396e964-bc62-4fe3-9a1e-b965b0ca30f5/manager/0.log"
Feb 17 17:31:03 crc kubenswrapper[4672]: I0217 17:31:03.707958 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-7tgjn_a97a493d-5f21-4965-b7ad-aff4cffcfb37/manager/0.log"
Feb 17 17:31:03 crc kubenswrapper[4672]: I0217 17:31:03.758448 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-4p9xd_697b2176-1abc-4887-9ba9-32e6e667a8a0/manager/0.log"
Feb 17 17:31:03 crc kubenswrapper[4672]: I0217 17:31:03.889806 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-bmw4m_ebef7502-75af-4d09-98eb-b3fbfb5bf0ad/manager/0.log"
Feb 17 17:31:04 crc kubenswrapper[4672]: I0217 17:31:04.007861 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-rlvlw_820e1fb1-9bbe-47e8-a2d5-6e45235244b4/manager/0.log"
Feb 17 17:31:04 crc kubenswrapper[4672]: I0217 17:31:04.211749 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-mw56t_a1ac6199-2cd8-48e9-9303-39fba36f1369/manager/0.log"
Feb 17 17:31:04 crc kubenswrapper[4672]: I0217 17:31:04.407501 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-gxfcs_ac8ba5c6-2841-4a02-8707-54be52de56f1/manager/0.log"
Feb 17 17:31:04 crc kubenswrapper[4672]: I0217 17:31:04.547380 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cvzfsn_cd50b560-8522-43e7-bbb9-10c5097ee367/manager/0.log"
Feb 17 17:31:05 crc kubenswrapper[4672]: I0217 17:31:05.123673 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5b4d8b9dd-hnxb4_4a8609a9-e813-46c0-ad2b-64d97ee6c368/operator/0.log"
Feb 17 17:31:05 crc kubenswrapper[4672]: I0217 17:31:05.345605 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-b9pq7_ab4aefbf-1133-48d6-af32-33ffaf8d787b/registry-server/0.log"
Feb 17 17:31:05 crc kubenswrapper[4672]: I0217 17:31:05.651871 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-ht2sv_9ea06f37-5c5b-45f1-b6ba-fb72f5e8f86a/manager/0.log"
Feb 17 17:31:05 crc kubenswrapper[4672]: I0217 17:31:05.856769 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-wck86_ec59ef76-a144-4870-b714-4ddaeae5b741/manager/0.log"
Feb 17 17:31:06 crc kubenswrapper[4672]: I0217 17:31:06.058194 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-wcmsp_92490ad7-6905-4c57-9d64-e7b1acbb44eb/operator/0.log"
Feb 17 17:31:06 crc kubenswrapper[4672]: I0217 17:31:06.303217 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-rjn5h_4d12b414-59e2-49aa-9463-ae2061b1aa80/manager/0.log"
Feb 17 17:31:06 crc kubenswrapper[4672]: I0217 17:31:06.751118 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-66554dbdcf-jm2nh_6be94508-f499-48af-b1c8-50a773fb53d1/manager/0.log"
Feb 17 17:31:06 crc kubenswrapper[4672]: I0217 17:31:06.987960 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-m45mw_72370045-528c-4239-8c6f-24f435b3736b/manager/0.log"
Feb 17 17:31:07 crc kubenswrapper[4672]: I0217 17:31:07.147180 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-x7798_8b28f180-8f69-4141-827f-2eb95e876b84/manager/0.log"
Feb 17 17:31:07 crc kubenswrapper[4672]: I0217 17:31:07.690759 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5d7c6cd576-c5g8f_761b3282-6d8a-4613-8191-fe2e37822d19/manager/0.log"
Feb 17 17:31:07 crc kubenswrapper[4672]: I0217 17:31:07.818774 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-6xtt5_c0c82835-0153-4ce1-be6a-9b748ced0671/manager/0.log"
Feb 17 17:31:11 crc kubenswrapper[4672]: E0217 17:31:11.956200 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:31:14 crc kubenswrapper[4672]: I0217 17:31:14.216757 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-67vt9_f18754ba-fbb5-4741-a801-03326fd7714d/manager/0.log"
Feb 17 17:31:14 crc kubenswrapper[4672]: I0217 17:31:14.945279 4672 scope.go:117] "RemoveContainer" containerID="56e6007a8201972c49b1432e5d22e2ef9faf1c48bcd6de061f8d78425bba9eaa"
Feb 17 17:31:14 crc kubenswrapper[4672]: E0217 17:31:14.945861 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 17:31:15 crc kubenswrapper[4672]: E0217 17:31:15.947438 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:31:22 crc kubenswrapper[4672]: E0217 17:31:22.947107 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:31:26 crc kubenswrapper[4672]: I0217 17:31:26.945349 4672 scope.go:117] "RemoveContainer" containerID="56e6007a8201972c49b1432e5d22e2ef9faf1c48bcd6de061f8d78425bba9eaa"
Feb 17 17:31:26 crc kubenswrapper[4672]: E0217 17:31:26.946238 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 17:31:29 crc kubenswrapper[4672]: E0217 17:31:29.947997 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:31:30 crc kubenswrapper[4672]: I0217 17:31:30.048639 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-g6m8r_e0bef061-3829-41ea-926f-058de4404865/control-plane-machine-set-operator/0.log"
Feb 17 17:31:30 crc kubenswrapper[4672]: I0217 17:31:30.810300 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-cdjsz_6119a50b-94a4-4095-b14c-f009fe646312/kube-rbac-proxy/0.log"
Feb 17 17:31:30 crc kubenswrapper[4672]: I0217 17:31:30.849933 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-cdjsz_6119a50b-94a4-4095-b14c-f009fe646312/machine-api-operator/0.log"
Feb 17 17:31:35 crc kubenswrapper[4672]: E0217 17:31:35.949938 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:31:40 crc kubenswrapper[4672]: I0217 17:31:40.945997 4672 scope.go:117] "RemoveContainer" containerID="56e6007a8201972c49b1432e5d22e2ef9faf1c48bcd6de061f8d78425bba9eaa"
Feb 17 17:31:40 crc kubenswrapper[4672]: E0217 17:31:40.946946 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 17:31:43 crc kubenswrapper[4672]: E0217 17:31:43.946929 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:31:44 crc kubenswrapper[4672]: I0217 17:31:44.102140 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-r7xjf_99c98563-db8b-4849-b06e-6d7bf6a08b69/cert-manager-controller/0.log"
Feb 17 17:31:44 crc kubenswrapper[4672]: I0217 17:31:44.224170 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-db29w_75f66eec-6844-429c-8168-33db45850fd9/cert-manager-cainjector/0.log"
Feb 17 17:31:44 crc kubenswrapper[4672]: I0217 17:31:44.319914 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-8grm4_e34e9fd6-0f58-4f41-a4ae-39f88ff43fac/cert-manager-webhook/0.log"
Feb 17 17:31:48 crc kubenswrapper[4672]: E0217 17:31:48.946851 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:31:52 crc kubenswrapper[4672]: I0217 17:31:52.945315 4672 scope.go:117] "RemoveContainer" containerID="56e6007a8201972c49b1432e5d22e2ef9faf1c48bcd6de061f8d78425bba9eaa"
Feb 17 17:31:52 crc kubenswrapper[4672]: E0217 17:31:52.945974 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 17:31:55 crc kubenswrapper[4672]: E0217 17:31:55.947000 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:31:56 crc kubenswrapper[4672]: I0217 17:31:56.340300 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-hfn6n_24078e98-6c8d-4bb5-a40f-2042ad57c490/nmstate-console-plugin/0.log"
Feb 17 17:31:56 crc kubenswrapper[4672]: I0217 17:31:56.544535 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-db8bf_19ebc984-d273-4d9e-9801-5e6b8d2c99b5/nmstate-handler/0.log"
Feb 17 17:31:56 crc kubenswrapper[4672]: I0217 17:31:56.640209 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-49cvf_150f899e-0d70-4d0b-8021-82aedb51ea0c/kube-rbac-proxy/0.log"
Feb 17 17:31:56 crc kubenswrapper[4672]: I0217 17:31:56.665018 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-49cvf_150f899e-0d70-4d0b-8021-82aedb51ea0c/nmstate-metrics/0.log"
Feb 17 17:31:56 crc kubenswrapper[4672]: I0217 17:31:56.834125 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-cpp57_38e0f7d0-a9d3-42f8-b1d9-fd4ef6a8e413/nmstate-operator/0.log"
Feb 17 17:31:56 crc kubenswrapper[4672]: I0217 17:31:56.886476 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-q4kl8_76662e89-70bf-4e3e-8fd4-df5f7af9c24f/nmstate-webhook/0.log"
Feb 17 17:32:01 crc kubenswrapper[4672]: E0217 17:32:01.953390 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:32:06 crc kubenswrapper[4672]: I0217 17:32:06.945424 4672 scope.go:117] "RemoveContainer" containerID="56e6007a8201972c49b1432e5d22e2ef9faf1c48bcd6de061f8d78425bba9eaa"
Feb 17 17:32:06 crc kubenswrapper[4672]: E0217 17:32:06.946210 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 17:32:09 crc kubenswrapper[4672]: E0217 17:32:09.947937 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:32:11 crc kubenswrapper[4672]: I0217 17:32:11.009597 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5698c87bb7-6twv2_de56b787-401f-4eea-b171-484eb364fbe8/kube-rbac-proxy/0.log"
Feb 17 17:32:11 crc kubenswrapper[4672]: I0217 17:32:11.066301 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5698c87bb7-6twv2_de56b787-401f-4eea-b171-484eb364fbe8/manager/0.log"
Feb 17 17:32:12 crc kubenswrapper[4672]: E0217 17:32:12.947613 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:32:19 crc kubenswrapper[4672]: I0217 17:32:19.945467 4672 scope.go:117] "RemoveContainer" containerID="56e6007a8201972c49b1432e5d22e2ef9faf1c48bcd6de061f8d78425bba9eaa"
Feb 17 17:32:19 crc kubenswrapper[4672]: E0217 17:32:19.946363 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 17:32:22 crc kubenswrapper[4672]: E0217 17:32:22.947242 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:32:24 crc kubenswrapper[4672]: E0217 17:32:24.947388 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:32:26 crc kubenswrapper[4672]: I0217 17:32:26.143174 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-fkmc6_78d09a7b-94c0-4d04-a640-a67a065a6aff/prometheus-operator/0.log"
Feb 17 17:32:26 crc kubenswrapper[4672]: I0217 17:32:26.344641 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp_043e8cc1-abfc-4d57-89b8-4d26da7b8a83/prometheus-operator-admission-webhook/0.log"
Feb 17 17:32:26 crc kubenswrapper[4672]: I0217 17:32:26.454402 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56_de79ed15-243f-4c2a-a09f-b94c69734b33/prometheus-operator-admission-webhook/0.log"
Feb 17 17:32:26 crc kubenswrapper[4672]: I0217 17:32:26.592004 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-gkrbk_527318fe-5c99-481d-910e-0e0973f7748b/operator/0.log"
Feb 17 17:32:26 crc kubenswrapper[4672]: I0217 17:32:26.650348 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-f86bw_bf60e2ef-36ff-47b1-94c3-58db8c9a4e40/perses-operator/0.log"
Feb 17 17:32:33 crc kubenswrapper[4672]: I0217 17:32:33.944823 4672 scope.go:117] "RemoveContainer" containerID="56e6007a8201972c49b1432e5d22e2ef9faf1c48bcd6de061f8d78425bba9eaa"
Feb 17 17:32:34 crc kubenswrapper[4672]: I0217 17:32:34.883667 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerStarted","Data":"8d597cc8ff492e2c5a82f2b6824b54ff748acbefe4d8679fd3078b7cfdc4aea5"}
Feb 17 17:32:35 crc kubenswrapper[4672]: E0217 17:32:35.947102 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:32:38 crc kubenswrapper[4672]: E0217 17:32:38.948319 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:32:44 crc kubenswrapper[4672]: I0217 17:32:44.078864 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-l8jng_7497b865-7479-4d35-97da-3d333bc26d66/kube-rbac-proxy/0.log"
Feb 17 17:32:44 crc kubenswrapper[4672]: I0217 17:32:44.288494 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-l8jng_7497b865-7479-4d35-97da-3d333bc26d66/controller/0.log"
Feb 17 17:32:44 crc kubenswrapper[4672]: I0217 17:32:44.315265 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gcmvn_fd5a2c9d-3e7b-4525-a730-efd640c47fc6/cp-frr-files/0.log"
Feb 17 17:32:44 crc kubenswrapper[4672]: I0217 17:32:44.649660 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gcmvn_fd5a2c9d-3e7b-4525-a730-efd640c47fc6/cp-reloader/0.log"
Feb 17 17:32:44 crc kubenswrapper[4672]: I0217 17:32:44.667534 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gcmvn_fd5a2c9d-3e7b-4525-a730-efd640c47fc6/cp-reloader/0.log"
Feb 17 17:32:44 crc kubenswrapper[4672]: I0217 17:32:44.688802 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gcmvn_fd5a2c9d-3e7b-4525-a730-efd640c47fc6/cp-frr-files/0.log"
Feb 17 17:32:44 crc kubenswrapper[4672]: I0217 17:32:44.707223 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gcmvn_fd5a2c9d-3e7b-4525-a730-efd640c47fc6/cp-metrics/0.log"
Feb 17 17:32:44 crc kubenswrapper[4672]: I0217 17:32:44.983316 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gcmvn_fd5a2c9d-3e7b-4525-a730-efd640c47fc6/cp-metrics/0.log"
Feb 17 17:32:44 crc kubenswrapper[4672]: I0217 17:32:44.983335 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gcmvn_fd5a2c9d-3e7b-4525-a730-efd640c47fc6/cp-frr-files/0.log"
Feb 17 17:32:45 crc kubenswrapper[4672]: I0217 17:32:45.000675 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gcmvn_fd5a2c9d-3e7b-4525-a730-efd640c47fc6/cp-metrics/0.log"
Feb 17 17:32:45 crc kubenswrapper[4672]: I0217 17:32:45.011862 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gcmvn_fd5a2c9d-3e7b-4525-a730-efd640c47fc6/cp-reloader/0.log"
Feb 17 17:32:45 crc kubenswrapper[4672]: I0217 17:32:45.173163 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gcmvn_fd5a2c9d-3e7b-4525-a730-efd640c47fc6/cp-frr-files/0.log"
Feb 17 17:32:45 crc kubenswrapper[4672]: I0217 17:32:45.237808 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gcmvn_fd5a2c9d-3e7b-4525-a730-efd640c47fc6/cp-reloader/0.log"
Feb 17 17:32:45 crc kubenswrapper[4672]: I0217 17:32:45.245621 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gcmvn_fd5a2c9d-3e7b-4525-a730-efd640c47fc6/cp-metrics/0.log"
Feb 17 17:32:45 crc kubenswrapper[4672]: I0217 17:32:45.260467 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gcmvn_fd5a2c9d-3e7b-4525-a730-efd640c47fc6/controller/0.log"
Feb 17 17:32:45 crc kubenswrapper[4672]: I0217 17:32:45.720252 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gcmvn_fd5a2c9d-3e7b-4525-a730-efd640c47fc6/frr-metrics/0.log"
Feb 17 17:32:45 crc kubenswrapper[4672]: I0217 17:32:45.759438 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gcmvn_fd5a2c9d-3e7b-4525-a730-efd640c47fc6/kube-rbac-proxy-frr/0.log"
Feb 17 17:32:45 crc kubenswrapper[4672]: I0217 17:32:45.764404 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gcmvn_fd5a2c9d-3e7b-4525-a730-efd640c47fc6/kube-rbac-proxy/0.log"
Feb 17 17:32:45 crc kubenswrapper[4672]: I0217 17:32:45.986496 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gcmvn_fd5a2c9d-3e7b-4525-a730-efd640c47fc6/reloader/0.log"
Feb 17 17:32:46 crc kubenswrapper[4672]: I0217 17:32:46.036106 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-p85jn_9d4185ee-4bef-46e2-abf0-088c934361f2/frr-k8s-webhook-server/0.log"
Feb 17 17:32:46 crc kubenswrapper[4672]: I0217 17:32:46.243211 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-65789c999f-rcpdd_79c462ff-317c-49da-a2db-9e6039c136a7/manager/0.log"
Feb 17 17:32:46 crc kubenswrapper[4672]: I0217 17:32:46.455893 4672 log.go:25] "Finished parsing log file"
path="/var/log/pods/metallb-system_metallb-operator-webhook-server-57c9dfc97-nmggd_4cc21a41-d48f-44f6-adfc-7d88c1e6c4b3/webhook-server/0.log" Feb 17 17:32:46 crc kubenswrapper[4672]: I0217 17:32:46.554619 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-k5j8q_c4ecaffa-63e8-4b49-9274-3e8f715b7d7b/kube-rbac-proxy/0.log" Feb 17 17:32:47 crc kubenswrapper[4672]: I0217 17:32:47.310791 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-k5j8q_c4ecaffa-63e8-4b49-9274-3e8f715b7d7b/speaker/0.log" Feb 17 17:32:47 crc kubenswrapper[4672]: I0217 17:32:47.407704 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gcmvn_fd5a2c9d-3e7b-4525-a730-efd640c47fc6/frr/0.log" Feb 17 17:32:48 crc kubenswrapper[4672]: E0217 17:32:48.947533 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:32:51 crc kubenswrapper[4672]: E0217 17:32:51.957080 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:33:00 crc kubenswrapper[4672]: E0217 17:33:00.947439 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" 
podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:33:02 crc kubenswrapper[4672]: I0217 17:33:02.520630 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651pp5ks_d7125f42-e466-4a0e-af16-ed09a82f07be/util/0.log" Feb 17 17:33:02 crc kubenswrapper[4672]: I0217 17:33:02.734488 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651pp5ks_d7125f42-e466-4a0e-af16-ed09a82f07be/util/0.log" Feb 17 17:33:02 crc kubenswrapper[4672]: I0217 17:33:02.808858 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651pp5ks_d7125f42-e466-4a0e-af16-ed09a82f07be/pull/0.log" Feb 17 17:33:02 crc kubenswrapper[4672]: I0217 17:33:02.814203 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651pp5ks_d7125f42-e466-4a0e-af16-ed09a82f07be/pull/0.log" Feb 17 17:33:03 crc kubenswrapper[4672]: I0217 17:33:03.400799 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651pp5ks_d7125f42-e466-4a0e-af16-ed09a82f07be/pull/0.log" Feb 17 17:33:03 crc kubenswrapper[4672]: I0217 17:33:03.412882 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651pp5ks_d7125f42-e466-4a0e-af16-ed09a82f07be/util/0.log" Feb 17 17:33:03 crc kubenswrapper[4672]: I0217 17:33:03.423404 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651pp5ks_d7125f42-e466-4a0e-af16-ed09a82f07be/extract/0.log" Feb 17 17:33:03 crc kubenswrapper[4672]: I0217 17:33:03.604539 4672 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086wb2z_6e724011-a0fa-4eb7-a10b-8199435d4478/util/0.log" Feb 17 17:33:03 crc kubenswrapper[4672]: I0217 17:33:03.787460 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086wb2z_6e724011-a0fa-4eb7-a10b-8199435d4478/pull/0.log" Feb 17 17:33:03 crc kubenswrapper[4672]: I0217 17:33:03.792771 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086wb2z_6e724011-a0fa-4eb7-a10b-8199435d4478/pull/0.log" Feb 17 17:33:03 crc kubenswrapper[4672]: I0217 17:33:03.821145 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086wb2z_6e724011-a0fa-4eb7-a10b-8199435d4478/util/0.log" Feb 17 17:33:03 crc kubenswrapper[4672]: I0217 17:33:03.993006 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086wb2z_6e724011-a0fa-4eb7-a10b-8199435d4478/extract/0.log" Feb 17 17:33:04 crc kubenswrapper[4672]: I0217 17:33:04.006678 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086wb2z_6e724011-a0fa-4eb7-a10b-8199435d4478/util/0.log" Feb 17 17:33:04 crc kubenswrapper[4672]: I0217 17:33:04.014405 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086wb2z_6e724011-a0fa-4eb7-a10b-8199435d4478/pull/0.log" Feb 17 17:33:04 crc kubenswrapper[4672]: I0217 17:33:04.199297 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jq27p_382b3125-0f90-40a4-91f2-28ca8ac0e894/util/0.log" Feb 17 
17:33:04 crc kubenswrapper[4672]: I0217 17:33:04.359888 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jq27p_382b3125-0f90-40a4-91f2-28ca8ac0e894/pull/0.log" Feb 17 17:33:04 crc kubenswrapper[4672]: I0217 17:33:04.428498 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jq27p_382b3125-0f90-40a4-91f2-28ca8ac0e894/pull/0.log" Feb 17 17:33:04 crc kubenswrapper[4672]: I0217 17:33:04.438250 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jq27p_382b3125-0f90-40a4-91f2-28ca8ac0e894/util/0.log" Feb 17 17:33:04 crc kubenswrapper[4672]: I0217 17:33:04.637741 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jq27p_382b3125-0f90-40a4-91f2-28ca8ac0e894/extract/0.log" Feb 17 17:33:04 crc kubenswrapper[4672]: I0217 17:33:04.648317 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jq27p_382b3125-0f90-40a4-91f2-28ca8ac0e894/util/0.log" Feb 17 17:33:04 crc kubenswrapper[4672]: I0217 17:33:04.727333 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jq27p_382b3125-0f90-40a4-91f2-28ca8ac0e894/pull/0.log" Feb 17 17:33:04 crc kubenswrapper[4672]: I0217 17:33:04.874875 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hc52r_acbcb77a-c8a8-4ec1-80ab-727db7919906/extract-utilities/0.log" Feb 17 17:33:05 crc kubenswrapper[4672]: I0217 17:33:05.098296 4672 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-hc52r_acbcb77a-c8a8-4ec1-80ab-727db7919906/extract-utilities/0.log" Feb 17 17:33:05 crc kubenswrapper[4672]: I0217 17:33:05.103549 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hc52r_acbcb77a-c8a8-4ec1-80ab-727db7919906/extract-content/0.log" Feb 17 17:33:05 crc kubenswrapper[4672]: I0217 17:33:05.116992 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hc52r_acbcb77a-c8a8-4ec1-80ab-727db7919906/extract-content/0.log" Feb 17 17:33:05 crc kubenswrapper[4672]: I0217 17:33:05.370540 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hc52r_acbcb77a-c8a8-4ec1-80ab-727db7919906/extract-utilities/0.log" Feb 17 17:33:05 crc kubenswrapper[4672]: I0217 17:33:05.459824 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hc52r_acbcb77a-c8a8-4ec1-80ab-727db7919906/extract-content/0.log" Feb 17 17:33:05 crc kubenswrapper[4672]: I0217 17:33:05.902730 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xgg8j_7a4859b6-916a-4d5b-beac-e8bb32161f6a/extract-utilities/0.log" Feb 17 17:33:05 crc kubenswrapper[4672]: E0217 17:33:05.947142 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:33:06 crc kubenswrapper[4672]: I0217 17:33:06.051215 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hc52r_acbcb77a-c8a8-4ec1-80ab-727db7919906/registry-server/0.log" Feb 17 17:33:06 crc kubenswrapper[4672]: I0217 
17:33:06.056163 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xgg8j_7a4859b6-916a-4d5b-beac-e8bb32161f6a/extract-utilities/0.log" Feb 17 17:33:06 crc kubenswrapper[4672]: I0217 17:33:06.118080 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xgg8j_7a4859b6-916a-4d5b-beac-e8bb32161f6a/extract-content/0.log" Feb 17 17:33:06 crc kubenswrapper[4672]: I0217 17:33:06.148186 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xgg8j_7a4859b6-916a-4d5b-beac-e8bb32161f6a/extract-content/0.log" Feb 17 17:33:06 crc kubenswrapper[4672]: I0217 17:33:06.323243 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xgg8j_7a4859b6-916a-4d5b-beac-e8bb32161f6a/extract-content/0.log" Feb 17 17:33:06 crc kubenswrapper[4672]: I0217 17:33:06.333541 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xgg8j_7a4859b6-916a-4d5b-beac-e8bb32161f6a/extract-utilities/0.log" Feb 17 17:33:06 crc kubenswrapper[4672]: I0217 17:33:06.483789 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatll9b_100d404a-cfba-4360-bd59-0d74afc68e40/util/0.log" Feb 17 17:33:06 crc kubenswrapper[4672]: I0217 17:33:06.731902 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatll9b_100d404a-cfba-4360-bd59-0d74afc68e40/util/0.log" Feb 17 17:33:06 crc kubenswrapper[4672]: I0217 17:33:06.992347 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xgg8j_7a4859b6-916a-4d5b-beac-e8bb32161f6a/registry-server/0.log" Feb 17 17:33:06 crc kubenswrapper[4672]: I0217 17:33:06.994968 4672 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatll9b_100d404a-cfba-4360-bd59-0d74afc68e40/pull/0.log" Feb 17 17:33:06 crc kubenswrapper[4672]: I0217 17:33:06.997618 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatll9b_100d404a-cfba-4360-bd59-0d74afc68e40/pull/0.log" Feb 17 17:33:07 crc kubenswrapper[4672]: I0217 17:33:07.234582 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatll9b_100d404a-cfba-4360-bd59-0d74afc68e40/util/0.log" Feb 17 17:33:07 crc kubenswrapper[4672]: I0217 17:33:07.238720 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatll9b_100d404a-cfba-4360-bd59-0d74afc68e40/pull/0.log" Feb 17 17:33:07 crc kubenswrapper[4672]: I0217 17:33:07.277313 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-ctrp5_c41c2d4d-2194-4562-97e0-69f36cf4007f/marketplace-operator/0.log" Feb 17 17:33:07 crc kubenswrapper[4672]: I0217 17:33:07.342045 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatll9b_100d404a-cfba-4360-bd59-0d74afc68e40/extract/0.log" Feb 17 17:33:07 crc kubenswrapper[4672]: I0217 17:33:07.428737 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9vqqf_206bc097-eb65-4755-89c7-4e230efa5224/extract-utilities/0.log" Feb 17 17:33:07 crc kubenswrapper[4672]: I0217 17:33:07.632569 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9vqqf_206bc097-eb65-4755-89c7-4e230efa5224/extract-content/0.log" Feb 17 17:33:07 crc kubenswrapper[4672]: I0217 17:33:07.642691 4672 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9vqqf_206bc097-eb65-4755-89c7-4e230efa5224/extract-utilities/0.log" Feb 17 17:33:07 crc kubenswrapper[4672]: I0217 17:33:07.696615 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9vqqf_206bc097-eb65-4755-89c7-4e230efa5224/extract-content/0.log" Feb 17 17:33:07 crc kubenswrapper[4672]: I0217 17:33:07.848452 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9vqqf_206bc097-eb65-4755-89c7-4e230efa5224/extract-utilities/0.log" Feb 17 17:33:07 crc kubenswrapper[4672]: I0217 17:33:07.876312 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9vqqf_206bc097-eb65-4755-89c7-4e230efa5224/extract-content/0.log" Feb 17 17:33:07 crc kubenswrapper[4672]: I0217 17:33:07.968614 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bxph5_972d555a-9790-4a25-aa88-5ab896b52f5c/extract-utilities/0.log" Feb 17 17:33:08 crc kubenswrapper[4672]: I0217 17:33:08.021539 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9vqqf_206bc097-eb65-4755-89c7-4e230efa5224/registry-server/0.log" Feb 17 17:33:08 crc kubenswrapper[4672]: I0217 17:33:08.180700 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bxph5_972d555a-9790-4a25-aa88-5ab896b52f5c/extract-content/0.log" Feb 17 17:33:08 crc kubenswrapper[4672]: I0217 17:33:08.211344 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bxph5_972d555a-9790-4a25-aa88-5ab896b52f5c/extract-utilities/0.log" Feb 17 17:33:08 crc kubenswrapper[4672]: I0217 17:33:08.224376 4672 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-bxph5_972d555a-9790-4a25-aa88-5ab896b52f5c/extract-content/0.log" Feb 17 17:33:08 crc kubenswrapper[4672]: I0217 17:33:08.403333 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bxph5_972d555a-9790-4a25-aa88-5ab896b52f5c/extract-utilities/0.log" Feb 17 17:33:08 crc kubenswrapper[4672]: I0217 17:33:08.413033 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bxph5_972d555a-9790-4a25-aa88-5ab896b52f5c/extract-content/0.log" Feb 17 17:33:09 crc kubenswrapper[4672]: I0217 17:33:09.097502 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bxph5_972d555a-9790-4a25-aa88-5ab896b52f5c/registry-server/0.log" Feb 17 17:33:12 crc kubenswrapper[4672]: E0217 17:33:12.946993 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:33:19 crc kubenswrapper[4672]: E0217 17:33:19.947445 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:33:23 crc kubenswrapper[4672]: I0217 17:33:23.955747 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6c4fff644f-kjr56_de79ed15-243f-4c2a-a09f-b94c69734b33/prometheus-operator-admission-webhook/0.log" Feb 17 17:33:24 crc kubenswrapper[4672]: I0217 17:33:24.018370 4672 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-gkrbk_527318fe-5c99-481d-910e-0e0973f7748b/operator/0.log" Feb 17 17:33:24 crc kubenswrapper[4672]: I0217 17:33:24.020827 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-fkmc6_78d09a7b-94c0-4d04-a640-a67a065a6aff/prometheus-operator/0.log" Feb 17 17:33:24 crc kubenswrapper[4672]: I0217 17:33:24.031128 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6c4fff644f-bctdp_043e8cc1-abfc-4d57-89b8-4d26da7b8a83/prometheus-operator-admission-webhook/0.log" Feb 17 17:33:24 crc kubenswrapper[4672]: I0217 17:33:24.208497 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-f86bw_bf60e2ef-36ff-47b1-94c3-58db8c9a4e40/perses-operator/0.log" Feb 17 17:33:26 crc kubenswrapper[4672]: E0217 17:33:26.949680 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:33:34 crc kubenswrapper[4672]: E0217 17:33:34.946723 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:33:38 crc kubenswrapper[4672]: I0217 17:33:38.254379 4672 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5698c87bb7-6twv2_de56b787-401f-4eea-b171-484eb364fbe8/kube-rbac-proxy/0.log" Feb 17 17:33:38 crc kubenswrapper[4672]: I0217 17:33:38.270076 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5698c87bb7-6twv2_de56b787-401f-4eea-b171-484eb364fbe8/manager/0.log" Feb 17 17:33:39 crc kubenswrapper[4672]: E0217 17:33:39.947663 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:33:45 crc kubenswrapper[4672]: E0217 17:33:45.948182 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:33:49 crc kubenswrapper[4672]: E0217 17:33:49.361831 4672 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.46:41936->38.102.83.46:42007: write tcp 38.102.83.46:41936->38.102.83.46:42007: write: broken pipe Feb 17 17:33:51 crc kubenswrapper[4672]: E0217 17:33:51.120184 4672 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.46:42152->38.102.83.46:42007: write tcp 38.102.83.46:42152->38.102.83.46:42007: write: broken pipe Feb 17 17:33:52 crc kubenswrapper[4672]: E0217 17:33:52.946633 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:33:56 crc kubenswrapper[4672]: E0217 17:33:56.947208 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:34:04 crc kubenswrapper[4672]: E0217 17:34:04.948336 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:34:07 crc kubenswrapper[4672]: E0217 17:34:07.952037 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:34:17 crc kubenswrapper[4672]: E0217 17:34:17.947478 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:34:20 crc kubenswrapper[4672]: E0217 17:34:20.947134 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:34:32 crc kubenswrapper[4672]: I0217 17:34:32.947592 4672 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 17:34:33 crc kubenswrapper[4672]: E0217 17:34:33.077538 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 17:34:33 crc kubenswrapper[4672]: E0217 17:34:33.077612 4672 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 17:34:33 crc kubenswrapper[4672]: E0217 17:34:33.077771 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nq9ps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:n
il,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-qrhj8_openstack(dc5471f5-2491-4841-be45-09c8f14b35c0): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:34:33 crc kubenswrapper[4672]: E0217 17:34:33.079281 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:34:36 crc kubenswrapper[4672]: E0217 17:34:36.053182 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:34:36 crc kubenswrapper[4672]: E0217 17:34:36.053692 4672 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:34:36 crc kubenswrapper[4672]: E0217 17:34:36.054566 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66h7h644h64ch5f8h565hfch5dh56chfdh8hfdh5b5h567h6dh665h557h74h665hcbh96h659h554h589h57fh5d9h55h564hcfh5dhffhfdq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tx4bs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(9e58ce9b-ddd5-42bb-8e07-08a22c8871a5): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:34:36 crc kubenswrapper[4672]: E0217 17:34:36.055894 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:34:45 crc kubenswrapper[4672]: E0217 17:34:45.949708 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:34:50 crc kubenswrapper[4672]: E0217 17:34:50.948163 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:34:57 crc kubenswrapper[4672]: I0217 17:34:57.566445 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:34:57 crc kubenswrapper[4672]: I0217 17:34:57.568341 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:34:57 crc kubenswrapper[4672]: E0217 17:34:57.947152 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:35:01 crc kubenswrapper[4672]: E0217 17:35:01.958104 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:35:11 crc kubenswrapper[4672]: E0217 17:35:11.955366 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:35:15 crc kubenswrapper[4672]: E0217 17:35:15.949816 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:35:20 crc kubenswrapper[4672]: I0217 17:35:20.256838 4672 scope.go:117] "RemoveContainer" containerID="3fdee91464a7bb5170f6d568bf408b96161bbd64e750e8acc5b8a271966e1c02" Feb 17 17:35:24 crc kubenswrapper[4672]: I0217 17:35:24.631218 4672 generic.go:334] "Generic (PLEG): container finished" podID="189b761b-ad0c-41f5-892c-54ece21c8ab8" containerID="4e316c4b786fceb930c362b7a5bbe6e6af2700f151b473d0fb02a6594e3053bb" exitCode=0 Feb 17 17:35:24 crc kubenswrapper[4672]: I0217 17:35:24.631835 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-must-gather-tsjx5/must-gather-s5rvf" event={"ID":"189b761b-ad0c-41f5-892c-54ece21c8ab8","Type":"ContainerDied","Data":"4e316c4b786fceb930c362b7a5bbe6e6af2700f151b473d0fb02a6594e3053bb"} Feb 17 17:35:24 crc kubenswrapper[4672]: I0217 17:35:24.632627 4672 scope.go:117] "RemoveContainer" containerID="4e316c4b786fceb930c362b7a5bbe6e6af2700f151b473d0fb02a6594e3053bb" Feb 17 17:35:24 crc kubenswrapper[4672]: I0217 17:35:24.838589 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tsjx5_must-gather-s5rvf_189b761b-ad0c-41f5-892c-54ece21c8ab8/gather/0.log" Feb 17 17:35:26 crc kubenswrapper[4672]: E0217 17:35:26.948702 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:35:27 crc kubenswrapper[4672]: I0217 17:35:27.565827 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:35:27 crc kubenswrapper[4672]: I0217 17:35:27.565895 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:35:28 crc kubenswrapper[4672]: I0217 17:35:28.818008 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qp86z"] Feb 17 17:35:28 crc kubenswrapper[4672]: E0217 17:35:28.819001 4672 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41d227a6-00c2-4a20-89cd-aab98ad30545" containerName="collect-profiles" Feb 17 17:35:28 crc kubenswrapper[4672]: I0217 17:35:28.819015 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="41d227a6-00c2-4a20-89cd-aab98ad30545" containerName="collect-profiles" Feb 17 17:35:28 crc kubenswrapper[4672]: I0217 17:35:28.819217 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="41d227a6-00c2-4a20-89cd-aab98ad30545" containerName="collect-profiles" Feb 17 17:35:28 crc kubenswrapper[4672]: I0217 17:35:28.821963 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qp86z" Feb 17 17:35:28 crc kubenswrapper[4672]: I0217 17:35:28.834721 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qp86z"] Feb 17 17:35:28 crc kubenswrapper[4672]: I0217 17:35:28.936887 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55hc4\" (UniqueName: \"kubernetes.io/projected/fb84b5af-0ad7-468a-854e-c41369912ded-kube-api-access-55hc4\") pod \"redhat-marketplace-qp86z\" (UID: \"fb84b5af-0ad7-468a-854e-c41369912ded\") " pod="openshift-marketplace/redhat-marketplace-qp86z" Feb 17 17:35:28 crc kubenswrapper[4672]: I0217 17:35:28.936976 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb84b5af-0ad7-468a-854e-c41369912ded-utilities\") pod \"redhat-marketplace-qp86z\" (UID: \"fb84b5af-0ad7-468a-854e-c41369912ded\") " pod="openshift-marketplace/redhat-marketplace-qp86z" Feb 17 17:35:28 crc kubenswrapper[4672]: I0217 17:35:28.937044 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/fb84b5af-0ad7-468a-854e-c41369912ded-catalog-content\") pod \"redhat-marketplace-qp86z\" (UID: \"fb84b5af-0ad7-468a-854e-c41369912ded\") " pod="openshift-marketplace/redhat-marketplace-qp86z" Feb 17 17:35:29 crc kubenswrapper[4672]: I0217 17:35:29.039093 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55hc4\" (UniqueName: \"kubernetes.io/projected/fb84b5af-0ad7-468a-854e-c41369912ded-kube-api-access-55hc4\") pod \"redhat-marketplace-qp86z\" (UID: \"fb84b5af-0ad7-468a-854e-c41369912ded\") " pod="openshift-marketplace/redhat-marketplace-qp86z" Feb 17 17:35:29 crc kubenswrapper[4672]: I0217 17:35:29.039254 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb84b5af-0ad7-468a-854e-c41369912ded-utilities\") pod \"redhat-marketplace-qp86z\" (UID: \"fb84b5af-0ad7-468a-854e-c41369912ded\") " pod="openshift-marketplace/redhat-marketplace-qp86z" Feb 17 17:35:29 crc kubenswrapper[4672]: I0217 17:35:29.039355 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb84b5af-0ad7-468a-854e-c41369912ded-catalog-content\") pod \"redhat-marketplace-qp86z\" (UID: \"fb84b5af-0ad7-468a-854e-c41369912ded\") " pod="openshift-marketplace/redhat-marketplace-qp86z" Feb 17 17:35:29 crc kubenswrapper[4672]: I0217 17:35:29.040230 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb84b5af-0ad7-468a-854e-c41369912ded-utilities\") pod \"redhat-marketplace-qp86z\" (UID: \"fb84b5af-0ad7-468a-854e-c41369912ded\") " pod="openshift-marketplace/redhat-marketplace-qp86z" Feb 17 17:35:29 crc kubenswrapper[4672]: I0217 17:35:29.040373 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/fb84b5af-0ad7-468a-854e-c41369912ded-catalog-content\") pod \"redhat-marketplace-qp86z\" (UID: \"fb84b5af-0ad7-468a-854e-c41369912ded\") " pod="openshift-marketplace/redhat-marketplace-qp86z" Feb 17 17:35:29 crc kubenswrapper[4672]: I0217 17:35:29.063621 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55hc4\" (UniqueName: \"kubernetes.io/projected/fb84b5af-0ad7-468a-854e-c41369912ded-kube-api-access-55hc4\") pod \"redhat-marketplace-qp86z\" (UID: \"fb84b5af-0ad7-468a-854e-c41369912ded\") " pod="openshift-marketplace/redhat-marketplace-qp86z" Feb 17 17:35:29 crc kubenswrapper[4672]: I0217 17:35:29.142899 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qp86z" Feb 17 17:35:29 crc kubenswrapper[4672]: I0217 17:35:29.694769 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qp86z"] Feb 17 17:35:29 crc kubenswrapper[4672]: W0217 17:35:29.714815 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb84b5af_0ad7_468a_854e_c41369912ded.slice/crio-c650575590773bf709b733c815ee2825651ae47657bd79f098ec615b34d4dd50 WatchSource:0}: Error finding container c650575590773bf709b733c815ee2825651ae47657bd79f098ec615b34d4dd50: Status 404 returned error can't find the container with id c650575590773bf709b733c815ee2825651ae47657bd79f098ec615b34d4dd50 Feb 17 17:35:30 crc kubenswrapper[4672]: I0217 17:35:30.701495 4672 generic.go:334] "Generic (PLEG): container finished" podID="fb84b5af-0ad7-468a-854e-c41369912ded" containerID="720f3fd43bea74cd0a9399e93b8f2fdc429329cbdf41ae3f5c3e9070820c523c" exitCode=0 Feb 17 17:35:30 crc kubenswrapper[4672]: I0217 17:35:30.701636 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qp86z" 
event={"ID":"fb84b5af-0ad7-468a-854e-c41369912ded","Type":"ContainerDied","Data":"720f3fd43bea74cd0a9399e93b8f2fdc429329cbdf41ae3f5c3e9070820c523c"} Feb 17 17:35:30 crc kubenswrapper[4672]: I0217 17:35:30.702630 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qp86z" event={"ID":"fb84b5af-0ad7-468a-854e-c41369912ded","Type":"ContainerStarted","Data":"c650575590773bf709b733c815ee2825651ae47657bd79f098ec615b34d4dd50"} Feb 17 17:35:30 crc kubenswrapper[4672]: E0217 17:35:30.946766 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:35:31 crc kubenswrapper[4672]: I0217 17:35:31.714599 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qp86z" event={"ID":"fb84b5af-0ad7-468a-854e-c41369912ded","Type":"ContainerStarted","Data":"7c1be05538d5ec09013f9b60bea18f99989e7a1ce28cddb2425c37a05a3ab8d2"} Feb 17 17:35:32 crc kubenswrapper[4672]: I0217 17:35:32.728419 4672 generic.go:334] "Generic (PLEG): container finished" podID="fb84b5af-0ad7-468a-854e-c41369912ded" containerID="7c1be05538d5ec09013f9b60bea18f99989e7a1ce28cddb2425c37a05a3ab8d2" exitCode=0 Feb 17 17:35:32 crc kubenswrapper[4672]: I0217 17:35:32.728568 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qp86z" event={"ID":"fb84b5af-0ad7-468a-854e-c41369912ded","Type":"ContainerDied","Data":"7c1be05538d5ec09013f9b60bea18f99989e7a1ce28cddb2425c37a05a3ab8d2"} Feb 17 17:35:33 crc kubenswrapper[4672]: I0217 17:35:33.742201 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qp86z" 
event={"ID":"fb84b5af-0ad7-468a-854e-c41369912ded","Type":"ContainerStarted","Data":"01afe755db3be13b74ba05dabfcc0de1f076c8e118a30e9cffed8484ec68488a"} Feb 17 17:35:33 crc kubenswrapper[4672]: I0217 17:35:33.769342 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qp86z" podStartSLOduration=3.178199103 podStartE2EDuration="5.769315675s" podCreationTimestamp="2026-02-17 17:35:28 +0000 UTC" firstStartedPulling="2026-02-17 17:35:30.703095021 +0000 UTC m=+5539.457183753" lastFinishedPulling="2026-02-17 17:35:33.294211593 +0000 UTC m=+5542.048300325" observedRunningTime="2026-02-17 17:35:33.760682076 +0000 UTC m=+5542.514770808" watchObservedRunningTime="2026-02-17 17:35:33.769315675 +0000 UTC m=+5542.523404417" Feb 17 17:35:34 crc kubenswrapper[4672]: I0217 17:35:34.557717 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tsjx5/must-gather-s5rvf"] Feb 17 17:35:34 crc kubenswrapper[4672]: I0217 17:35:34.558308 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-tsjx5/must-gather-s5rvf" podUID="189b761b-ad0c-41f5-892c-54ece21c8ab8" containerName="copy" containerID="cri-o://102f5df918a8124947cf7bc7313608494c7373011af76b928441978e9d5c6cfc" gracePeriod=2 Feb 17 17:35:34 crc kubenswrapper[4672]: I0217 17:35:34.573873 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tsjx5/must-gather-s5rvf"] Feb 17 17:35:34 crc kubenswrapper[4672]: I0217 17:35:34.779001 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tsjx5_must-gather-s5rvf_189b761b-ad0c-41f5-892c-54ece21c8ab8/copy/0.log" Feb 17 17:35:34 crc kubenswrapper[4672]: I0217 17:35:34.783187 4672 generic.go:334] "Generic (PLEG): container finished" podID="189b761b-ad0c-41f5-892c-54ece21c8ab8" containerID="102f5df918a8124947cf7bc7313608494c7373011af76b928441978e9d5c6cfc" exitCode=143 Feb 17 17:35:35 crc 
kubenswrapper[4672]: I0217 17:35:35.209958 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tsjx5_must-gather-s5rvf_189b761b-ad0c-41f5-892c-54ece21c8ab8/copy/0.log" Feb 17 17:35:35 crc kubenswrapper[4672]: I0217 17:35:35.210689 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tsjx5/must-gather-s5rvf" Feb 17 17:35:35 crc kubenswrapper[4672]: I0217 17:35:35.289493 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7n4j\" (UniqueName: \"kubernetes.io/projected/189b761b-ad0c-41f5-892c-54ece21c8ab8-kube-api-access-x7n4j\") pod \"189b761b-ad0c-41f5-892c-54ece21c8ab8\" (UID: \"189b761b-ad0c-41f5-892c-54ece21c8ab8\") " Feb 17 17:35:35 crc kubenswrapper[4672]: I0217 17:35:35.289741 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/189b761b-ad0c-41f5-892c-54ece21c8ab8-must-gather-output\") pod \"189b761b-ad0c-41f5-892c-54ece21c8ab8\" (UID: \"189b761b-ad0c-41f5-892c-54ece21c8ab8\") " Feb 17 17:35:35 crc kubenswrapper[4672]: I0217 17:35:35.295637 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/189b761b-ad0c-41f5-892c-54ece21c8ab8-kube-api-access-x7n4j" (OuterVolumeSpecName: "kube-api-access-x7n4j") pod "189b761b-ad0c-41f5-892c-54ece21c8ab8" (UID: "189b761b-ad0c-41f5-892c-54ece21c8ab8"). InnerVolumeSpecName "kube-api-access-x7n4j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:35:35 crc kubenswrapper[4672]: I0217 17:35:35.392523 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7n4j\" (UniqueName: \"kubernetes.io/projected/189b761b-ad0c-41f5-892c-54ece21c8ab8-kube-api-access-x7n4j\") on node \"crc\" DevicePath \"\"" Feb 17 17:35:35 crc kubenswrapper[4672]: I0217 17:35:35.481373 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/189b761b-ad0c-41f5-892c-54ece21c8ab8-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "189b761b-ad0c-41f5-892c-54ece21c8ab8" (UID: "189b761b-ad0c-41f5-892c-54ece21c8ab8"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:35:35 crc kubenswrapper[4672]: I0217 17:35:35.494441 4672 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/189b761b-ad0c-41f5-892c-54ece21c8ab8-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 17 17:35:35 crc kubenswrapper[4672]: I0217 17:35:35.795769 4672 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tsjx5_must-gather-s5rvf_189b761b-ad0c-41f5-892c-54ece21c8ab8/copy/0.log" Feb 17 17:35:35 crc kubenswrapper[4672]: I0217 17:35:35.796645 4672 scope.go:117] "RemoveContainer" containerID="102f5df918a8124947cf7bc7313608494c7373011af76b928441978e9d5c6cfc" Feb 17 17:35:35 crc kubenswrapper[4672]: I0217 17:35:35.796715 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tsjx5/must-gather-s5rvf" Feb 17 17:35:35 crc kubenswrapper[4672]: I0217 17:35:35.833150 4672 scope.go:117] "RemoveContainer" containerID="4e316c4b786fceb930c362b7a5bbe6e6af2700f151b473d0fb02a6594e3053bb" Feb 17 17:35:35 crc kubenswrapper[4672]: I0217 17:35:35.964826 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="189b761b-ad0c-41f5-892c-54ece21c8ab8" path="/var/lib/kubelet/pods/189b761b-ad0c-41f5-892c-54ece21c8ab8/volumes" Feb 17 17:35:37 crc kubenswrapper[4672]: E0217 17:35:37.946342 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:35:39 crc kubenswrapper[4672]: I0217 17:35:39.144637 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qp86z" Feb 17 17:35:39 crc kubenswrapper[4672]: I0217 17:35:39.144679 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qp86z" Feb 17 17:35:39 crc kubenswrapper[4672]: I0217 17:35:39.196219 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qp86z" Feb 17 17:35:39 crc kubenswrapper[4672]: I0217 17:35:39.876280 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qp86z" Feb 17 17:35:39 crc kubenswrapper[4672]: I0217 17:35:39.925968 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qp86z"] Feb 17 17:35:41 crc kubenswrapper[4672]: I0217 17:35:41.847263 4672 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-qp86z" podUID="fb84b5af-0ad7-468a-854e-c41369912ded" containerName="registry-server" containerID="cri-o://01afe755db3be13b74ba05dabfcc0de1f076c8e118a30e9cffed8484ec68488a" gracePeriod=2 Feb 17 17:35:42 crc kubenswrapper[4672]: I0217 17:35:42.421873 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qp86z" Feb 17 17:35:42 crc kubenswrapper[4672]: I0217 17:35:42.440951 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb84b5af-0ad7-468a-854e-c41369912ded-utilities\") pod \"fb84b5af-0ad7-468a-854e-c41369912ded\" (UID: \"fb84b5af-0ad7-468a-854e-c41369912ded\") " Feb 17 17:35:42 crc kubenswrapper[4672]: I0217 17:35:42.441186 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb84b5af-0ad7-468a-854e-c41369912ded-catalog-content\") pod \"fb84b5af-0ad7-468a-854e-c41369912ded\" (UID: \"fb84b5af-0ad7-468a-854e-c41369912ded\") " Feb 17 17:35:42 crc kubenswrapper[4672]: I0217 17:35:42.441270 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55hc4\" (UniqueName: \"kubernetes.io/projected/fb84b5af-0ad7-468a-854e-c41369912ded-kube-api-access-55hc4\") pod \"fb84b5af-0ad7-468a-854e-c41369912ded\" (UID: \"fb84b5af-0ad7-468a-854e-c41369912ded\") " Feb 17 17:35:42 crc kubenswrapper[4672]: I0217 17:35:42.442248 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb84b5af-0ad7-468a-854e-c41369912ded-utilities" (OuterVolumeSpecName: "utilities") pod "fb84b5af-0ad7-468a-854e-c41369912ded" (UID: "fb84b5af-0ad7-468a-854e-c41369912ded"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:35:42 crc kubenswrapper[4672]: I0217 17:35:42.448713 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb84b5af-0ad7-468a-854e-c41369912ded-kube-api-access-55hc4" (OuterVolumeSpecName: "kube-api-access-55hc4") pod "fb84b5af-0ad7-468a-854e-c41369912ded" (UID: "fb84b5af-0ad7-468a-854e-c41369912ded"). InnerVolumeSpecName "kube-api-access-55hc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:35:42 crc kubenswrapper[4672]: I0217 17:35:42.544153 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb84b5af-0ad7-468a-854e-c41369912ded-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:35:42 crc kubenswrapper[4672]: I0217 17:35:42.544195 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55hc4\" (UniqueName: \"kubernetes.io/projected/fb84b5af-0ad7-468a-854e-c41369912ded-kube-api-access-55hc4\") on node \"crc\" DevicePath \"\"" Feb 17 17:35:42 crc kubenswrapper[4672]: I0217 17:35:42.860244 4672 generic.go:334] "Generic (PLEG): container finished" podID="fb84b5af-0ad7-468a-854e-c41369912ded" containerID="01afe755db3be13b74ba05dabfcc0de1f076c8e118a30e9cffed8484ec68488a" exitCode=0 Feb 17 17:35:42 crc kubenswrapper[4672]: I0217 17:35:42.860303 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qp86z" event={"ID":"fb84b5af-0ad7-468a-854e-c41369912ded","Type":"ContainerDied","Data":"01afe755db3be13b74ba05dabfcc0de1f076c8e118a30e9cffed8484ec68488a"} Feb 17 17:35:42 crc kubenswrapper[4672]: I0217 17:35:42.860364 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qp86z" event={"ID":"fb84b5af-0ad7-468a-854e-c41369912ded","Type":"ContainerDied","Data":"c650575590773bf709b733c815ee2825651ae47657bd79f098ec615b34d4dd50"} Feb 17 17:35:42 crc kubenswrapper[4672]: 
I0217 17:35:42.860361 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qp86z"
Feb 17 17:35:42 crc kubenswrapper[4672]: I0217 17:35:42.860380 4672 scope.go:117] "RemoveContainer" containerID="01afe755db3be13b74ba05dabfcc0de1f076c8e118a30e9cffed8484ec68488a"
Feb 17 17:35:42 crc kubenswrapper[4672]: I0217 17:35:42.891637 4672 scope.go:117] "RemoveContainer" containerID="7c1be05538d5ec09013f9b60bea18f99989e7a1ce28cddb2425c37a05a3ab8d2"
Feb 17 17:35:42 crc kubenswrapper[4672]: I0217 17:35:42.911501 4672 scope.go:117] "RemoveContainer" containerID="720f3fd43bea74cd0a9399e93b8f2fdc429329cbdf41ae3f5c3e9070820c523c"
Feb 17 17:35:42 crc kubenswrapper[4672]: I0217 17:35:42.996731 4672 scope.go:117] "RemoveContainer" containerID="01afe755db3be13b74ba05dabfcc0de1f076c8e118a30e9cffed8484ec68488a"
Feb 17 17:35:42 crc kubenswrapper[4672]: E0217 17:35:42.997665 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01afe755db3be13b74ba05dabfcc0de1f076c8e118a30e9cffed8484ec68488a\": container with ID starting with 01afe755db3be13b74ba05dabfcc0de1f076c8e118a30e9cffed8484ec68488a not found: ID does not exist" containerID="01afe755db3be13b74ba05dabfcc0de1f076c8e118a30e9cffed8484ec68488a"
Feb 17 17:35:42 crc kubenswrapper[4672]: I0217 17:35:42.997775 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01afe755db3be13b74ba05dabfcc0de1f076c8e118a30e9cffed8484ec68488a"} err="failed to get container status \"01afe755db3be13b74ba05dabfcc0de1f076c8e118a30e9cffed8484ec68488a\": rpc error: code = NotFound desc = could not find container \"01afe755db3be13b74ba05dabfcc0de1f076c8e118a30e9cffed8484ec68488a\": container with ID starting with 01afe755db3be13b74ba05dabfcc0de1f076c8e118a30e9cffed8484ec68488a not found: ID does not exist"
Feb 17 17:35:42 crc kubenswrapper[4672]: I0217 17:35:42.997879 4672 scope.go:117] "RemoveContainer" containerID="7c1be05538d5ec09013f9b60bea18f99989e7a1ce28cddb2425c37a05a3ab8d2"
Feb 17 17:35:42 crc kubenswrapper[4672]: E0217 17:35:42.998463 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c1be05538d5ec09013f9b60bea18f99989e7a1ce28cddb2425c37a05a3ab8d2\": container with ID starting with 7c1be05538d5ec09013f9b60bea18f99989e7a1ce28cddb2425c37a05a3ab8d2 not found: ID does not exist" containerID="7c1be05538d5ec09013f9b60bea18f99989e7a1ce28cddb2425c37a05a3ab8d2"
Feb 17 17:35:42 crc kubenswrapper[4672]: I0217 17:35:42.998488 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c1be05538d5ec09013f9b60bea18f99989e7a1ce28cddb2425c37a05a3ab8d2"} err="failed to get container status \"7c1be05538d5ec09013f9b60bea18f99989e7a1ce28cddb2425c37a05a3ab8d2\": rpc error: code = NotFound desc = could not find container \"7c1be05538d5ec09013f9b60bea18f99989e7a1ce28cddb2425c37a05a3ab8d2\": container with ID starting with 7c1be05538d5ec09013f9b60bea18f99989e7a1ce28cddb2425c37a05a3ab8d2 not found: ID does not exist"
Feb 17 17:35:42 crc kubenswrapper[4672]: I0217 17:35:42.998518 4672 scope.go:117] "RemoveContainer" containerID="720f3fd43bea74cd0a9399e93b8f2fdc429329cbdf41ae3f5c3e9070820c523c"
Feb 17 17:35:42 crc kubenswrapper[4672]: E0217 17:35:42.998835 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"720f3fd43bea74cd0a9399e93b8f2fdc429329cbdf41ae3f5c3e9070820c523c\": container with ID starting with 720f3fd43bea74cd0a9399e93b8f2fdc429329cbdf41ae3f5c3e9070820c523c not found: ID does not exist" containerID="720f3fd43bea74cd0a9399e93b8f2fdc429329cbdf41ae3f5c3e9070820c523c"
Feb 17 17:35:42 crc kubenswrapper[4672]: I0217 17:35:42.998862 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"720f3fd43bea74cd0a9399e93b8f2fdc429329cbdf41ae3f5c3e9070820c523c"} err="failed to get container status \"720f3fd43bea74cd0a9399e93b8f2fdc429329cbdf41ae3f5c3e9070820c523c\": rpc error: code = NotFound desc = could not find container \"720f3fd43bea74cd0a9399e93b8f2fdc429329cbdf41ae3f5c3e9070820c523c\": container with ID starting with 720f3fd43bea74cd0a9399e93b8f2fdc429329cbdf41ae3f5c3e9070820c523c not found: ID does not exist"
Feb 17 17:35:43 crc kubenswrapper[4672]: I0217 17:35:43.434781 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb84b5af-0ad7-468a-854e-c41369912ded-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fb84b5af-0ad7-468a-854e-c41369912ded" (UID: "fb84b5af-0ad7-468a-854e-c41369912ded"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 17:35:43 crc kubenswrapper[4672]: I0217 17:35:43.477590 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb84b5af-0ad7-468a-854e-c41369912ded-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 17:35:43 crc kubenswrapper[4672]: I0217 17:35:43.576537 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qp86z"]
Feb 17 17:35:43 crc kubenswrapper[4672]: I0217 17:35:43.586471 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qp86z"]
Feb 17 17:35:43 crc kubenswrapper[4672]: I0217 17:35:43.956923 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb84b5af-0ad7-468a-854e-c41369912ded" path="/var/lib/kubelet/pods/fb84b5af-0ad7-468a-854e-c41369912ded/volumes"
Feb 17 17:35:45 crc kubenswrapper[4672]: E0217 17:35:45.947540 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:35:48 crc kubenswrapper[4672]: E0217 17:35:48.951571 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:35:57 crc kubenswrapper[4672]: I0217 17:35:57.566681 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 17:35:57 crc kubenswrapper[4672]: I0217 17:35:57.567703 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 17:35:57 crc kubenswrapper[4672]: I0217 17:35:57.567791 4672 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs"
Feb 17 17:35:57 crc kubenswrapper[4672]: I0217 17:35:57.568716 4672 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8d597cc8ff492e2c5a82f2b6824b54ff748acbefe4d8679fd3078b7cfdc4aea5"} pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 17 17:35:57 crc kubenswrapper[4672]: I0217 17:35:57.568809 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" containerID="cri-o://8d597cc8ff492e2c5a82f2b6824b54ff748acbefe4d8679fd3078b7cfdc4aea5" gracePeriod=600
Feb 17 17:35:58 crc kubenswrapper[4672]: I0217 17:35:58.014843 4672 generic.go:334] "Generic (PLEG): container finished" podID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerID="8d597cc8ff492e2c5a82f2b6824b54ff748acbefe4d8679fd3078b7cfdc4aea5" exitCode=0
Feb 17 17:35:58 crc kubenswrapper[4672]: I0217 17:35:58.014881 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerDied","Data":"8d597cc8ff492e2c5a82f2b6824b54ff748acbefe4d8679fd3078b7cfdc4aea5"}
Feb 17 17:35:58 crc kubenswrapper[4672]: I0217 17:35:58.015209 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerStarted","Data":"84b26503293fd480153540a364dd1bb213df8906602f27c523499fd5a410b40b"}
Feb 17 17:35:58 crc kubenswrapper[4672]: I0217 17:35:58.015234 4672 scope.go:117] "RemoveContainer" containerID="56e6007a8201972c49b1432e5d22e2ef9faf1c48bcd6de061f8d78425bba9eaa"
Feb 17 17:35:58 crc kubenswrapper[4672]: E0217 17:35:58.946446 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:36:01 crc kubenswrapper[4672]: E0217 17:36:01.955644 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:36:13 crc kubenswrapper[4672]: E0217 17:36:13.947358 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:36:16 crc kubenswrapper[4672]: E0217 17:36:16.947371 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:36:20 crc kubenswrapper[4672]: I0217 17:36:20.327750 4672 scope.go:117] "RemoveContainer" containerID="1fbc0b24e46ae96f074d8cd41eaac33fc63850a9f1f42822dd7f79986c7c7a4d"
Feb 17 17:36:25 crc kubenswrapper[4672]: E0217 17:36:25.947554 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:36:31 crc kubenswrapper[4672]: E0217 17:36:31.954454 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:36:37 crc kubenswrapper[4672]: E0217 17:36:37.947846 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:36:45 crc kubenswrapper[4672]: E0217 17:36:45.948485 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:36:49 crc kubenswrapper[4672]: E0217 17:36:49.950048 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:36:56 crc kubenswrapper[4672]: E0217 17:36:56.947434 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:37:02 crc kubenswrapper[4672]: E0217 17:37:02.947494 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:37:07 crc kubenswrapper[4672]: E0217 17:37:07.947175 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:37:16 crc kubenswrapper[4672]: E0217 17:37:16.952860 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:37:19 crc kubenswrapper[4672]: E0217 17:37:19.949613 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:37:27 crc kubenswrapper[4672]: E0217 17:37:27.946921 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:37:31 crc kubenswrapper[4672]: E0217 17:37:31.953131 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:37:37 crc kubenswrapper[4672]: I0217 17:37:37.275310 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xdnxb"]
Feb 17 17:37:37 crc kubenswrapper[4672]: E0217 17:37:37.276239 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb84b5af-0ad7-468a-854e-c41369912ded" containerName="extract-utilities"
Feb 17 17:37:37 crc kubenswrapper[4672]: I0217 17:37:37.276253 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb84b5af-0ad7-468a-854e-c41369912ded" containerName="extract-utilities"
Feb 17 17:37:37 crc kubenswrapper[4672]: E0217 17:37:37.276263 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="189b761b-ad0c-41f5-892c-54ece21c8ab8" containerName="gather"
Feb 17 17:37:37 crc kubenswrapper[4672]: I0217 17:37:37.276269 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="189b761b-ad0c-41f5-892c-54ece21c8ab8" containerName="gather"
Feb 17 17:37:37 crc kubenswrapper[4672]: E0217 17:37:37.276277 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="189b761b-ad0c-41f5-892c-54ece21c8ab8" containerName="copy"
Feb 17 17:37:37 crc kubenswrapper[4672]: I0217 17:37:37.276282 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="189b761b-ad0c-41f5-892c-54ece21c8ab8" containerName="copy"
Feb 17 17:37:37 crc kubenswrapper[4672]: E0217 17:37:37.276300 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb84b5af-0ad7-468a-854e-c41369912ded" containerName="registry-server"
Feb 17 17:37:37 crc kubenswrapper[4672]: I0217 17:37:37.276306 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb84b5af-0ad7-468a-854e-c41369912ded" containerName="registry-server"
Feb 17 17:37:37 crc kubenswrapper[4672]: E0217 17:37:37.276326 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb84b5af-0ad7-468a-854e-c41369912ded" containerName="extract-content"
Feb 17 17:37:37 crc kubenswrapper[4672]: I0217 17:37:37.276332 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb84b5af-0ad7-468a-854e-c41369912ded" containerName="extract-content"
Feb 17 17:37:37 crc kubenswrapper[4672]: I0217 17:37:37.276547 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="189b761b-ad0c-41f5-892c-54ece21c8ab8" containerName="gather"
Feb 17 17:37:37 crc kubenswrapper[4672]: I0217 17:37:37.276575 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb84b5af-0ad7-468a-854e-c41369912ded" containerName="registry-server"
Feb 17 17:37:37 crc kubenswrapper[4672]: I0217 17:37:37.276587 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="189b761b-ad0c-41f5-892c-54ece21c8ab8" containerName="copy"
Feb 17 17:37:37 crc kubenswrapper[4672]: I0217 17:37:37.278162 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xdnxb"
Feb 17 17:37:37 crc kubenswrapper[4672]: I0217 17:37:37.288791 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xdnxb"]
Feb 17 17:37:37 crc kubenswrapper[4672]: I0217 17:37:37.384865 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/175d0cc7-3276-488c-a3d2-bea1dda94caf-utilities\") pod \"certified-operators-xdnxb\" (UID: \"175d0cc7-3276-488c-a3d2-bea1dda94caf\") " pod="openshift-marketplace/certified-operators-xdnxb"
Feb 17 17:37:37 crc kubenswrapper[4672]: I0217 17:37:37.384966 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/175d0cc7-3276-488c-a3d2-bea1dda94caf-catalog-content\") pod \"certified-operators-xdnxb\" (UID: \"175d0cc7-3276-488c-a3d2-bea1dda94caf\") " pod="openshift-marketplace/certified-operators-xdnxb"
Feb 17 17:37:37 crc kubenswrapper[4672]: I0217 17:37:37.384997 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkjjw\" (UniqueName: \"kubernetes.io/projected/175d0cc7-3276-488c-a3d2-bea1dda94caf-kube-api-access-bkjjw\") pod \"certified-operators-xdnxb\" (UID: \"175d0cc7-3276-488c-a3d2-bea1dda94caf\") " pod="openshift-marketplace/certified-operators-xdnxb"
Feb 17 17:37:37 crc kubenswrapper[4672]: I0217 17:37:37.487073 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/175d0cc7-3276-488c-a3d2-bea1dda94caf-utilities\") pod \"certified-operators-xdnxb\" (UID: \"175d0cc7-3276-488c-a3d2-bea1dda94caf\") " pod="openshift-marketplace/certified-operators-xdnxb"
Feb 17 17:37:37 crc kubenswrapper[4672]: I0217 17:37:37.487160 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/175d0cc7-3276-488c-a3d2-bea1dda94caf-catalog-content\") pod \"certified-operators-xdnxb\" (UID: \"175d0cc7-3276-488c-a3d2-bea1dda94caf\") " pod="openshift-marketplace/certified-operators-xdnxb"
Feb 17 17:37:37 crc kubenswrapper[4672]: I0217 17:37:37.487190 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkjjw\" (UniqueName: \"kubernetes.io/projected/175d0cc7-3276-488c-a3d2-bea1dda94caf-kube-api-access-bkjjw\") pod \"certified-operators-xdnxb\" (UID: \"175d0cc7-3276-488c-a3d2-bea1dda94caf\") " pod="openshift-marketplace/certified-operators-xdnxb"
Feb 17 17:37:37 crc kubenswrapper[4672]: I0217 17:37:37.487638 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/175d0cc7-3276-488c-a3d2-bea1dda94caf-utilities\") pod \"certified-operators-xdnxb\" (UID: \"175d0cc7-3276-488c-a3d2-bea1dda94caf\") " pod="openshift-marketplace/certified-operators-xdnxb"
Feb 17 17:37:37 crc kubenswrapper[4672]: I0217 17:37:37.487914 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/175d0cc7-3276-488c-a3d2-bea1dda94caf-catalog-content\") pod \"certified-operators-xdnxb\" (UID: \"175d0cc7-3276-488c-a3d2-bea1dda94caf\") " pod="openshift-marketplace/certified-operators-xdnxb"
Feb 17 17:37:37 crc kubenswrapper[4672]: I0217 17:37:37.509069 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkjjw\" (UniqueName: \"kubernetes.io/projected/175d0cc7-3276-488c-a3d2-bea1dda94caf-kube-api-access-bkjjw\") pod \"certified-operators-xdnxb\" (UID: \"175d0cc7-3276-488c-a3d2-bea1dda94caf\") " pod="openshift-marketplace/certified-operators-xdnxb"
Feb 17 17:37:37 crc kubenswrapper[4672]: I0217 17:37:37.617147 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xdnxb"
Feb 17 17:37:38 crc kubenswrapper[4672]: I0217 17:37:38.104953 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vxfhs"]
Feb 17 17:37:38 crc kubenswrapper[4672]: I0217 17:37:38.113075 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vxfhs"
Feb 17 17:37:38 crc kubenswrapper[4672]: I0217 17:37:38.132408 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vxfhs"]
Feb 17 17:37:38 crc kubenswrapper[4672]: I0217 17:37:38.180122 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xdnxb"]
Feb 17 17:37:38 crc kubenswrapper[4672]: I0217 17:37:38.235998 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxp2b\" (UniqueName: \"kubernetes.io/projected/90c403d1-b530-4715-b896-16938860ada2-kube-api-access-xxp2b\") pod \"community-operators-vxfhs\" (UID: \"90c403d1-b530-4715-b896-16938860ada2\") " pod="openshift-marketplace/community-operators-vxfhs"
Feb 17 17:37:38 crc kubenswrapper[4672]: I0217 17:37:38.236429 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90c403d1-b530-4715-b896-16938860ada2-catalog-content\") pod \"community-operators-vxfhs\" (UID: \"90c403d1-b530-4715-b896-16938860ada2\") " pod="openshift-marketplace/community-operators-vxfhs"
Feb 17 17:37:38 crc kubenswrapper[4672]: I0217 17:37:38.236640 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90c403d1-b530-4715-b896-16938860ada2-utilities\") pod \"community-operators-vxfhs\" (UID: \"90c403d1-b530-4715-b896-16938860ada2\") " pod="openshift-marketplace/community-operators-vxfhs"
Feb 17 17:37:38 crc kubenswrapper[4672]: I0217 17:37:38.337708 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90c403d1-b530-4715-b896-16938860ada2-utilities\") pod \"community-operators-vxfhs\" (UID: \"90c403d1-b530-4715-b896-16938860ada2\") " pod="openshift-marketplace/community-operators-vxfhs"
Feb 17 17:37:38 crc kubenswrapper[4672]: I0217 17:37:38.337856 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxp2b\" (UniqueName: \"kubernetes.io/projected/90c403d1-b530-4715-b896-16938860ada2-kube-api-access-xxp2b\") pod \"community-operators-vxfhs\" (UID: \"90c403d1-b530-4715-b896-16938860ada2\") " pod="openshift-marketplace/community-operators-vxfhs"
Feb 17 17:37:38 crc kubenswrapper[4672]: I0217 17:37:38.337908 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90c403d1-b530-4715-b896-16938860ada2-catalog-content\") pod \"community-operators-vxfhs\" (UID: \"90c403d1-b530-4715-b896-16938860ada2\") " pod="openshift-marketplace/community-operators-vxfhs"
Feb 17 17:37:38 crc kubenswrapper[4672]: I0217 17:37:38.338396 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90c403d1-b530-4715-b896-16938860ada2-utilities\") pod \"community-operators-vxfhs\" (UID: \"90c403d1-b530-4715-b896-16938860ada2\") " pod="openshift-marketplace/community-operators-vxfhs"
Feb 17 17:37:38 crc kubenswrapper[4672]: I0217 17:37:38.338405 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90c403d1-b530-4715-b896-16938860ada2-catalog-content\") pod \"community-operators-vxfhs\" (UID: \"90c403d1-b530-4715-b896-16938860ada2\") " pod="openshift-marketplace/community-operators-vxfhs"
Feb 17 17:37:38 crc kubenswrapper[4672]: I0217 17:37:38.368250 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxp2b\" (UniqueName: \"kubernetes.io/projected/90c403d1-b530-4715-b896-16938860ada2-kube-api-access-xxp2b\") pod \"community-operators-vxfhs\" (UID: \"90c403d1-b530-4715-b896-16938860ada2\") " pod="openshift-marketplace/community-operators-vxfhs"
Feb 17 17:37:38 crc kubenswrapper[4672]: I0217 17:37:38.450343 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vxfhs"
Feb 17 17:37:38 crc kubenswrapper[4672]: E0217 17:37:38.946242 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:37:38 crc kubenswrapper[4672]: I0217 17:37:38.961826 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vxfhs"]
Feb 17 17:37:38 crc kubenswrapper[4672]: W0217 17:37:38.965219 4672 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90c403d1_b530_4715_b896_16938860ada2.slice/crio-7e7fcd45b32e56de9c883661aa983e4a2f3b9fccda250f08164d1a5eea249ead WatchSource:0}: Error finding container 7e7fcd45b32e56de9c883661aa983e4a2f3b9fccda250f08164d1a5eea249ead: Status 404 returned error can't find the container with id 7e7fcd45b32e56de9c883661aa983e4a2f3b9fccda250f08164d1a5eea249ead
Feb 17 17:37:39 crc kubenswrapper[4672]: I0217 17:37:39.039751 4672 generic.go:334] "Generic (PLEG): container finished" podID="175d0cc7-3276-488c-a3d2-bea1dda94caf" containerID="2cb5fd9bd4c01f5cb541a26c08137c0ba3dd8b4bdebc1e2a75865ac8ec563627" exitCode=0
Feb 17 17:37:39 crc kubenswrapper[4672]: I0217 17:37:39.039798 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xdnxb" event={"ID":"175d0cc7-3276-488c-a3d2-bea1dda94caf","Type":"ContainerDied","Data":"2cb5fd9bd4c01f5cb541a26c08137c0ba3dd8b4bdebc1e2a75865ac8ec563627"}
Feb 17 17:37:39 crc kubenswrapper[4672]: I0217 17:37:39.039841 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xdnxb" event={"ID":"175d0cc7-3276-488c-a3d2-bea1dda94caf","Type":"ContainerStarted","Data":"f88e4e0f0c14c6151201a4f2d1dc5038f54f62841f7b15c669586b29b255530d"}
Feb 17 17:37:39 crc kubenswrapper[4672]: I0217 17:37:39.041308 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxfhs" event={"ID":"90c403d1-b530-4715-b896-16938860ada2","Type":"ContainerStarted","Data":"7e7fcd45b32e56de9c883661aa983e4a2f3b9fccda250f08164d1a5eea249ead"}
Feb 17 17:37:40 crc kubenswrapper[4672]: I0217 17:37:40.052248 4672 generic.go:334] "Generic (PLEG): container finished" podID="90c403d1-b530-4715-b896-16938860ada2" containerID="b9e559f7154f8b96a81c2a6d57237353b88030f75c7aa4a335560c8ffb2b9c67" exitCode=0
Feb 17 17:37:40 crc kubenswrapper[4672]: I0217 17:37:40.052294 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxfhs" event={"ID":"90c403d1-b530-4715-b896-16938860ada2","Type":"ContainerDied","Data":"b9e559f7154f8b96a81c2a6d57237353b88030f75c7aa4a335560c8ffb2b9c67"}
Feb 17 17:37:41 crc kubenswrapper[4672]: I0217 17:37:41.062243 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xdnxb" event={"ID":"175d0cc7-3276-488c-a3d2-bea1dda94caf","Type":"ContainerStarted","Data":"6bf975261fa97abbe37d9040ca5959dece1a18b1ef8e9097fcb113eea1a30f4d"}
Feb 17 17:37:42 crc kubenswrapper[4672]: I0217 17:37:42.089740 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxfhs" event={"ID":"90c403d1-b530-4715-b896-16938860ada2","Type":"ContainerStarted","Data":"e39214849b266fd75535473b1b94086d96e5ab0148af5d8a85eabdf948e88aef"}
Feb 17 17:37:45 crc kubenswrapper[4672]: I0217 17:37:45.118592 4672 generic.go:334] "Generic (PLEG): container finished" podID="175d0cc7-3276-488c-a3d2-bea1dda94caf" containerID="6bf975261fa97abbe37d9040ca5959dece1a18b1ef8e9097fcb113eea1a30f4d" exitCode=0
Feb 17 17:37:45 crc kubenswrapper[4672]: I0217 17:37:45.118638 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xdnxb" event={"ID":"175d0cc7-3276-488c-a3d2-bea1dda94caf","Type":"ContainerDied","Data":"6bf975261fa97abbe37d9040ca5959dece1a18b1ef8e9097fcb113eea1a30f4d"}
Feb 17 17:37:45 crc kubenswrapper[4672]: E0217 17:37:45.946309 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:37:46 crc kubenswrapper[4672]: I0217 17:37:46.131356 4672 generic.go:334] "Generic (PLEG): container finished" podID="90c403d1-b530-4715-b896-16938860ada2" containerID="e39214849b266fd75535473b1b94086d96e5ab0148af5d8a85eabdf948e88aef" exitCode=0
Feb 17 17:37:46 crc kubenswrapper[4672]: I0217 17:37:46.131412 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxfhs" event={"ID":"90c403d1-b530-4715-b896-16938860ada2","Type":"ContainerDied","Data":"e39214849b266fd75535473b1b94086d96e5ab0148af5d8a85eabdf948e88aef"}
Feb 17 17:37:46 crc kubenswrapper[4672]: I0217 17:37:46.133744 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xdnxb" event={"ID":"175d0cc7-3276-488c-a3d2-bea1dda94caf","Type":"ContainerStarted","Data":"7b84a43170c693f9894cadc6fa86fbed0438d3b97c5ce181499fd163d874a33e"}
Feb 17 17:37:46 crc kubenswrapper[4672]: I0217 17:37:46.174450 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xdnxb" podStartSLOduration=2.603234005 podStartE2EDuration="9.174434721s" podCreationTimestamp="2026-02-17 17:37:37 +0000 UTC" firstStartedPulling="2026-02-17 17:37:39.042076899 +0000 UTC m=+5667.796165631" lastFinishedPulling="2026-02-17 17:37:45.613277615 +0000 UTC m=+5674.367366347" observedRunningTime="2026-02-17 17:37:46.170886366 +0000 UTC m=+5674.924975128" watchObservedRunningTime="2026-02-17 17:37:46.174434721 +0000 UTC m=+5674.928523453"
Feb 17 17:37:47 crc kubenswrapper[4672]: I0217 17:37:47.146881 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxfhs" event={"ID":"90c403d1-b530-4715-b896-16938860ada2","Type":"ContainerStarted","Data":"7f8839c944cb79c2d4d78337f99ce06b0ff2f2b0767ba1710b9b307392cfa99f"}
Feb 17 17:37:47 crc kubenswrapper[4672]: I0217 17:37:47.171958 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vxfhs" podStartSLOduration=2.786244778 podStartE2EDuration="9.171942504s" podCreationTimestamp="2026-02-17 17:37:38 +0000 UTC" firstStartedPulling="2026-02-17 17:37:40.16628165 +0000 UTC m=+5668.920370382" lastFinishedPulling="2026-02-17 17:37:46.551979376 +0000 UTC m=+5675.306068108" observedRunningTime="2026-02-17 17:37:47.168321628 +0000 UTC m=+5675.922410400" watchObservedRunningTime="2026-02-17 17:37:47.171942504 +0000 UTC m=+5675.926031236"
Feb 17 17:37:47 crc kubenswrapper[4672]: I0217 17:37:47.618166 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xdnxb"
Feb 17 17:37:47 crc kubenswrapper[4672]: I0217 17:37:47.618460 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xdnxb"
Feb 17 17:37:47 crc kubenswrapper[4672]: I0217 17:37:47.669783 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xdnxb"
Feb 17 17:37:48 crc kubenswrapper[4672]: I0217 17:37:48.450489 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vxfhs"
Feb 17 17:37:48 crc kubenswrapper[4672]: I0217 17:37:48.450869 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vxfhs"
Feb 17 17:37:49 crc kubenswrapper[4672]: I0217 17:37:49.496054 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-vxfhs" podUID="90c403d1-b530-4715-b896-16938860ada2" containerName="registry-server" probeResult="failure" output=<
Feb 17 17:37:49 crc kubenswrapper[4672]: timeout: failed to connect service ":50051" within 1s
Feb 17 17:37:49 crc kubenswrapper[4672]: >
Feb 17 17:37:51 crc kubenswrapper[4672]: E0217 17:37:51.954015 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:37:57 crc kubenswrapper[4672]: I0217 17:37:57.566671 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 17:37:57 crc kubenswrapper[4672]: I0217 17:37:57.567422 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 17:37:57 crc kubenswrapper[4672]: I0217 17:37:57.680037 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xdnxb"
Feb 17 17:37:57 crc kubenswrapper[4672]: I0217 17:37:57.742319 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xdnxb"]
Feb 17 17:37:58 crc kubenswrapper[4672]: I0217 17:37:58.259343 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xdnxb" podUID="175d0cc7-3276-488c-a3d2-bea1dda94caf" containerName="registry-server" containerID="cri-o://7b84a43170c693f9894cadc6fa86fbed0438d3b97c5ce181499fd163d874a33e" gracePeriod=2
Feb 17 17:37:58 crc kubenswrapper[4672]: I0217 17:37:58.513316 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vxfhs"
Feb 17 17:37:58 crc kubenswrapper[4672]: I0217 17:37:58.578679 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vxfhs"
Feb 17 17:37:58 crc kubenswrapper[4672]: I0217 17:37:58.840631 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xdnxb"
Feb 17 17:37:58 crc kubenswrapper[4672]: I0217 17:37:58.995540 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkjjw\" (UniqueName: \"kubernetes.io/projected/175d0cc7-3276-488c-a3d2-bea1dda94caf-kube-api-access-bkjjw\") pod \"175d0cc7-3276-488c-a3d2-bea1dda94caf\" (UID: \"175d0cc7-3276-488c-a3d2-bea1dda94caf\") "
Feb 17 17:37:58 crc kubenswrapper[4672]: I0217 17:37:58.995624 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/175d0cc7-3276-488c-a3d2-bea1dda94caf-utilities\") pod \"175d0cc7-3276-488c-a3d2-bea1dda94caf\" (UID: \"175d0cc7-3276-488c-a3d2-bea1dda94caf\") "
Feb 17 17:37:58 crc kubenswrapper[4672]: I0217 17:37:58.995868 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/175d0cc7-3276-488c-a3d2-bea1dda94caf-catalog-content\") pod \"175d0cc7-3276-488c-a3d2-bea1dda94caf\" (UID: \"175d0cc7-3276-488c-a3d2-bea1dda94caf\") "
Feb 17 17:37:58 crc kubenswrapper[4672]: I0217 17:37:58.997301 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/175d0cc7-3276-488c-a3d2-bea1dda94caf-utilities" (OuterVolumeSpecName: "utilities") pod "175d0cc7-3276-488c-a3d2-bea1dda94caf" (UID: "175d0cc7-3276-488c-a3d2-bea1dda94caf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 17:37:59 crc kubenswrapper[4672]: I0217 17:37:59.018172 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/175d0cc7-3276-488c-a3d2-bea1dda94caf-kube-api-access-bkjjw" (OuterVolumeSpecName: "kube-api-access-bkjjw") pod "175d0cc7-3276-488c-a3d2-bea1dda94caf" (UID: "175d0cc7-3276-488c-a3d2-bea1dda94caf"). InnerVolumeSpecName "kube-api-access-bkjjw".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:37:59 crc kubenswrapper[4672]: I0217 17:37:59.077483 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/175d0cc7-3276-488c-a3d2-bea1dda94caf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "175d0cc7-3276-488c-a3d2-bea1dda94caf" (UID: "175d0cc7-3276-488c-a3d2-bea1dda94caf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:37:59 crc kubenswrapper[4672]: I0217 17:37:59.099265 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkjjw\" (UniqueName: \"kubernetes.io/projected/175d0cc7-3276-488c-a3d2-bea1dda94caf-kube-api-access-bkjjw\") on node \"crc\" DevicePath \"\"" Feb 17 17:37:59 crc kubenswrapper[4672]: I0217 17:37:59.099298 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/175d0cc7-3276-488c-a3d2-bea1dda94caf-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:37:59 crc kubenswrapper[4672]: I0217 17:37:59.099307 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/175d0cc7-3276-488c-a3d2-bea1dda94caf-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:37:59 crc kubenswrapper[4672]: I0217 17:37:59.269658 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xdnxb" Feb 17 17:37:59 crc kubenswrapper[4672]: I0217 17:37:59.269704 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xdnxb" event={"ID":"175d0cc7-3276-488c-a3d2-bea1dda94caf","Type":"ContainerDied","Data":"7b84a43170c693f9894cadc6fa86fbed0438d3b97c5ce181499fd163d874a33e"} Feb 17 17:37:59 crc kubenswrapper[4672]: I0217 17:37:59.269744 4672 scope.go:117] "RemoveContainer" containerID="7b84a43170c693f9894cadc6fa86fbed0438d3b97c5ce181499fd163d874a33e" Feb 17 17:37:59 crc kubenswrapper[4672]: I0217 17:37:59.269583 4672 generic.go:334] "Generic (PLEG): container finished" podID="175d0cc7-3276-488c-a3d2-bea1dda94caf" containerID="7b84a43170c693f9894cadc6fa86fbed0438d3b97c5ce181499fd163d874a33e" exitCode=0 Feb 17 17:37:59 crc kubenswrapper[4672]: I0217 17:37:59.269916 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xdnxb" event={"ID":"175d0cc7-3276-488c-a3d2-bea1dda94caf","Type":"ContainerDied","Data":"f88e4e0f0c14c6151201a4f2d1dc5038f54f62841f7b15c669586b29b255530d"} Feb 17 17:37:59 crc kubenswrapper[4672]: I0217 17:37:59.303354 4672 scope.go:117] "RemoveContainer" containerID="6bf975261fa97abbe37d9040ca5959dece1a18b1ef8e9097fcb113eea1a30f4d" Feb 17 17:37:59 crc kubenswrapper[4672]: I0217 17:37:59.309682 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xdnxb"] Feb 17 17:37:59 crc kubenswrapper[4672]: I0217 17:37:59.320996 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xdnxb"] Feb 17 17:37:59 crc kubenswrapper[4672]: I0217 17:37:59.334961 4672 scope.go:117] "RemoveContainer" containerID="2cb5fd9bd4c01f5cb541a26c08137c0ba3dd8b4bdebc1e2a75865ac8ec563627" Feb 17 17:37:59 crc kubenswrapper[4672]: I0217 17:37:59.396149 4672 scope.go:117] "RemoveContainer" 
containerID="7b84a43170c693f9894cadc6fa86fbed0438d3b97c5ce181499fd163d874a33e" Feb 17 17:37:59 crc kubenswrapper[4672]: E0217 17:37:59.396553 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b84a43170c693f9894cadc6fa86fbed0438d3b97c5ce181499fd163d874a33e\": container with ID starting with 7b84a43170c693f9894cadc6fa86fbed0438d3b97c5ce181499fd163d874a33e not found: ID does not exist" containerID="7b84a43170c693f9894cadc6fa86fbed0438d3b97c5ce181499fd163d874a33e" Feb 17 17:37:59 crc kubenswrapper[4672]: I0217 17:37:59.396585 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b84a43170c693f9894cadc6fa86fbed0438d3b97c5ce181499fd163d874a33e"} err="failed to get container status \"7b84a43170c693f9894cadc6fa86fbed0438d3b97c5ce181499fd163d874a33e\": rpc error: code = NotFound desc = could not find container \"7b84a43170c693f9894cadc6fa86fbed0438d3b97c5ce181499fd163d874a33e\": container with ID starting with 7b84a43170c693f9894cadc6fa86fbed0438d3b97c5ce181499fd163d874a33e not found: ID does not exist" Feb 17 17:37:59 crc kubenswrapper[4672]: I0217 17:37:59.396610 4672 scope.go:117] "RemoveContainer" containerID="6bf975261fa97abbe37d9040ca5959dece1a18b1ef8e9097fcb113eea1a30f4d" Feb 17 17:37:59 crc kubenswrapper[4672]: E0217 17:37:59.396859 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bf975261fa97abbe37d9040ca5959dece1a18b1ef8e9097fcb113eea1a30f4d\": container with ID starting with 6bf975261fa97abbe37d9040ca5959dece1a18b1ef8e9097fcb113eea1a30f4d not found: ID does not exist" containerID="6bf975261fa97abbe37d9040ca5959dece1a18b1ef8e9097fcb113eea1a30f4d" Feb 17 17:37:59 crc kubenswrapper[4672]: I0217 17:37:59.396888 4672 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6bf975261fa97abbe37d9040ca5959dece1a18b1ef8e9097fcb113eea1a30f4d"} err="failed to get container status \"6bf975261fa97abbe37d9040ca5959dece1a18b1ef8e9097fcb113eea1a30f4d\": rpc error: code = NotFound desc = could not find container \"6bf975261fa97abbe37d9040ca5959dece1a18b1ef8e9097fcb113eea1a30f4d\": container with ID starting with 6bf975261fa97abbe37d9040ca5959dece1a18b1ef8e9097fcb113eea1a30f4d not found: ID does not exist" Feb 17 17:37:59 crc kubenswrapper[4672]: I0217 17:37:59.396906 4672 scope.go:117] "RemoveContainer" containerID="2cb5fd9bd4c01f5cb541a26c08137c0ba3dd8b4bdebc1e2a75865ac8ec563627" Feb 17 17:37:59 crc kubenswrapper[4672]: E0217 17:37:59.397230 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cb5fd9bd4c01f5cb541a26c08137c0ba3dd8b4bdebc1e2a75865ac8ec563627\": container with ID starting with 2cb5fd9bd4c01f5cb541a26c08137c0ba3dd8b4bdebc1e2a75865ac8ec563627 not found: ID does not exist" containerID="2cb5fd9bd4c01f5cb541a26c08137c0ba3dd8b4bdebc1e2a75865ac8ec563627" Feb 17 17:37:59 crc kubenswrapper[4672]: I0217 17:37:59.397260 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cb5fd9bd4c01f5cb541a26c08137c0ba3dd8b4bdebc1e2a75865ac8ec563627"} err="failed to get container status \"2cb5fd9bd4c01f5cb541a26c08137c0ba3dd8b4bdebc1e2a75865ac8ec563627\": rpc error: code = NotFound desc = could not find container \"2cb5fd9bd4c01f5cb541a26c08137c0ba3dd8b4bdebc1e2a75865ac8ec563627\": container with ID starting with 2cb5fd9bd4c01f5cb541a26c08137c0ba3dd8b4bdebc1e2a75865ac8ec563627 not found: ID does not exist" Feb 17 17:37:59 crc kubenswrapper[4672]: I0217 17:37:59.718381 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vxfhs"] Feb 17 17:37:59 crc kubenswrapper[4672]: E0217 17:37:59.946225 4672 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:37:59 crc kubenswrapper[4672]: I0217 17:37:59.958958 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="175d0cc7-3276-488c-a3d2-bea1dda94caf" path="/var/lib/kubelet/pods/175d0cc7-3276-488c-a3d2-bea1dda94caf/volumes" Feb 17 17:38:00 crc kubenswrapper[4672]: I0217 17:38:00.282463 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vxfhs" podUID="90c403d1-b530-4715-b896-16938860ada2" containerName="registry-server" containerID="cri-o://7f8839c944cb79c2d4d78337f99ce06b0ff2f2b0767ba1710b9b307392cfa99f" gracePeriod=2 Feb 17 17:38:00 crc kubenswrapper[4672]: I0217 17:38:00.793373 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vxfhs" Feb 17 17:38:00 crc kubenswrapper[4672]: I0217 17:38:00.936817 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxp2b\" (UniqueName: \"kubernetes.io/projected/90c403d1-b530-4715-b896-16938860ada2-kube-api-access-xxp2b\") pod \"90c403d1-b530-4715-b896-16938860ada2\" (UID: \"90c403d1-b530-4715-b896-16938860ada2\") " Feb 17 17:38:00 crc kubenswrapper[4672]: I0217 17:38:00.936869 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90c403d1-b530-4715-b896-16938860ada2-utilities\") pod \"90c403d1-b530-4715-b896-16938860ada2\" (UID: \"90c403d1-b530-4715-b896-16938860ada2\") " Feb 17 17:38:00 crc kubenswrapper[4672]: I0217 17:38:00.936960 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90c403d1-b530-4715-b896-16938860ada2-catalog-content\") pod \"90c403d1-b530-4715-b896-16938860ada2\" (UID: \"90c403d1-b530-4715-b896-16938860ada2\") " Feb 17 17:38:00 crc kubenswrapper[4672]: I0217 17:38:00.938833 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90c403d1-b530-4715-b896-16938860ada2-utilities" (OuterVolumeSpecName: "utilities") pod "90c403d1-b530-4715-b896-16938860ada2" (UID: "90c403d1-b530-4715-b896-16938860ada2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:38:00 crc kubenswrapper[4672]: I0217 17:38:00.939210 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90c403d1-b530-4715-b896-16938860ada2-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:38:00 crc kubenswrapper[4672]: I0217 17:38:00.945374 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90c403d1-b530-4715-b896-16938860ada2-kube-api-access-xxp2b" (OuterVolumeSpecName: "kube-api-access-xxp2b") pod "90c403d1-b530-4715-b896-16938860ada2" (UID: "90c403d1-b530-4715-b896-16938860ada2"). InnerVolumeSpecName "kube-api-access-xxp2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:38:01 crc kubenswrapper[4672]: I0217 17:38:01.014057 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90c403d1-b530-4715-b896-16938860ada2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90c403d1-b530-4715-b896-16938860ada2" (UID: "90c403d1-b530-4715-b896-16938860ada2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:38:01 crc kubenswrapper[4672]: I0217 17:38:01.041469 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90c403d1-b530-4715-b896-16938860ada2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:38:01 crc kubenswrapper[4672]: I0217 17:38:01.041503 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxp2b\" (UniqueName: \"kubernetes.io/projected/90c403d1-b530-4715-b896-16938860ada2-kube-api-access-xxp2b\") on node \"crc\" DevicePath \"\"" Feb 17 17:38:01 crc kubenswrapper[4672]: I0217 17:38:01.293460 4672 generic.go:334] "Generic (PLEG): container finished" podID="90c403d1-b530-4715-b896-16938860ada2" containerID="7f8839c944cb79c2d4d78337f99ce06b0ff2f2b0767ba1710b9b307392cfa99f" exitCode=0 Feb 17 17:38:01 crc kubenswrapper[4672]: I0217 17:38:01.293638 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxfhs" event={"ID":"90c403d1-b530-4715-b896-16938860ada2","Type":"ContainerDied","Data":"7f8839c944cb79c2d4d78337f99ce06b0ff2f2b0767ba1710b9b307392cfa99f"} Feb 17 17:38:01 crc kubenswrapper[4672]: I0217 17:38:01.293928 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxfhs" event={"ID":"90c403d1-b530-4715-b896-16938860ada2","Type":"ContainerDied","Data":"7e7fcd45b32e56de9c883661aa983e4a2f3b9fccda250f08164d1a5eea249ead"} Feb 17 17:38:01 crc kubenswrapper[4672]: I0217 17:38:01.293965 4672 scope.go:117] "RemoveContainer" containerID="7f8839c944cb79c2d4d78337f99ce06b0ff2f2b0767ba1710b9b307392cfa99f" Feb 17 17:38:01 crc kubenswrapper[4672]: I0217 17:38:01.293721 4672 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vxfhs" Feb 17 17:38:01 crc kubenswrapper[4672]: I0217 17:38:01.319901 4672 scope.go:117] "RemoveContainer" containerID="e39214849b266fd75535473b1b94086d96e5ab0148af5d8a85eabdf948e88aef" Feb 17 17:38:01 crc kubenswrapper[4672]: I0217 17:38:01.330654 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vxfhs"] Feb 17 17:38:01 crc kubenswrapper[4672]: I0217 17:38:01.339491 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vxfhs"] Feb 17 17:38:01 crc kubenswrapper[4672]: I0217 17:38:01.350651 4672 scope.go:117] "RemoveContainer" containerID="b9e559f7154f8b96a81c2a6d57237353b88030f75c7aa4a335560c8ffb2b9c67" Feb 17 17:38:01 crc kubenswrapper[4672]: I0217 17:38:01.389644 4672 scope.go:117] "RemoveContainer" containerID="7f8839c944cb79c2d4d78337f99ce06b0ff2f2b0767ba1710b9b307392cfa99f" Feb 17 17:38:01 crc kubenswrapper[4672]: E0217 17:38:01.390130 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f8839c944cb79c2d4d78337f99ce06b0ff2f2b0767ba1710b9b307392cfa99f\": container with ID starting with 7f8839c944cb79c2d4d78337f99ce06b0ff2f2b0767ba1710b9b307392cfa99f not found: ID does not exist" containerID="7f8839c944cb79c2d4d78337f99ce06b0ff2f2b0767ba1710b9b307392cfa99f" Feb 17 17:38:01 crc kubenswrapper[4672]: I0217 17:38:01.390164 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f8839c944cb79c2d4d78337f99ce06b0ff2f2b0767ba1710b9b307392cfa99f"} err="failed to get container status \"7f8839c944cb79c2d4d78337f99ce06b0ff2f2b0767ba1710b9b307392cfa99f\": rpc error: code = NotFound desc = could not find container \"7f8839c944cb79c2d4d78337f99ce06b0ff2f2b0767ba1710b9b307392cfa99f\": container with ID starting with 7f8839c944cb79c2d4d78337f99ce06b0ff2f2b0767ba1710b9b307392cfa99f not 
found: ID does not exist" Feb 17 17:38:01 crc kubenswrapper[4672]: I0217 17:38:01.390185 4672 scope.go:117] "RemoveContainer" containerID="e39214849b266fd75535473b1b94086d96e5ab0148af5d8a85eabdf948e88aef" Feb 17 17:38:01 crc kubenswrapper[4672]: E0217 17:38:01.390487 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e39214849b266fd75535473b1b94086d96e5ab0148af5d8a85eabdf948e88aef\": container with ID starting with e39214849b266fd75535473b1b94086d96e5ab0148af5d8a85eabdf948e88aef not found: ID does not exist" containerID="e39214849b266fd75535473b1b94086d96e5ab0148af5d8a85eabdf948e88aef" Feb 17 17:38:01 crc kubenswrapper[4672]: I0217 17:38:01.390506 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e39214849b266fd75535473b1b94086d96e5ab0148af5d8a85eabdf948e88aef"} err="failed to get container status \"e39214849b266fd75535473b1b94086d96e5ab0148af5d8a85eabdf948e88aef\": rpc error: code = NotFound desc = could not find container \"e39214849b266fd75535473b1b94086d96e5ab0148af5d8a85eabdf948e88aef\": container with ID starting with e39214849b266fd75535473b1b94086d96e5ab0148af5d8a85eabdf948e88aef not found: ID does not exist" Feb 17 17:38:01 crc kubenswrapper[4672]: I0217 17:38:01.390522 4672 scope.go:117] "RemoveContainer" containerID="b9e559f7154f8b96a81c2a6d57237353b88030f75c7aa4a335560c8ffb2b9c67" Feb 17 17:38:01 crc kubenswrapper[4672]: E0217 17:38:01.390858 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9e559f7154f8b96a81c2a6d57237353b88030f75c7aa4a335560c8ffb2b9c67\": container with ID starting with b9e559f7154f8b96a81c2a6d57237353b88030f75c7aa4a335560c8ffb2b9c67 not found: ID does not exist" containerID="b9e559f7154f8b96a81c2a6d57237353b88030f75c7aa4a335560c8ffb2b9c67" Feb 17 17:38:01 crc kubenswrapper[4672]: I0217 17:38:01.390908 4672 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9e559f7154f8b96a81c2a6d57237353b88030f75c7aa4a335560c8ffb2b9c67"} err="failed to get container status \"b9e559f7154f8b96a81c2a6d57237353b88030f75c7aa4a335560c8ffb2b9c67\": rpc error: code = NotFound desc = could not find container \"b9e559f7154f8b96a81c2a6d57237353b88030f75c7aa4a335560c8ffb2b9c67\": container with ID starting with b9e559f7154f8b96a81c2a6d57237353b88030f75c7aa4a335560c8ffb2b9c67 not found: ID does not exist" Feb 17 17:38:01 crc kubenswrapper[4672]: I0217 17:38:01.956329 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90c403d1-b530-4715-b896-16938860ada2" path="/var/lib/kubelet/pods/90c403d1-b530-4715-b896-16938860ada2/volumes" Feb 17 17:38:03 crc kubenswrapper[4672]: E0217 17:38:03.948483 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:38:14 crc kubenswrapper[4672]: E0217 17:38:14.947419 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:38:15 crc kubenswrapper[4672]: E0217 17:38:15.946916 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:38:27 crc kubenswrapper[4672]: I0217 17:38:27.566405 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:38:27 crc kubenswrapper[4672]: I0217 17:38:27.567026 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:38:28 crc kubenswrapper[4672]: E0217 17:38:28.947188 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:38:29 crc kubenswrapper[4672]: E0217 17:38:29.947574 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:38:40 crc kubenswrapper[4672]: E0217 17:38:40.947166 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:38:43 crc kubenswrapper[4672]: E0217 17:38:43.947288 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:38:52 crc kubenswrapper[4672]: E0217 17:38:52.950750 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:38:57 crc kubenswrapper[4672]: I0217 17:38:57.565563 4672 patch_prober.go:28] interesting pod/machine-config-daemon-d6dhs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:38:57 crc kubenswrapper[4672]: I0217 17:38:57.566103 4672 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:38:57 crc kubenswrapper[4672]: I0217 17:38:57.566154 4672 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" Feb 17 17:38:57 crc kubenswrapper[4672]: I0217 17:38:57.567298 4672 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"84b26503293fd480153540a364dd1bb213df8906602f27c523499fd5a410b40b"} pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 17:38:57 crc kubenswrapper[4672]: I0217 17:38:57.567398 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerName="machine-config-daemon" containerID="cri-o://84b26503293fd480153540a364dd1bb213df8906602f27c523499fd5a410b40b" gracePeriod=600 Feb 17 17:38:57 crc kubenswrapper[4672]: E0217 17:38:57.691785 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:38:57 crc kubenswrapper[4672]: I0217 17:38:57.850446 4672 generic.go:334] "Generic (PLEG): container finished" podID="fa9cd2c6-74a5-4567-a141-be56c668e566" containerID="84b26503293fd480153540a364dd1bb213df8906602f27c523499fd5a410b40b" exitCode=0 Feb 17 17:38:57 crc kubenswrapper[4672]: I0217 17:38:57.850493 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" event={"ID":"fa9cd2c6-74a5-4567-a141-be56c668e566","Type":"ContainerDied","Data":"84b26503293fd480153540a364dd1bb213df8906602f27c523499fd5a410b40b"} Feb 17 17:38:57 crc kubenswrapper[4672]: I0217 17:38:57.850547 4672 scope.go:117] "RemoveContainer" containerID="8d597cc8ff492e2c5a82f2b6824b54ff748acbefe4d8679fd3078b7cfdc4aea5" Feb 17 17:38:57 crc kubenswrapper[4672]: I0217 17:38:57.851646 4672 
scope.go:117] "RemoveContainer" containerID="84b26503293fd480153540a364dd1bb213df8906602f27c523499fd5a410b40b" Feb 17 17:38:57 crc kubenswrapper[4672]: E0217 17:38:57.852259 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:38:58 crc kubenswrapper[4672]: E0217 17:38:58.947435 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:39:04 crc kubenswrapper[4672]: E0217 17:39:04.947467 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:39:09 crc kubenswrapper[4672]: I0217 17:39:09.945256 4672 scope.go:117] "RemoveContainer" containerID="84b26503293fd480153540a364dd1bb213df8906602f27c523499fd5a410b40b" Feb 17 17:39:09 crc kubenswrapper[4672]: E0217 17:39:09.945790 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:39:10 crc kubenswrapper[4672]: E0217 17:39:10.947596 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:39:15 crc kubenswrapper[4672]: E0217 17:39:15.946490 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:39:22 crc kubenswrapper[4672]: I0217 17:39:22.945337 4672 scope.go:117] "RemoveContainer" containerID="84b26503293fd480153540a364dd1bb213df8906602f27c523499fd5a410b40b" Feb 17 17:39:22 crc kubenswrapper[4672]: E0217 17:39:22.946120 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:39:23 crc kubenswrapper[4672]: E0217 17:39:23.946791 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" 
podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:39:29 crc kubenswrapper[4672]: E0217 17:39:29.947725 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:39:34 crc kubenswrapper[4672]: I0217 17:39:34.945674 4672 scope.go:117] "RemoveContainer" containerID="84b26503293fd480153540a364dd1bb213df8906602f27c523499fd5a410b40b" Feb 17 17:39:34 crc kubenswrapper[4672]: E0217 17:39:34.947497 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:39:34 crc kubenswrapper[4672]: I0217 17:39:34.947597 4672 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 17:39:35 crc kubenswrapper[4672]: E0217 17:39:35.076981 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 17:39:35 crc kubenswrapper[4672]: E0217 17:39:35.077060 4672 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 17:39:35 crc kubenswrapper[4672]: E0217 17:39:35.077226 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},Volume
Mount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nq9ps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-qrhj8_openstack(dc5471f5-2491-4841-be45-09c8f14b35c0): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 17 17:39:35 crc kubenswrapper[4672]: E0217 17:39:35.078595 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:39:44 crc kubenswrapper[4672]: E0217 17:39:44.033992 4672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:39:44 crc kubenswrapper[4672]: E0217 17:39:44.034526 4672 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:39:44 crc kubenswrapper[4672]: E0217 17:39:44.034692 4672 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66h7h644h64ch5f8h565hfch5dh56chfdh8hfdh5b5h567h6dh665h557h74h665hcbh96h659h554h589h57fh5d9h55h564hcfh5dhffhfdq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tx4bs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(9e58ce9b-ddd5-42bb-8e07-08a22c8871a5): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:39:44 crc kubenswrapper[4672]: E0217 17:39:44.035862 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:39:47 crc kubenswrapper[4672]: E0217 17:39:47.949440 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:39:49 crc kubenswrapper[4672]: I0217 17:39:49.946197 4672 scope.go:117] "RemoveContainer" containerID="84b26503293fd480153540a364dd1bb213df8906602f27c523499fd5a410b40b" Feb 17 17:39:49 crc kubenswrapper[4672]: E0217 17:39:49.946843 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:39:55 crc kubenswrapper[4672]: E0217 17:39:55.948932 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:40:00 crc kubenswrapper[4672]: E0217 17:40:00.948052 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" 
podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:40:03 crc kubenswrapper[4672]: I0217 17:40:03.946148 4672 scope.go:117] "RemoveContainer" containerID="84b26503293fd480153540a364dd1bb213df8906602f27c523499fd5a410b40b" Feb 17 17:40:03 crc kubenswrapper[4672]: E0217 17:40:03.946808 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:40:07 crc kubenswrapper[4672]: E0217 17:40:07.947820 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:40:15 crc kubenswrapper[4672]: E0217 17:40:15.948278 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:40:16 crc kubenswrapper[4672]: I0217 17:40:16.944984 4672 scope.go:117] "RemoveContainer" containerID="84b26503293fd480153540a364dd1bb213df8906602f27c523499fd5a410b40b" Feb 17 17:40:16 crc kubenswrapper[4672]: E0217 17:40:16.945811 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:40:21 crc kubenswrapper[4672]: E0217 17:40:21.955525 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:40:27 crc kubenswrapper[4672]: E0217 17:40:27.949622 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:40:29 crc kubenswrapper[4672]: I0217 17:40:29.945618 4672 scope.go:117] "RemoveContainer" containerID="84b26503293fd480153540a364dd1bb213df8906602f27c523499fd5a410b40b" Feb 17 17:40:29 crc kubenswrapper[4672]: E0217 17:40:29.946268 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:40:36 crc kubenswrapper[4672]: E0217 17:40:36.946632 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:40:38 crc kubenswrapper[4672]: E0217 17:40:38.947356 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:40:42 crc kubenswrapper[4672]: I0217 17:40:42.945673 4672 scope.go:117] "RemoveContainer" containerID="84b26503293fd480153540a364dd1bb213df8906602f27c523499fd5a410b40b" Feb 17 17:40:42 crc kubenswrapper[4672]: E0217 17:40:42.946549 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:40:46 crc kubenswrapper[4672]: I0217 17:40:46.641836 4672 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vcth2"] Feb 17 17:40:46 crc kubenswrapper[4672]: E0217 17:40:46.642944 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90c403d1-b530-4715-b896-16938860ada2" containerName="extract-content" Feb 17 17:40:46 crc kubenswrapper[4672]: I0217 17:40:46.642961 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="90c403d1-b530-4715-b896-16938860ada2" containerName="extract-content" Feb 17 17:40:46 crc kubenswrapper[4672]: E0217 17:40:46.642980 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90c403d1-b530-4715-b896-16938860ada2" 
containerName="registry-server" Feb 17 17:40:46 crc kubenswrapper[4672]: I0217 17:40:46.642987 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="90c403d1-b530-4715-b896-16938860ada2" containerName="registry-server" Feb 17 17:40:46 crc kubenswrapper[4672]: E0217 17:40:46.643012 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="175d0cc7-3276-488c-a3d2-bea1dda94caf" containerName="extract-content" Feb 17 17:40:46 crc kubenswrapper[4672]: I0217 17:40:46.643017 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="175d0cc7-3276-488c-a3d2-bea1dda94caf" containerName="extract-content" Feb 17 17:40:46 crc kubenswrapper[4672]: E0217 17:40:46.643029 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="175d0cc7-3276-488c-a3d2-bea1dda94caf" containerName="extract-utilities" Feb 17 17:40:46 crc kubenswrapper[4672]: I0217 17:40:46.643035 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="175d0cc7-3276-488c-a3d2-bea1dda94caf" containerName="extract-utilities" Feb 17 17:40:46 crc kubenswrapper[4672]: E0217 17:40:46.643054 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90c403d1-b530-4715-b896-16938860ada2" containerName="extract-utilities" Feb 17 17:40:46 crc kubenswrapper[4672]: I0217 17:40:46.643061 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="90c403d1-b530-4715-b896-16938860ada2" containerName="extract-utilities" Feb 17 17:40:46 crc kubenswrapper[4672]: E0217 17:40:46.643075 4672 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="175d0cc7-3276-488c-a3d2-bea1dda94caf" containerName="registry-server" Feb 17 17:40:46 crc kubenswrapper[4672]: I0217 17:40:46.643081 4672 state_mem.go:107] "Deleted CPUSet assignment" podUID="175d0cc7-3276-488c-a3d2-bea1dda94caf" containerName="registry-server" Feb 17 17:40:46 crc kubenswrapper[4672]: I0217 17:40:46.643290 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="175d0cc7-3276-488c-a3d2-bea1dda94caf" 
containerName="registry-server" Feb 17 17:40:46 crc kubenswrapper[4672]: I0217 17:40:46.643305 4672 memory_manager.go:354] "RemoveStaleState removing state" podUID="90c403d1-b530-4715-b896-16938860ada2" containerName="registry-server" Feb 17 17:40:46 crc kubenswrapper[4672]: I0217 17:40:46.645026 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vcth2" Feb 17 17:40:46 crc kubenswrapper[4672]: I0217 17:40:46.657054 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vcth2"] Feb 17 17:40:46 crc kubenswrapper[4672]: I0217 17:40:46.771171 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/722890b1-cebb-4946-ac06-75501774f138-utilities\") pod \"redhat-operators-vcth2\" (UID: \"722890b1-cebb-4946-ac06-75501774f138\") " pod="openshift-marketplace/redhat-operators-vcth2" Feb 17 17:40:46 crc kubenswrapper[4672]: I0217 17:40:46.771575 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/722890b1-cebb-4946-ac06-75501774f138-catalog-content\") pod \"redhat-operators-vcth2\" (UID: \"722890b1-cebb-4946-ac06-75501774f138\") " pod="openshift-marketplace/redhat-operators-vcth2" Feb 17 17:40:46 crc kubenswrapper[4672]: I0217 17:40:46.771741 4672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kn5g\" (UniqueName: \"kubernetes.io/projected/722890b1-cebb-4946-ac06-75501774f138-kube-api-access-8kn5g\") pod \"redhat-operators-vcth2\" (UID: \"722890b1-cebb-4946-ac06-75501774f138\") " pod="openshift-marketplace/redhat-operators-vcth2" Feb 17 17:40:46 crc kubenswrapper[4672]: I0217 17:40:46.874154 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/722890b1-cebb-4946-ac06-75501774f138-utilities\") pod \"redhat-operators-vcth2\" (UID: \"722890b1-cebb-4946-ac06-75501774f138\") " pod="openshift-marketplace/redhat-operators-vcth2" Feb 17 17:40:46 crc kubenswrapper[4672]: I0217 17:40:46.874242 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/722890b1-cebb-4946-ac06-75501774f138-catalog-content\") pod \"redhat-operators-vcth2\" (UID: \"722890b1-cebb-4946-ac06-75501774f138\") " pod="openshift-marketplace/redhat-operators-vcth2" Feb 17 17:40:46 crc kubenswrapper[4672]: I0217 17:40:46.874297 4672 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kn5g\" (UniqueName: \"kubernetes.io/projected/722890b1-cebb-4946-ac06-75501774f138-kube-api-access-8kn5g\") pod \"redhat-operators-vcth2\" (UID: \"722890b1-cebb-4946-ac06-75501774f138\") " pod="openshift-marketplace/redhat-operators-vcth2" Feb 17 17:40:46 crc kubenswrapper[4672]: I0217 17:40:46.874685 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/722890b1-cebb-4946-ac06-75501774f138-utilities\") pod \"redhat-operators-vcth2\" (UID: \"722890b1-cebb-4946-ac06-75501774f138\") " pod="openshift-marketplace/redhat-operators-vcth2" Feb 17 17:40:46 crc kubenswrapper[4672]: I0217 17:40:46.874759 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/722890b1-cebb-4946-ac06-75501774f138-catalog-content\") pod \"redhat-operators-vcth2\" (UID: \"722890b1-cebb-4946-ac06-75501774f138\") " pod="openshift-marketplace/redhat-operators-vcth2" Feb 17 17:40:46 crc kubenswrapper[4672]: I0217 17:40:46.895390 4672 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kn5g\" (UniqueName: 
\"kubernetes.io/projected/722890b1-cebb-4946-ac06-75501774f138-kube-api-access-8kn5g\") pod \"redhat-operators-vcth2\" (UID: \"722890b1-cebb-4946-ac06-75501774f138\") " pod="openshift-marketplace/redhat-operators-vcth2" Feb 17 17:40:46 crc kubenswrapper[4672]: I0217 17:40:46.977125 4672 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vcth2" Feb 17 17:40:47 crc kubenswrapper[4672]: I0217 17:40:47.476414 4672 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vcth2"] Feb 17 17:40:47 crc kubenswrapper[4672]: I0217 17:40:47.995054 4672 generic.go:334] "Generic (PLEG): container finished" podID="722890b1-cebb-4946-ac06-75501774f138" containerID="bbf4624f2eb2818b88d2b37aae1d33b13d1e2ea2c33f71a2d1c4f48e38e152d7" exitCode=0 Feb 17 17:40:47 crc kubenswrapper[4672]: I0217 17:40:47.995095 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vcth2" event={"ID":"722890b1-cebb-4946-ac06-75501774f138","Type":"ContainerDied","Data":"bbf4624f2eb2818b88d2b37aae1d33b13d1e2ea2c33f71a2d1c4f48e38e152d7"} Feb 17 17:40:47 crc kubenswrapper[4672]: I0217 17:40:47.995118 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vcth2" event={"ID":"722890b1-cebb-4946-ac06-75501774f138","Type":"ContainerStarted","Data":"8929cb90e90c34f2adfd31ef56d97133579fd892987be081b5f5de32befcdbe8"} Feb 17 17:40:49 crc kubenswrapper[4672]: I0217 17:40:49.007924 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vcth2" event={"ID":"722890b1-cebb-4946-ac06-75501774f138","Type":"ContainerStarted","Data":"61cfc36a08613f81d58c0b603ef1f60691a5517c9edbda90c8f3c1d1c4751c3d"} Feb 17 17:40:50 crc kubenswrapper[4672]: E0217 17:40:50.946956 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5" Feb 17 17:40:51 crc kubenswrapper[4672]: E0217 17:40:51.961911 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0" Feb 17 17:40:55 crc kubenswrapper[4672]: I0217 17:40:55.064912 4672 generic.go:334] "Generic (PLEG): container finished" podID="722890b1-cebb-4946-ac06-75501774f138" containerID="61cfc36a08613f81d58c0b603ef1f60691a5517c9edbda90c8f3c1d1c4751c3d" exitCode=0 Feb 17 17:40:55 crc kubenswrapper[4672]: I0217 17:40:55.065062 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vcth2" event={"ID":"722890b1-cebb-4946-ac06-75501774f138","Type":"ContainerDied","Data":"61cfc36a08613f81d58c0b603ef1f60691a5517c9edbda90c8f3c1d1c4751c3d"} Feb 17 17:40:56 crc kubenswrapper[4672]: I0217 17:40:56.079582 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vcth2" event={"ID":"722890b1-cebb-4946-ac06-75501774f138","Type":"ContainerStarted","Data":"06fe7e14516c04fa0e291cca79a98618880fb8a7c63873db17ebd1ffd596e8b0"} Feb 17 17:40:56 crc kubenswrapper[4672]: I0217 17:40:56.106295 4672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vcth2" podStartSLOduration=2.6190872990000003 podStartE2EDuration="10.106278802s" podCreationTimestamp="2026-02-17 17:40:46 +0000 UTC" firstStartedPulling="2026-02-17 17:40:47.997132007 +0000 UTC m=+5856.751220739" lastFinishedPulling="2026-02-17 17:40:55.48432347 +0000 UTC m=+5864.238412242" 
observedRunningTime="2026-02-17 17:40:56.098624969 +0000 UTC m=+5864.852713701" watchObservedRunningTime="2026-02-17 17:40:56.106278802 +0000 UTC m=+5864.860367534" Feb 17 17:40:56 crc kubenswrapper[4672]: I0217 17:40:56.945705 4672 scope.go:117] "RemoveContainer" containerID="84b26503293fd480153540a364dd1bb213df8906602f27c523499fd5a410b40b" Feb 17 17:40:56 crc kubenswrapper[4672]: E0217 17:40:56.946323 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566" Feb 17 17:40:56 crc kubenswrapper[4672]: I0217 17:40:56.978146 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vcth2" Feb 17 17:40:56 crc kubenswrapper[4672]: I0217 17:40:56.978332 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vcth2" Feb 17 17:40:58 crc kubenswrapper[4672]: I0217 17:40:58.025419 4672 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vcth2" podUID="722890b1-cebb-4946-ac06-75501774f138" containerName="registry-server" probeResult="failure" output=< Feb 17 17:40:58 crc kubenswrapper[4672]: timeout: failed to connect service ":50051" within 1s Feb 17 17:40:58 crc kubenswrapper[4672]: > Feb 17 17:41:04 crc kubenswrapper[4672]: E0217 17:41:04.946965 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" 
podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:41:05 crc kubenswrapper[4672]: E0217 17:41:05.947096 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:41:07 crc kubenswrapper[4672]: I0217 17:41:07.024857 4672 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vcth2"
Feb 17 17:41:07 crc kubenswrapper[4672]: I0217 17:41:07.072450 4672 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vcth2"
Feb 17 17:41:07 crc kubenswrapper[4672]: I0217 17:41:07.260583 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vcth2"]
Feb 17 17:41:08 crc kubenswrapper[4672]: I0217 17:41:08.198964 4672 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vcth2" podUID="722890b1-cebb-4946-ac06-75501774f138" containerName="registry-server" containerID="cri-o://06fe7e14516c04fa0e291cca79a98618880fb8a7c63873db17ebd1ffd596e8b0" gracePeriod=2
Feb 17 17:41:08 crc kubenswrapper[4672]: I0217 17:41:08.848398 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vcth2"
Feb 17 17:41:08 crc kubenswrapper[4672]: I0217 17:41:08.956797 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/722890b1-cebb-4946-ac06-75501774f138-catalog-content\") pod \"722890b1-cebb-4946-ac06-75501774f138\" (UID: \"722890b1-cebb-4946-ac06-75501774f138\") "
Feb 17 17:41:08 crc kubenswrapper[4672]: I0217 17:41:08.956893 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/722890b1-cebb-4946-ac06-75501774f138-utilities\") pod \"722890b1-cebb-4946-ac06-75501774f138\" (UID: \"722890b1-cebb-4946-ac06-75501774f138\") "
Feb 17 17:41:08 crc kubenswrapper[4672]: I0217 17:41:08.957046 4672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kn5g\" (UniqueName: \"kubernetes.io/projected/722890b1-cebb-4946-ac06-75501774f138-kube-api-access-8kn5g\") pod \"722890b1-cebb-4946-ac06-75501774f138\" (UID: \"722890b1-cebb-4946-ac06-75501774f138\") "
Feb 17 17:41:08 crc kubenswrapper[4672]: I0217 17:41:08.957669 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/722890b1-cebb-4946-ac06-75501774f138-utilities" (OuterVolumeSpecName: "utilities") pod "722890b1-cebb-4946-ac06-75501774f138" (UID: "722890b1-cebb-4946-ac06-75501774f138"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 17:41:08 crc kubenswrapper[4672]: I0217 17:41:08.966739 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/722890b1-cebb-4946-ac06-75501774f138-kube-api-access-8kn5g" (OuterVolumeSpecName: "kube-api-access-8kn5g") pod "722890b1-cebb-4946-ac06-75501774f138" (UID: "722890b1-cebb-4946-ac06-75501774f138"). InnerVolumeSpecName "kube-api-access-8kn5g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 17:41:09 crc kubenswrapper[4672]: I0217 17:41:09.059896 4672 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/722890b1-cebb-4946-ac06-75501774f138-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 17:41:09 crc kubenswrapper[4672]: I0217 17:41:09.059937 4672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kn5g\" (UniqueName: \"kubernetes.io/projected/722890b1-cebb-4946-ac06-75501774f138-kube-api-access-8kn5g\") on node \"crc\" DevicePath \"\""
Feb 17 17:41:09 crc kubenswrapper[4672]: I0217 17:41:09.092855 4672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/722890b1-cebb-4946-ac06-75501774f138-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "722890b1-cebb-4946-ac06-75501774f138" (UID: "722890b1-cebb-4946-ac06-75501774f138"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 17:41:09 crc kubenswrapper[4672]: I0217 17:41:09.161728 4672 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/722890b1-cebb-4946-ac06-75501774f138-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 17:41:09 crc kubenswrapper[4672]: I0217 17:41:09.210104 4672 generic.go:334] "Generic (PLEG): container finished" podID="722890b1-cebb-4946-ac06-75501774f138" containerID="06fe7e14516c04fa0e291cca79a98618880fb8a7c63873db17ebd1ffd596e8b0" exitCode=0
Feb 17 17:41:09 crc kubenswrapper[4672]: I0217 17:41:09.210160 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vcth2" event={"ID":"722890b1-cebb-4946-ac06-75501774f138","Type":"ContainerDied","Data":"06fe7e14516c04fa0e291cca79a98618880fb8a7c63873db17ebd1ffd596e8b0"}
Feb 17 17:41:09 crc kubenswrapper[4672]: I0217 17:41:09.210197 4672 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vcth2" event={"ID":"722890b1-cebb-4946-ac06-75501774f138","Type":"ContainerDied","Data":"8929cb90e90c34f2adfd31ef56d97133579fd892987be081b5f5de32befcdbe8"}
Feb 17 17:41:09 crc kubenswrapper[4672]: I0217 17:41:09.210223 4672 scope.go:117] "RemoveContainer" containerID="06fe7e14516c04fa0e291cca79a98618880fb8a7c63873db17ebd1ffd596e8b0"
Feb 17 17:41:09 crc kubenswrapper[4672]: I0217 17:41:09.210427 4672 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vcth2"
Feb 17 17:41:09 crc kubenswrapper[4672]: I0217 17:41:09.230159 4672 scope.go:117] "RemoveContainer" containerID="61cfc36a08613f81d58c0b603ef1f60691a5517c9edbda90c8f3c1d1c4751c3d"
Feb 17 17:41:09 crc kubenswrapper[4672]: I0217 17:41:09.272048 4672 scope.go:117] "RemoveContainer" containerID="bbf4624f2eb2818b88d2b37aae1d33b13d1e2ea2c33f71a2d1c4f48e38e152d7"
Feb 17 17:41:09 crc kubenswrapper[4672]: I0217 17:41:09.278093 4672 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vcth2"]
Feb 17 17:41:09 crc kubenswrapper[4672]: I0217 17:41:09.288456 4672 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vcth2"]
Feb 17 17:41:09 crc kubenswrapper[4672]: I0217 17:41:09.343769 4672 scope.go:117] "RemoveContainer" containerID="06fe7e14516c04fa0e291cca79a98618880fb8a7c63873db17ebd1ffd596e8b0"
Feb 17 17:41:09 crc kubenswrapper[4672]: E0217 17:41:09.344232 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06fe7e14516c04fa0e291cca79a98618880fb8a7c63873db17ebd1ffd596e8b0\": container with ID starting with 06fe7e14516c04fa0e291cca79a98618880fb8a7c63873db17ebd1ffd596e8b0 not found: ID does not exist" containerID="06fe7e14516c04fa0e291cca79a98618880fb8a7c63873db17ebd1ffd596e8b0"
Feb 17 17:41:09 crc kubenswrapper[4672]: I0217 17:41:09.344283 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06fe7e14516c04fa0e291cca79a98618880fb8a7c63873db17ebd1ffd596e8b0"} err="failed to get container status \"06fe7e14516c04fa0e291cca79a98618880fb8a7c63873db17ebd1ffd596e8b0\": rpc error: code = NotFound desc = could not find container \"06fe7e14516c04fa0e291cca79a98618880fb8a7c63873db17ebd1ffd596e8b0\": container with ID starting with 06fe7e14516c04fa0e291cca79a98618880fb8a7c63873db17ebd1ffd596e8b0 not found: ID does not exist"
Feb 17 17:41:09 crc kubenswrapper[4672]: I0217 17:41:09.344316 4672 scope.go:117] "RemoveContainer" containerID="61cfc36a08613f81d58c0b603ef1f60691a5517c9edbda90c8f3c1d1c4751c3d"
Feb 17 17:41:09 crc kubenswrapper[4672]: E0217 17:41:09.344835 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61cfc36a08613f81d58c0b603ef1f60691a5517c9edbda90c8f3c1d1c4751c3d\": container with ID starting with 61cfc36a08613f81d58c0b603ef1f60691a5517c9edbda90c8f3c1d1c4751c3d not found: ID does not exist" containerID="61cfc36a08613f81d58c0b603ef1f60691a5517c9edbda90c8f3c1d1c4751c3d"
Feb 17 17:41:09 crc kubenswrapper[4672]: I0217 17:41:09.345784 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61cfc36a08613f81d58c0b603ef1f60691a5517c9edbda90c8f3c1d1c4751c3d"} err="failed to get container status \"61cfc36a08613f81d58c0b603ef1f60691a5517c9edbda90c8f3c1d1c4751c3d\": rpc error: code = NotFound desc = could not find container \"61cfc36a08613f81d58c0b603ef1f60691a5517c9edbda90c8f3c1d1c4751c3d\": container with ID starting with 61cfc36a08613f81d58c0b603ef1f60691a5517c9edbda90c8f3c1d1c4751c3d not found: ID does not exist"
Feb 17 17:41:09 crc kubenswrapper[4672]: I0217 17:41:09.345815 4672 scope.go:117] "RemoveContainer" containerID="bbf4624f2eb2818b88d2b37aae1d33b13d1e2ea2c33f71a2d1c4f48e38e152d7"
Feb 17 17:41:09 crc kubenswrapper[4672]: E0217 17:41:09.346147 4672 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbf4624f2eb2818b88d2b37aae1d33b13d1e2ea2c33f71a2d1c4f48e38e152d7\": container with ID starting with bbf4624f2eb2818b88d2b37aae1d33b13d1e2ea2c33f71a2d1c4f48e38e152d7 not found: ID does not exist" containerID="bbf4624f2eb2818b88d2b37aae1d33b13d1e2ea2c33f71a2d1c4f48e38e152d7"
Feb 17 17:41:09 crc kubenswrapper[4672]: I0217 17:41:09.346178 4672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbf4624f2eb2818b88d2b37aae1d33b13d1e2ea2c33f71a2d1c4f48e38e152d7"} err="failed to get container status \"bbf4624f2eb2818b88d2b37aae1d33b13d1e2ea2c33f71a2d1c4f48e38e152d7\": rpc error: code = NotFound desc = could not find container \"bbf4624f2eb2818b88d2b37aae1d33b13d1e2ea2c33f71a2d1c4f48e38e152d7\": container with ID starting with bbf4624f2eb2818b88d2b37aae1d33b13d1e2ea2c33f71a2d1c4f48e38e152d7 not found: ID does not exist"
Feb 17 17:41:09 crc kubenswrapper[4672]: I0217 17:41:09.945319 4672 scope.go:117] "RemoveContainer" containerID="84b26503293fd480153540a364dd1bb213df8906602f27c523499fd5a410b40b"
Feb 17 17:41:09 crc kubenswrapper[4672]: E0217 17:41:09.945870 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 17:41:09 crc kubenswrapper[4672]: I0217 17:41:09.962842 4672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="722890b1-cebb-4946-ac06-75501774f138" path="/var/lib/kubelet/pods/722890b1-cebb-4946-ac06-75501774f138/volumes"
Feb 17 17:41:19 crc kubenswrapper[4672]: E0217 17:41:19.947244 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:41:20 crc kubenswrapper[4672]: E0217 17:41:20.946866 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:41:22 crc kubenswrapper[4672]: I0217 17:41:22.945382 4672 scope.go:117] "RemoveContainer" containerID="84b26503293fd480153540a364dd1bb213df8906602f27c523499fd5a410b40b"
Feb 17 17:41:22 crc kubenswrapper[4672]: E0217 17:41:22.946013 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 17:41:30 crc kubenswrapper[4672]: E0217 17:41:30.947970 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:41:33 crc kubenswrapper[4672]: I0217 17:41:33.944472 4672 scope.go:117] "RemoveContainer" containerID="84b26503293fd480153540a364dd1bb213df8906602f27c523499fd5a410b40b"
Feb 17 17:41:33 crc kubenswrapper[4672]: E0217 17:41:33.945002 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 17:41:35 crc kubenswrapper[4672]: E0217 17:41:35.949137 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"
Feb 17 17:41:45 crc kubenswrapper[4672]: E0217 17:41:45.952056 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-qrhj8" podUID="dc5471f5-2491-4841-be45-09c8f14b35c0"
Feb 17 17:41:48 crc kubenswrapper[4672]: I0217 17:41:48.944820 4672 scope.go:117] "RemoveContainer" containerID="84b26503293fd480153540a364dd1bb213df8906602f27c523499fd5a410b40b"
Feb 17 17:41:48 crc kubenswrapper[4672]: E0217 17:41:48.945665 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-d6dhs_openshift-machine-config-operator(fa9cd2c6-74a5-4567-a141-be56c668e566)\"" pod="openshift-machine-config-operator/machine-config-daemon-d6dhs" podUID="fa9cd2c6-74a5-4567-a141-be56c668e566"
Feb 17 17:41:50 crc kubenswrapper[4672]: E0217 17:41:50.947613 4672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="9e58ce9b-ddd5-42bb-8e07-08a22c8871a5"